Commit 2b90311

fix(logging): use structured log
1 parent ad6f45f commit 2b90311

11 files changed: +92 −38 lines

pyproject.toml — 2 additions, 1 deletion

@@ -13,6 +13,7 @@ dependencies = [
     "pandas>=2.2.3",
     "python-dotenv>=1.0.1",
     "requests>=2.32.3",
+    "structlog>=25.1.0",
 ]

 [dependency-groups]
@@ -33,7 +34,7 @@ target-version = "py312"

 [tool.ruff.lint]
 select = ["ALL"]
-ignore = ["D203", "D212", "COM812", "D", "S105", "ANN401", "T201", "ISC003"]
+ignore = ["D203", "D212", "COM812", "D", "S105", "ANN401", "ISC003"]

 [tool.ruff.lint.extend-per-file-ignores]
 "tests/**/*.py" = ["S101", "ARG"]
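Two things change here: structlog becomes a dependency, and T201 (Ruff's flake8-print rule, "print found") is removed from the ignore list, so any remaining print() call is now flagged unless explicitly marked # noqa: T201. The payoff of the migration is that log records carry addressable fields rather than prose. A stdlib-only toy sketch of that idea (not structlog's actual renderer; the model id is illustrative):

```python
import json


def render_event(event: str, **fields: object) -> str:
    """Toy stand-in for a structured-log renderer: emit a fixed event name
    plus machine-parseable key-value fields instead of an interpolated string."""
    return json.dumps({"event": event, **fields}, default=str)


# print-style message: the model id is baked into prose, so downstream
# tooling would need a regex to get it back out.
legacy = f"Sending initial prompt to {'qwen/qwen-vl-plus:free'}."

# structured event: the same information, with each field still addressable.
structured = render_event("sending initial prompt", model_id="qwen/qwen-vl-plus:free")
# structured == '{"event": "sending initial prompt", "model_id": "qwen/qwen-vl-plus:free"}'
```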

src/data/final_consensus.json — 4 additions, 4 deletions

@@ -1,8 +1,8 @@
 {
-    "aggregated_response": "Here's a combined response based on the majority features of the previous answers:\n\n**Why a freshly painted room may feel warmer than it actually is**\n\nWhen you enter a freshly painted room, you may notice that it feels warmer than it actually is. This warming effect is due to several factors:\n\n1. **Volatile Organic Compounds (VOCs)**: Paints and coatings often contain VOCs, which are chemicals that evaporate quickly at room temperature. These VOCs can contribute to the warming effect by releasing heat as they transition from a liquid to a gas state.\n2. **Moisture**: Paints can contain moisture, which is released as the paint dries. As the moisture warms up, it condenses into water vapor, releasing latent heat into the air.\n3. **Thermal Mass**: Painted surfaces can have a higher thermal mass than unpainted surfaces, absorbing and releasing heat more slowly. As the paint dries, it warms up the room by absorbing",
+    "aggregated_response": "Here's a combined response based on the majority of the models:\n\nHeat is emitted in a freshly painted room due to two primary reasons: evaporation of solvents and release of volatile organic compounds (VOCs) from the paint, and the exothermic chemical reactions that occur during the drying process.\n\nWhen paint is applied to a surface, it contains solvents and other chemicals that are volatile, meaning they can evaporate at room temperature. As the paint dries, these solvents and VOCs evaporate into the air, carrying heat with them. This process of evaporation is endothermic, meaning it absorbs heat from the surrounding environment, resulting in a cooler temperature in the room.\n\nAdditionally, the paint itself also emits heat as it dries due to the exothermic chemical reactions that occur during the drying process. These reactions involve the breakdown of the paint's components, releasing energy in the form of heat. This heat is then distributed throughout the room, contributing to the overall warmth",
     "responses": {
-        "qwen/qwen-vl-plus:free": "When you enter a freshly painted room, you may notice that it feels warmer than it actually is. This warming effect is due to several factors:\n\n1. **Volatile Organic Compounds (VOCs)**: Paints and coatings often contain VOCs, which are chemicals that evaporate quickly at room temperature. These VOCs can contribute to the warming effect.\n2. **Moisture**: Paints can contain moisture, either from the paint itself or from the air in the room. As the paint dries, it releases moisture into the air, which can be cooled as it condenses, releasing heat.\n3. **Thermal Mass**: Painted surfaces can have a higher thermal mass than unpainted surfaces, meaning they can absorb and release heat more slowly. As the paint dries, it warms up the room by absorbing and storing heat.\n4. **Evaporation**: The process of paint drying releases heat as the solvent in the paint evaporates. This heat contributes to the",
-        "sophosympatheia/rogue-rose-103b-v0.2:free": " When you enter a freshly painted room, it may feel warmer than it actually is due to several factors.\n\n1. **Volatile Organic Compounds (VOCs)**: Paints and coatings often contain VOCs, which are chemicals that evaporate quickly at room temperature. As these VOCs evaporate, they release heat, contributing to the warming effect in the room.\n2. **Moisture**: Paints can contain moisture, either from the paint itself or from the air in the room. As the paint dries, it releases moisture into the air, which can be cooled as it condenses, releasing heat.\n3. **Thermal Mass**: Painted surfaces can have a higher thermal mass than unpainted surfaces, meaning they can absorb and release heat more slowly. As the paint dries, it warms up",
-        "meta-llama/llama-3.2-3b-instruct:free": "When you enter a freshly painted room, heat is emitted due to several factors:\n\n1. **Volatile Organic Compounds (VOCs)**: Paints and coatings often contain VOCs, which evaporate quickly at room temperature and release heat as they transition from a liquid to a gas state.\n2. **Moisture**: Paints can contain moisture, which is released as the paint dries. As the moisture warms up, it condenses into water vapor, releasing latent heat into the air.\n3. **Thermal Mass**: Painted surfaces can have a higher thermal mass than unpainted surfaces, absorbing and releasing heat more slowly. As the paint dries, it warms up the room by absorbing and storing heat.\n4. **Evaporation**: As the paint dries, the solvents and other volatile components evaporate, releasing heat into the air.\n\nThese factors combined contribute to the warming effect you may experience when entering a freshly painted room."
+        "qwen/qwen-vl-plus:free": "Heat is emitted in a freshly painted room due to the evaporation of solvents and volatile organic compounds (VOCs) from the paint, which absorbs heat from the surrounding environment, and the chemical reactions that occur during the drying process, which release energy in the form of heat.",
+        "sophosympatheia/rogue-rose-103b-v0.2:free": " Heat is emitted in a freshly painted room due to two primary reasons: evaporation of solvents and release of volatile organic compounds (VOCs) from the paint, and the exothermic chemical reactions that occur during the drying process.\n\n1. Evaporation of solvents and VOCs: When paint is applied to a surface, it contains solvents and other chemicals that are volatile, meaning they can evaporate at room temperature. As the paint dries, these solvents and VOCs evaporate into the air, carrying heat with them. This process of evaporation is endothermic, meaning it absorbs heat from the surrounding environment, resulting in a cooler temperature in the room.\n2. Exothermic chemical reactions: The paint itself also emits heat as it dries due to the exothermic chemical reactions that occur during",
+        "meta-llama/llama-3.2-3b-instruct:free": "Heat is emitted in a freshly painted room because the paint contains volatile organic compounds (VOCs) that evaporate as the paint dries, absorbing heat from the surrounding environment, and the chemical reactions that occur during drying release energy in the form of heat."
     }
 }

src/flare_ai_consensus/consensus/consensus.py — 7 additions, 4 deletions

@@ -1,9 +1,13 @@
 import asyncio

+import structlog
+
 from flare_ai_consensus.consensus.config import ConsensusConfig, ModelConfig
 from flare_ai_consensus.router.client import AsyncOpenRouterClient
 from flare_ai_consensus.utils.parser import parse_chat_response

+logger = structlog.get_logger(__name__)
+

 def build_improvement_conversation(
     consensus_config: ConsensusConfig, aggregated_response: str
@@ -50,13 +54,13 @@ async def get_response_for_model(
     if aggregated_response is None:
         # Use initial prompt for the first round.
         conversation = consensus_config.initial_prompt
-        print(f"Sending initial prompt to {model.model_id}.")
+        logger.info("sending initial prompt", model_id=model.model_id)
     else:
         # Build the improvement conversation.
         conversation = build_improvement_conversation(
             consensus_config, aggregated_response
         )
-        print(f"Sending improvement prompt to {model.model_id}.")
+        logger.info("sending improvement prompt", model_id=model.model_id)

     payload = {
         "model": model.model_id,
@@ -66,8 +70,7 @@ async def get_response_for_model(
     }
     response = await client.send_chat_completion(payload)
     text = parse_chat_response(response)
-    print(f"{model.model_id} has provided a new response.")
-
+    logger.info("new response", model_id=model.model_id, response=text)
     return model.model_id, text
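With each model interaction now emitted as an event name plus key-value fields (model_id, response), records become queryable once rendered as JSON lines. A stdlib-only sketch of that payoff; the log contents below are hypothetical, shaped like what a JSON renderer would produce for the calls in this file:

```python
import json

# Hypothetical JSON-lines log output (contents illustrative).
log_lines = [
    '{"event": "sending initial prompt", "model_id": "qwen/qwen-vl-plus:free"}',
    '{"event": "new response", "model_id": "qwen/qwen-vl-plus:free", "response": "..."}',
    '{"event": "new response", "model_id": "meta-llama/llama-3.2-3b-instruct:free", "response": "..."}',
]


def events_for_model(lines: list[str], model_id: str) -> list[dict]:
    """Filter structured records on a logged field -- the payoff of emitting
    key-value pairs instead of interpolated message strings."""
    records = (json.loads(line) for line in lines)
    return [r for r in records if r.get("model_id") == model_id]


matches = events_for_model(log_lines, "qwen/qwen-vl-plus:free")
# -> the initial-prompt record plus one "new response" record
```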

src/flare_ai_consensus/main.py — 7 additions, 3 deletions

@@ -1,5 +1,7 @@
 import asyncio

+import structlog
+
 from flare_ai_consensus.config import config
 from flare_ai_consensus.consensus import aggregator, consensus
 from flare_ai_consensus.consensus.config import ConsensusConfig
@@ -9,6 +11,8 @@
     saver,
 )

+logger = structlog.get_logger(__name__)
+

 async def run_consensus(
     client: AsyncOpenRouterClient,
@@ -26,7 +30,7 @@ async def run_consensus(
     aggregated_response = await aggregator.async_centralized_llm_aggregator(
         client, consensus_config.aggregator_config, responses
     )
-    print("\nInitial responses have been aggregated.")
+    logger.info("initial response aggregation complete")

     # Step 2: Improvement rounds.
     for i in range(consensus_config.iterations):
@@ -36,15 +40,15 @@ async def run_consensus(
         aggregated_response = await aggregator.async_centralized_llm_aggregator(
             client, consensus_config.aggregator_config, responses
         )
-        print(f"\nThe responses have been aggregated after iteration {i + 1}:")
+        logger.info("responses aggregated", iteration=i + 1)

     # Step 3: Save final consensus.
     output_file = config.data_path / "final_consensus.json"
     saver.save_json(
         {"aggregated_response": aggregated_response, "responses": responses},
         output_file,
     )
-    print(f"\nFinal consensus saved to {output_file}")
+    logger.info("saved consensus", output_file=output_file)

     # Close the async client to release resources.
     await client.close()
src/flare_ai_consensus/utils/saver.py — 5 additions, 1 deletion

@@ -1,10 +1,14 @@
 import json
 from pathlib import Path

+import structlog
+
+logger = structlog.get_logger(__name__)
+

 def save_json(contents: dict, file_path: Path) -> None:
     """Save json files to specified path."""

     with file_path.open("w") as f:
         json.dump(contents, f, indent=4)
-    print(f"Data saved to {file_path}.")
+    logger.info("saved data", file_path=file_path)
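Note that `file_path` here is a `pathlib.Path`, not a string, so whatever renders these events to JSON needs a fallback serializer for non-string values. A minimal stdlib sketch of that idea (not structlog's actual renderer; the helper name is illustrative):

```python
import json
from pathlib import Path


def render_event(event: str, **fields: object) -> str:
    """Minimal JSON event renderer. `default=str` stringifies values that
    json can't encode natively, such as pathlib.Path objects."""
    return json.dumps({"event": event, **fields}, default=str)


line = render_event("saved data", file_path=Path("data") / "final_consensus.json")
```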

tests/chat_completion.py — 19 additions, 12 deletions

@@ -1,12 +1,16 @@
 import argparse

+import structlog
+
 from flare_ai_consensus.config import config
 from flare_ai_consensus.router import (
     requests,  # This module should expose send_chat_completion
 )
 from flare_ai_consensus.router.client import OpenRouterClient
 from flare_ai_consensus.utils.saver import save_json

+logger = structlog.get_logger(__name__)
+

 def parse_arguments() -> argparse.Namespace:
     parser = argparse.ArgumentParser(
@@ -43,40 +47,43 @@ def default_mode(
             "temperature": 0.7,
         }
         try:
-            print(f"\nIteration {i + 1}: Sending conversation to model {model_id} ...")
+            logger.info("sending conversation", model_id=model_id, iteration=i + 1)
             response = requests.send_chat_completion(client, payload)
             # Extract the assistant's response.
             assistant_response = (
                 response.get("choices", [])[0].get("message", {}).get("content", "")
             )
-            print(f"Assistant Response (Iteration {i + 1}):")
-            print(assistant_response)
+            logger.info(
+                "assistant response",
+                iteration=i + 1,
+                assistant_response=assistant_response,
+            )

             # Append the assistant's response to the conversation history.
             conversation.append({"role": "assistant", "content": assistant_response})

             # Ask for improved response
             if i < num_iterations - 1:
                 conversation.append({"role": "user", "content": improvement_prompt})
-        except Exception as e:  # noqa: BLE001
-            print(f"Error for model {model_id} in iteration {i + 1}: {e}")
+        except Exception:
+            logger.exception("error", model_id=model_id, iteration=i + 1)
             break

     # Save the final conversation to a file.
     output_file = config.data_path / "chat_response.json"
     save_json({"conversation": conversation}, output_file)
-    print(f"\nFinal conversation saved to {output_file}")
+    logger.info("final conversation saved", output_file=output_file)


 def interactive_mode(client: OpenRouterClient, model_id: str) -> None:
     """Run the chat in interactive mode."""
     conversation = []
-    print("Interactive mode. Type 'exit' to quit.")
+    print("Interactive mode. Type 'exit' to quit.")  # noqa: T201

     while True:
         user_input = input("\nEnter your 'user' prompt: ")
         if user_input.strip().lower() == "exit":
-            print("Exiting interactive mode.")
+            print("Exiting interactive mode.")  # noqa: T201
             break

         conversation.append({"role": "user", "content": user_input})
@@ -93,11 +100,11 @@ def interactive_mode(client: OpenRouterClient, model_id: str) -> None:
             assistant_msg = (
                 response.get("choices", [])[0].get("message", {}).get("content", "")
             )
-            print("\nAssistant:")
-            print(assistant_msg)
+            print("\nAssistant:")  # noqa: T201
+            print(assistant_msg)  # noqa: T201
             conversation.append({"role": "assistant", "content": assistant_msg})
         except Exception as e:  # noqa: BLE001
-            print(f"Error for model {model_id}: {e}")
+            print(f"Error for model {model_id}: {e}")  # noqa: T201


 def main() -> None:
@@ -138,7 +145,7 @@ def main() -> None:
     elif args.mode == "interactive":
         interactive_mode(client, model_id)
     else:
-        print("Invalid mode. Please choose 'default' or 'interactive'.")
+        logger.info("invalid mode: choose 'default' or 'interactive'")


 if __name__ == "__main__":
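A recurring pattern in this commit: `except Exception as e:` plus an f-string becomes a bare `except Exception:` with `logger.exception(...)`. structlog's `logger.exception` follows the same contract as the stdlib method sketched below: it logs at error level and attaches the active traceback automatically, so the bound variable `e` is no longer needed. A self-contained stdlib sketch (the failing function is a stand-in for `requests.send_chat_completion`):

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("chat_completion_demo")


def flaky_request() -> None:
    # Stand-in for a chat-completion call failing mid-iteration.
    raise TimeoutError("upstream model timed out")


try:
    flaky_request()
except Exception:
    # .exception() logs at ERROR level and appends the current traceback,
    # so the handler no longer needs `as e` just to format the error.
    log.exception("error during chat completion")
```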

tests/completion.py — 10 additions, 4 deletions

@@ -1,10 +1,14 @@
 import argparse

+import structlog
+
 from flare_ai_consensus.config import config
 from flare_ai_consensus.router import requests
 from flare_ai_consensus.router.client import OpenRouterClient
 from flare_ai_consensus.utils.saver import save_json

+logger = structlog.get_logger(__name__)
+

 def parse_arguments() -> argparse.Namespace:
     """Parse command-line arguments and return the parsed namespace."""
@@ -58,17 +62,19 @@ def start_chat(args: argparse.Namespace) -> None:
     )

     try:
-        print(f"Sending prompt to model {model_id} ...")
+        logger.info("sending prompt", model_id=model_id)
         response = send_prompt(client, model_id, prompt)

         # Save the full JSON response to a file.
         output_file = config.data_path / "response.json"
         save_json(response, output_file)

         # Print response
-        print(response.get("choices", [])[0].get("text", ""))
-    except Exception as e:  # noqa: BLE001
-        print(f"Error for model {model_id}: {e}")
+        response_text = response.get("choices", [])[0].get("text", "")
+        logger.info("model response", response_text=response_text)
+
+    except Exception as e:
+        logger.exception("error", model_id=model_id, error=e)


 def main() -> None:

tests/credits.py — 5 additions, 1 deletion

@@ -1,11 +1,15 @@
+import structlog
+
 from flare_ai_consensus.config import config
 from flare_ai_consensus.router.client import OpenRouterClient

+logger = structlog.get_logger(__name__)
+

 def get_credits(client: OpenRouterClient) -> None:
     # Retrieve available credits
     current_credits = client.get_credits()
-    print(current_credits)
+    logger.info("current credits", current_credits=current_credits)


 if __name__ == "__main__":

tests/model_endpoints.py — 5 additions, 1 deletion

@@ -1,11 +1,15 @@
+import structlog
+
 from flare_ai_consensus.config import config
 from flare_ai_consensus.router.client import OpenRouterClient
 from flare_ai_consensus.utils.openrouter import extract_author

+logger = structlog.get_logger(__name__)
+

 def get_model_endpoints(client: OpenRouterClient, author: str, slug: str) -> None:
     endpoints = client.get_model_endpoints(author, slug)
-    print(endpoints)
+    logger.info("model endpoints", endpoints=endpoints)


 if __name__ == "__main__":

tests/working_models.py — 17 additions, 7 deletions

@@ -1,11 +1,15 @@
 import asyncio

+import structlog
+
 from flare_ai_consensus.config import config
 from flare_ai_consensus.consensus.config import ModelConfig
 from flare_ai_consensus.router.client import AsyncOpenRouterClient
 from flare_ai_consensus.utils.loader import load_json
 from flare_ai_consensus.utils.saver import save_json

+logger = structlog.get_logger(__name__)
+

 async def _test_model_completion(
     client: AsyncOpenRouterClient,
@@ -56,15 +60,17 @@ async def _test_model_completion(
         try:
             response = await send_func(payload)
             if "error" not in response:
-                print(f"Model {model_id} works with {api_endpoint}!")
+                logger.info("model works", model_id=model_id, api_endpoint=api_endpoint)
                 return (model, True)
             error_info = response.get("error", {})
-            print(
-                f"Model {model_id} returned error in {api_endpoint}: "
-                f"{error_info.get('message', 'Unknown error')}"
+            logger.error(
+                "testing model",
+                model_id=model_id,
+                api_endpoint=api_endpoint,
+                error=error_info.get("message", "Unknown error"),
             )
-        except Exception as e:  # noqa: BLE001
-            print(f"Error testing model {model_id} with {api_endpoint}: {e}")
+        except Exception:
+            logger.exception("testing model", model_id=model_id, api_endpoint=api_endpoint)
         return (model, False)
     else:
         return (model, False)
@@ -121,7 +127,11 @@ async def main() -> None:
         config.data_path / f"free_working_{endpoint}_models.json"
     )
     save_json({"data": working_models}, completion_output_file)
-    print(f"\nWorking {endpoint} models saved to {completion_output_file}.\n")
+    logger.info(
+        "working models saved",
+        endpoint=endpoint,
+        completion_output_file=completion_output_file,
+    )

     await client.close()