Feature/OpenAI financial market #53
Conversation
This PR supersedes #52. Based on the feedback received, I have replaced the mock LLM with a fully functional OpenAI API client.
I noticed the CI checks are failing, but it appears to be a Codecov upload error ("Token required - not valid tokenless upload") caused by the PR coming from a fork. The actual tests seem to have generated the coverage report successfully before this upload step failed. I have also pushed a small update to use a lazy import for OpenAI, so the example doesn't crash in environments where the library isn't installed.
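For reference, the lazy-import pattern mentioned above usually looks something like the sketch below; the helper name and error message are illustrative, not the exact code in this PR.

```python
def _get_openai_client():
    """Create the OpenAI client only when it is actually needed."""
    try:
        # Deferred import: the module can be loaded even if 'openai' isn't installed.
        from openai import OpenAI
    except ImportError as exc:
        raise RuntimeError(
            "This example requires the 'openai' package: pip install openai"
        ) from exc
    return OpenAI()  # picks up OPENAI_API_KEY from the environment by default
```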
Thanks for your work, I'll review it ASAP! We are indeed having a problem with Codecov currently and it's being handled externally to this PR, so you don't have to worry about it.
colinfrisch left a comment
It's a good start! But you should spend a bit more time looking at the examples and really reading the docs if you want to contribute. There are a few things you rebuilt here that already exist in the repo, as well as a few good practices explained in the Mesa and mesa-llm tutorials that you ignore here.
The file is a bit short; you can check out the examples folder in the mesa-llm repo to see what you can do for a README.
No need for a requirements file. Just list the dependencies directly in the README.
| return "HOLD" | ||
|
|
||
|
|
||
| class TraderAgent(mesa.Agent): |
Please separate the model and the agents into different files, as done in the mesa-llm examples.
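For anyone following along, the mesa-llm examples split an example roughly like this; the exact filenames below are an assumption.

```
financial_market/
├── agents.py    # TraderAgent
├── model.py     # the market model
├── app.py       # entry point / visualization
└── README.md
```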
# Initialize one shared LLM client to prevent recreating it 5 times
try:
    self.llm_client = FinancialLLM()
Making a new client variable defeats the whole purpose of the built-in ModuleLLM class... do you think you could use it instead?
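For illustration, a rough sketch of what using the built-in class might look like; the import path, constructor arguments, and model string below are assumptions, so check the mesa-llm documentation for the actual interface.

```python
import os

# Hypothetical usage of mesa-llm's built-in ModuleLLM; the signature is an assumption.
from mesa_llm.module_llm import ModuleLLM

llm = ModuleLLM(api_key=os.getenv("OPENAI_API_KEY"), model="openai/gpt-3.5-turbo")
```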
Summary
Introduced a functional financial market simulation that uses a real LLM (OpenAI) for agent decision-making.
Motive
To address the feedback that the previous "Mock LLM" example was insufficient for showcasing Mesa's capabilities. This update demonstrates a production-ready pattern for integrating real AI APIs (OpenAI) into Mesa agents, allowing for true natural language processing and sentiment analysis within the simulation.
Implementation
os.getenv("OPENAI_API_KEY")to safely load API keys from the environment to prevent accidental commits.RandomActivationwithself.agents.shuffle_do("step").super().__init__()usage for Agent andModelclasses.openaito requirements.txt.Usage Examples
Usage Examples
Run the simulation:
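The entry-point script isn't named in this description, so assuming the example lives in a single file called financial_market.py:

```bash
export OPENAI_API_KEY="sk-..."   # your OpenAI key
python financial_market.py
```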
Additional Notes
Gracefully handles missing API keys by stopping the simulation with a clear error message instead of crashing.
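A minimal sketch of that guard; the exact message and where it runs are assumptions.

```python
import os

if not os.getenv("OPENAI_API_KEY"):
    # Stop cleanly before the simulation starts instead of failing mid-run.
    raise SystemExit("OPENAI_API_KEY is not set; export it before running the example.")
```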
Configured with gpt-3.5-turbo and low temperature for cost-effective and consistent testing.
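The kind of request this implies, sketched with the OpenAI Python SDK; the prompt contents and the exact temperature value are assumptions.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    temperature=0.2,  # low temperature keeps decisions consistent across runs
    messages=[
        {"role": "system", "content": "You are a cautious financial trader."},
        {"role": "user", "content": "The market is up 2% today. Reply with BUY, SELL, or HOLD."},
    ],
)
decision = response.choices[0].message.content.strip()
```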