
Conversation

@Tejasv-Singh

Summary

Introduced a functional financial market simulation that uses a real LLM (OpenAI) for agent decision-making.

Motivation

To address the feedback that the previous "Mock LLM" example was insufficient to showcase Mesa's capabilities. This update demonstrates a production-ready pattern for integrating real AI APIs (OpenAI) into Mesa agents, enabling genuine natural-language processing and sentiment analysis within the simulation.

Implementation

  • Real AI Integration: Implemented FinancialLLM in model.py which interfaces with the OpenAI API.
  • Security: Uses os.getenv("OPENAI_API_KEY") to safely load API keys from the environment to prevent accidental commits.
  • Mesa 3.0 Compatibility: Ensured the example uses modern Mesa patterns:
    • Replaced deprecated RandomActivation with self.agents.shuffle_do("step").
    • Corrected super().__init__() usage for Agent and Model classes.
  • Dependencies: Added openai to requirements.txt.
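The Mesa 3.0 activation pattern described above can be sketched with a small stdlib-only stand-in (the `TraderAgent` and `FinancialMarket` class names echo this PR, but these are toy classes, not the actual `mesa.Agent`/`mesa.Model` subclasses; in real Mesa 3.x code you would call `self.agents.shuffle_do("step")` directly):

```python
import random


class TraderAgent:
    """Toy stand-in for a mesa.Agent subclass (illustrative only)."""

    def __init__(self, model, unique_id):
        # In Mesa 3.x, Agent.__init__ takes only the model and assigns the
        # id automatically; here we pass it explicitly for the toy version.
        self.model = model
        self.unique_id = unique_id
        self.stepped = False

    def step(self):
        self.stepped = True


class FinancialMarket:
    """Toy stand-in for a mesa.Model showing the 3.0 activation pattern."""

    def __init__(self, n_agents, seed=None):
        self.random = random.Random(seed)
        self.agents = [TraderAgent(self, i) for i in range(n_agents)]

    def step(self):
        # Rough equivalent of self.agents.shuffle_do("step"): activate each
        # agent once per tick in random order, replacing the deprecated
        # RandomActivation scheduler.
        order = list(self.agents)
        self.random.shuffle(order)
        for agent in order:
            agent.step()
```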

Usage Examples

  1. Set your API key:

     ```bash
     export OPENAI_API_KEY="sk-..."
     ```

  2. Run the simulation:

     ```bash
     python examples/financial_market/run.py
     ```

Additional Notes

  • Gracefully handles missing API keys by stopping the simulation with a clear error message instead of crashing.
  • Configured with gpt-3.5-turbo and a low temperature for cost-effective, consistent testing.
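The missing-key behavior described above could look roughly like the following sketch. The `require_api_key` function name is hypothetical (the PR describes the behavior, not its API), and the exact temperature value is an assumption since the PR only says "low":

```python
import os

# Configuration described in the PR: a cheap model at low temperature
# for consistent, inexpensive test runs (0.2 is an assumed value).
MODEL_NAME = "gpt-3.5-turbo"
TEMPERATURE = 0.2


def require_api_key():
    # Fail fast with a clear message instead of crashing mid-simulation.
    # Function name is hypothetical, chosen for this sketch.
    key = os.getenv("OPENAI_API_KEY")
    if not key:
        raise SystemExit(
            "OPENAI_API_KEY is not set. Export it before running:\n"
            '  export OPENAI_API_KEY="sk-..."'
        )
    return key
```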

@coderabbitai

coderabbitai bot commented Jan 5, 2026

Important

Review skipped

Auto reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.



@Tejasv-Singh
Author

This PR supersedes #52. Based on the feedback received, I have replaced the mock LLM with a fully functional OpenAI API client.

@Tejasv-Singh
Author

I noticed the CI checks are failing, but it appears to be a Codecov upload error (Token required - not valid tokenless upload) caused by the PR coming from a fork.

The actual tests seem to have generated the coverage report successfully before this upload step failed.

I have also pushed a small update to use a lazy import for OpenAI, ensuring the example doesn't crash in environments where the library isn't installed.
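The lazy-import idea mentioned here can be sketched as follows. The `lazy_import` function name is illustrative, not necessarily what the PR uses; the point is that the `openai` package is only loaded when a client is actually requested, so merely importing the example never fails:

```python
import importlib


def lazy_import(name="openai"):
    # Defer importing the LLM library until it is actually needed, so the
    # example module itself imports cleanly even where the dependency is
    # missing. The error only surfaces (with install instructions) on use.
    try:
        return importlib.import_module(name)
    except ImportError as exc:
        raise RuntimeError(
            f"The '{name}' package is required for this example; "
            f"install it with: pip install {name}"
        ) from exc
```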

@colinfrisch
Collaborator

Thanks for your work, I'll review it ASAP! We are indeed having a problem with Codecov currently, and it's being handled externally to this PR, so you don't have to worry about it.

Collaborator

@colinfrisch left a comment


It's a good start! But you should spend a bit more time looking at the existing examples and really reading the docs if you want to contribute. There are a few things you rebuilt here that already exist in the repo, as well as a few good practices explained in the Mesa and mesa-llm tutorials that are ignored here.

Collaborator


The file is a bit short; you can check out the examples folder in the mesa-llm repo for an idea of what a README can look like.

Collaborator


No need for a requirements file; just list the dependencies directly in the README.

return "HOLD"


class TraderAgent(mesa.Agent):
Collaborator


Please separate model and agents in different files like done in mesa-llm examples.


# Initialize one shared LLM client to prevent recreating it 5 times
try:
    self.llm_client = FinancialLLM()
Collaborator


Making a new client variable defeats the whole purpose of the built-in ModuleLLM class... do you think you could use it instead?

