
Conversation

@niteshver

This PR adds a new “first model” tutorial for mesa-llm, following the structure and level of detail of Mesa’s Creating Your First Model tutorial, but with content focused specifically on mesa-llm concepts.

What this tutorial covers

  • How mesa-llm integrates language-based reasoning into Mesa’s execution model
  • Defining a minimal language-driven agent that remains a standard Mesa agent
  • Using an LLM backend (Ollama with Llama 3) to replace rule-based decision logic (see the sketch below)
  • Keeping Mesa’s scheduling and lifecycle unchanged
  • Running and extending a simple language-reasoning model

Notes

  • Ollama is used as a lightweight local backend, but the design remains backend-agnostic.
  • The example is intentionally minimal to focus on core mesa-llm ideas rather than infrastructure.
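As a rough illustration of the bullet about replacing rule-based decision logic, here is the kind of minimal step logic the tutorial builds up to: a plain Mesa agent whose per-step decision comes from a local Ollama call instead of a hard-coded rule. This is only a sketch, not the tutorial's actual code; the class name, prompt wording, and the `llama3` model tag are illustrative assumptions, and it needs a running Ollama server with that model pulled.

```python
import mesa
import ollama  # assumes the ollama Python client and a local Ollama server


class LanguageAgent(mesa.Agent):
    """A standard Mesa agent whose decision logic is delegated to an LLM."""

    def step(self):
        # Ask the model for a one-sentence decision instead of applying a rule.
        prompt = (
            f"You are agent {self.unique_id} in a simple simulation. "
            "In one sentence, decide what you do next and why."
        )
        response = ollama.chat(
            model="llama3",
            messages=[{"role": "user", "content": prompt}],
        )
        # The generated text is treated as the agent's reasoning output.
        print(response["message"]["content"])
```

Because the agent is still a regular `mesa.Agent`, Mesa's scheduling and lifecycle are untouched; only the decision step changes.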

I’ve addressed the feedback from the earlier review by:

  • aligning the structure and depth with Mesa’s first model tutorial.

I’d appreciate another review when you have time. Thanks!

@coderabbitai

coderabbitai bot commented Dec 20, 2025

Important

Review skipped

Auto reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.



@niteshver
Author

Hi @colinfrisch, I’ve created a Mesa LLM tutorial to help newcomers. Would appreciate you taking a look!

@colinfrisch
Collaborator

Thanks a lot for your contribution, it's a very good start! Could you make a model (and a tutorial to go with it) that showcases the reasoning part a little more?

Looking forward to reviewing and merging this PR :)

@niteshver
Author

Hi @colinfrisch
Thanks a lot for the feedback, I’m glad the tutorial is heading in the right direction!

That makes sense — I’ll extend the model slightly to make the reasoning step more explicit (e.g. by prompting agents to reason about the situation before deciding on an action) and update the tutorial accordingly.

I’ll push an update shortly. Thanks again!

Collaborator

@colinfrisch left a comment


Essentially, you have not used any code from the mesa-llm library here. Please build a working model yourself to see how this library works before contributing, as this is supposed to be a tutorial for mesa-llm. Also, please be careful about what tools you use: excessive use of AI can result in a temporary ban from the Mesa ecosystem and have permanent consequences for GSoC.

Comment on lines 68 to 75
* OpenAI
* Anthropic
* xAI
* Huggingface
* Ollama
* OpenRouter
* NovitaAI
* Gemini
Collaborator


Not necessary here; supported models are already in the docs.

```python
import mesa_llm
import mesa
import ollama
```
Collaborator


We will add the ollama dependency directly in mesa-llm. No need to put it here.

The generated response is treated as the agent’s reasoning output and printed directly, allowing us to observe how different agents interpret the same simulation context.
The LanguageAgent class is created with the following code:
```python
class LanguageAgent(mesa.Agent):
```
Collaborator


The whole point of this tutorial is to use `LLMAgent`. Please refer to the existing example models to build your tutorial (also, please do not use AI).

@niteshver
Author

Hi @colinfrisch
I’ve finalized the first model tutorial with a clear focus on LLMAgent and ReAct reasoning, keeping environments for later.
Please let me know if any further changes are needed.

@colinfrisch
Collaborator

Thank you, this is starting to look good! The last step would be to make this a model that has a small purpose. What I'd suggest is a small (and tutorial-friendly) version of the negotiation model that you can find in the example folder of this repo.

Of course, if you find something else that still showcases mesa-llm well and stays simple while being useful, I'm open to ideas :)

@niteshver
Author

Thank you for the feedback. I’ll go with a negotiation model, since I’m already familiar with this setup from the existing example and feel confident using it to clearly demonstrate how LLM agents reason and interact. I’ll keep the model small and tutorial-friendly while focusing on clarity and purpose.

@niteshver
Author

niteshver commented Jan 6, 2026

Hi @colinfrisch
I’ve added a simplified negotiation tutorial inspired by the existing example.

Can I work on issue #31 while you review my PR?

@colinfrisch
Collaborator

colinfrisch commented Jan 6, 2026

Yes, of course, you can work on any subject you like (you can check out some PRs that were opened and merged on #31). I'll review this one ASAP.

Collaborator

@colinfrisch left a comment


It's starting to look like something we could merge! A few small suggestions below. Also, for the negotiation tutorial, I think it would be the right place to demonstrate the messaging system between agents: if you can manage to make use of it and keep it tutorial-friendly, that would be great!

One last thing: when you write code in your tutorials, don't hesitate to add plenty of comments that explain, directly in the code, the role of each function/method/attribute, etc.

# Creating Your First mesa-llm Model

## Tutorial Overview
This tutorial introduces mesa-llm by walking through the construction of a simple language-driven agent model built on top of Mesa. Mesa-llm enables agents to reason using natural language while preserving Mesa’s standard execution model.
Collaborator


Suggested change

Original:
This tutorial introduces mesa-llm by walking through the construction of a simple language-driven agent model built on top of Mesa. Mesa-llm enables agents to reason using natural language while preserving Mesa’s standard execution model.

Suggested:
This tutorial introduces mesa-llm by walking through the construction of a simple language-driven agent model built on top of Mesa. Mesa-llm enables agents to reason using natural language while preserving Mesa’s standard execution model. If it's your first time using mesa, we suggest starting with the classic [creating your first model tutorials](https://mesa.readthedocs.io/latest/tutorials/0_first_model.html) before diving into mesa-llm.

Comment on lines 128 to 135

```python
    selected_tools=[]
)

print(plan)
```
**Note on `selected_tools`:**
In this tutorial, `selected_tools=[]` indicates that the agent is reasoning
without access to any external tools.
Collaborator


Since we don't use tools in this tutorial, we don't have to talk about them here. `selected_tools` defaults to `None`, so you can simply remove it. We will probably make a whole tutorial dedicated to tools.

## Create the Model
The model manages agent creation and advances the simulation.

mesa-llm provides the create_agents() helper, which correctly initializes agents and registers them with Mesa’s internal AgentSet.
Collaborator


Mesa-LLM does not provide the `create_agents()` helper; it comes directly from the Mesa base ecosystem. `LLMAgent` is a wrapper around `Agent` (that is why we use `super().__init__(*args, **kwargs)` when initialising our agents). Also, please use backticks when you are talking about specific attributes or methods, to explicitly show that it's code.
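To make that distinction concrete, here is a minimal plain-Mesa sketch of the pattern described above, assuming Mesa 3's `Agent.create_agents()` classmethod and `AgentSet.shuffle_do()`; the class names are illustrative. In the actual tutorial the agent would subclass `LLMAgent` (which wraps `mesa.Agent`) and forward its LLM configuration through `super().__init__(...)` in the same way.

```python
import mesa


class ChattyAgent(mesa.Agent):
    # Stand-in for an `LLMAgent` subclass: `LLMAgent` wraps `mesa.Agent`,
    # so the same forwarding pattern via super().__init__ applies.
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

    def step(self):
        print(f"Agent {self.unique_id} takes its turn")


class ChattyModel(mesa.Model):
    def __init__(self, n_agents=2, seed=None):
        super().__init__(seed=seed)
        # `create_agents()` is a Mesa classmethod, not a mesa-llm helper:
        # it instantiates n_agents agents and registers them on `self.agents`.
        ChattyAgent.create_agents(self, n_agents)

    def step(self):
        # Advance each registered agent once, in random order.
        self.agents.shuffle_do("step")


ChattyModel().step()
```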

- In this introductory tutorial, action suggestions are not executed.
- Actions are shown only as part of the reasoning trace.
- Environments and action execution are introduced in later tutorials.

Collaborator


Could you maybe add a small exercise, like in the Mesa first model tutorial?

Author


Thanks! I’ll add a small exercise section at the end of the tutorial, similar to Mesa’s first model tutorial.

@niteshver
Author

Thanks for the clarification! I’ve updated the documentation to clearly state that `create_agents()` comes from Mesa, clarified the relationship between `LLMAgent` and `Agent`, and fixed code formatting using backticks.
