
Commit 006273f

Revise Gemini model details and add configuration example (#90)
Updated Gemini model information and added LiteLLM configuration example.
1 parent 69cf931 commit 006273f

File tree: 1 file changed, +48 −2 lines


README.md

Lines changed: 48 additions & 2 deletions
````diff
@@ -230,10 +230,56 @@ To set up and run Hercules on a Windows machine:
 - Mistral: Supports any version with function calling and coding capabilities (Mistral Large, Mistral Medium). Heavy models only.
 - OpenAI: Fully compatible with GPT-4o/o3-mini and above. Note: OpenAI GPT-4o-mini is only supported for sub-agents; for the planner it is still recommended to use GPT-4o.
 - Ollama: Supported with medium models and function calling. Heavy models only (70B and above).
-- Gemini: [deprecated, because of flaky execution]. Refer: https://testzeuscommunityhq.slack.com/archives/C0828GV2HEC/p1740628636862819
+- Gemini: Can be used. Preferred with LiteLLM as below.
 - Deepseek: Only deepseek-chat v3 is supported.
 - Hosting: Supported on AWS Bedrock, GCP Vertex AI, and Azure AI. [Tested models: OpenAI, Anthropic Sonnet and Haiku, Llama 60B and above with function calling.]
-Note: Kindly ensure that the model you are using can handle agentic activities such as function calling, e.g. larger models like OpenAI GPT-4o, Llama >70B, and Mistral Large.
+Note: Kindly ensure that the model you are using can handle agentic activities such as function calling, e.g. larger models like OpenAI GPT-4o, Llama >70B, and Mistral Large. You can use the agent_config file as below to fill in LiteLLM details [https://docs.litellm.ai/docs/simple_proxy]:
+```JSON
+{
+  "litellm-flash": {
+    "planner_agent": {
+      "model_name": "gemini-2.5-flash",
+      "model_api_key": "sfasdfsadgbw",
+      "model_base_url": "https://litellm-proxydeployment",
+      "llm_config_params": {
+        "cache_seed": 1234,
+        "temperature": 0,
+        "seed": 12345
+      }
+    },
+    "nav_agent": {
+      "model_name": "gemini-2.5-flash",
+      "model_api_key": "sfasdfsadgbw",
+      "model_base_url": "https://litellm-proxydeployment",
+      "llm_config_params": {
+        "cache_seed": 1234,
+        "temperature": 0,
+        "seed": 12345
+      }
+    },
+    "mem_agent": {
+      "model_name": "gemini-2.5-flash",
+      "model_api_key": "sfasdfsadgbw",
+      "model_base_url": "https://litellm-proxydeployment",
+      "llm_config_params": {
+        "cache_seed": 1234,
+        "temperature": 0,
+        "seed": 12345
+      }
+    },
+    "helper_agent": {
+      "model_name": "gemini-2.5-flash",
+      "model_api_key": "sfasdfsadgbw",
+      "model_base_url": "https://litellm-proxydeployment",
+      "llm_config_params": {
+        "cache_seed": 1234,
+        "temperature": 0,
+        "seed": 12345
+      }
+    }
+  }
+}
+```
````
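The agent_config above assumes a LiteLLM proxy is already running and serving a model alias named `gemini-2.5-flash`. On the proxy side, a minimal config sketch (following the LiteLLM simple_proxy docs linked above; the alias and the `GEMINI_API_KEY` environment variable are illustrative assumptions, not Hercules requirements) could look like:

```yaml
model_list:
  - model_name: gemini-2.5-flash          # alias that agent_config's "model_name" points at
    litellm_params:
      model: gemini/gemini-2.5-flash      # provider/model route LiteLLM forwards to
      api_key: os.environ/GEMINI_API_KEY  # resolved from the environment, not hard-coded
```

Per the LiteLLM docs, the proxy is started with `litellm --config config.yaml` and exposes an OpenAI-compatible endpoint that `model_base_url` can point to.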

#### Execution Flow

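Since the four per-agent blocks in the agent_config example are identical, it is easy for them to drift apart when edited by hand. A small sanity check (a hypothetical helper sketched here, not part of Hercules) can confirm that a config file defines all four agents with the fields the example uses:

```python
import json

# Agent names come from the README's agent_config example; the set of
# required fields is an assumption for illustration, not the Hercules API.
REQUIRED_AGENTS = {"planner_agent", "nav_agent", "mem_agent", "helper_agent"}
REQUIRED_FIELDS = {"model_name", "model_api_key", "model_base_url"}

def check_agent_config(raw: str) -> list[str]:
    """Return a list of problems found in an agent_config JSON string."""
    problems = []
    config = json.loads(raw)
    for profile_name, agents in config.items():
        missing = REQUIRED_AGENTS - agents.keys()
        if missing:
            problems.append(f"{profile_name}: missing agents {sorted(missing)}")
        for agent_name, agent in agents.items():
            absent = REQUIRED_FIELDS - agent.keys()
            if absent:
                problems.append(f"{profile_name}.{agent_name}: missing {sorted(absent)}")
    return problems

# Build a sample config shaped like the README example.
sample = json.dumps({
    "litellm-flash": {
        name: {
            "model_name": "gemini-2.5-flash",
            "model_api_key": "placeholder",
            "model_base_url": "https://litellm-proxydeployment",
            "llm_config_params": {"cache_seed": 1234, "temperature": 0, "seed": 12345},
        }
        for name in ("planner_agent", "nav_agent", "mem_agent", "helper_agent")
    }
})
print(check_agent_config(sample))  # prints []
```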