
[Feature]: Add --model Flag to Agent Runner CLI #1485

@rishuraj1

Description


Problem Statement

The framework CLI (the run and shell commands) provides no way to specify which LLM model should be used when executing an agent. Developers are forced onto hardcoded defaults, which makes it frustrating to test agents across providers (OpenAI, Anthropic, Gemini, etc.) without modifying framework code.
Example: even if a user has OPENAI_API_KEY set and wants to use gpt-4o-mini, the CLI always falls back to the default model.

Proposed Solution

Add an optional --model (or -m) flag to both run and shell commands so users can override the default model at runtime.
Example desired usage:

python -m framework run exports/my_agent --model gemini/gemini-1.5-flash --input '{...}'

Alternatives Considered

Editing framework source code to change the default model
→ Not ideal; it introduces friction and requires modifying tracked files.

Environment variable overrides
→ Still requires mapping logic and doesn't align with CLI ergonomics.

Custom wrappers around AgentRunner
→ Adds unnecessary boilerplate for a simple configuration need.

Additional Context

The framework markets itself as supporting 100+ LLMs through LiteLLM, but without this flag, users cannot practically test across providers without patching internals. This impacts developer experience during agent development and evaluation.

Implementation Ideas

Add --model / -m to argparse for both run_parser and shell_parser

Update cmd_run and cmd_shell to forward args.model to AgentRunner.load

No breaking changes expected since the flag is optional

Default behavior remains unchanged if flag is omitted
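
The ideas above could be sketched roughly as follows. The names run_parser, shell_parser, cmd_run, and AgentRunner.load come from this issue; the AgentRunner stub, argument names, and default handling below are assumptions for illustration, not the framework's actual code.

```python
import argparse


class AgentRunner:
    """Hypothetical stand-in for the framework's real AgentRunner."""

    @classmethod
    def load(cls, agent_path, model=None):
        # model=None keeps today's behavior: fall back to the default model.
        return {"agent_path": agent_path, "model": model or "default-model"}


def build_parser():
    parser = argparse.ArgumentParser(prog="framework")
    subparsers = parser.add_subparsers(dest="command", required=True)

    run_parser = subparsers.add_parser("run", help="Run an exported agent")
    run_parser.add_argument("agent_path")
    run_parser.add_argument("--input")
    run_parser.add_argument(
        "--model", "-m", default=None,
        help="Override the default LLM model (e.g. gemini/gemini-1.5-flash)")

    shell_parser = subparsers.add_parser("shell", help="Interactive agent shell")
    shell_parser.add_argument("agent_path")
    shell_parser.add_argument(
        "--model", "-m", default=None,
        help="Override the default LLM model")

    return parser


def cmd_run(args):
    # Forward args.model to AgentRunner.load; None means "use the default",
    # so omitting the flag changes nothing (no breaking change).
    return AgentRunner.load(args.agent_path, model=args.model)
```

Because the flag defaults to None and is simply forwarded, existing invocations without --model behave exactly as before.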


    Labels

    enhancement (New feature or request)
