No configuration option exists for rate-limiting LLM API calls #927

@Devendraxp

Description

I read the complete documentation and there is no option to limit model calls per minute. Since I use free providers like Groq Cloud, and sometimes OpenAI, my agent hits the per-minute rate limit, receives a 429 error, and stops working.
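Until a built-in option exists, one possible client-side workaround is to gate every model call through a rolling-window rate limiter. The sketch below is illustrative, not part of the project: the class name and the place you call `acquire()` are assumptions, and the clock/sleep parameters exist only so the behavior can be tested deterministically.

```python
import time
import threading
from collections import deque

class PerMinuteRateLimiter:
    """Allow at most `max_calls` calls in any rolling `window`-second span.

    `clock` and `sleep` are injectable so the limiter can be tested
    without real waiting; by default they use the system clock.
    """

    def __init__(self, max_calls, window=60.0,
                 clock=time.monotonic, sleep=time.sleep):
        self.max_calls = max_calls
        self.window = window
        self.clock = clock
        self.sleep = sleep
        self.calls = deque()          # timestamps of recent calls
        self.lock = threading.Lock()  # safe if the agent uses threads

    def _evict_old(self, now):
        # Drop timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] >= self.window:
            self.calls.popleft()

    def acquire(self):
        """Block until a call is allowed, then record it."""
        with self.lock:
            now = self.clock()
            self._evict_old(now)
            if len(self.calls) >= self.max_calls:
                # Sleep until the oldest recorded call leaves the window.
                self.sleep(self.window - (now - self.calls[0]))
                now = self.clock()
                self._evict_old(now)
            self.calls.append(now)
```

Usage would be a single `limiter.acquire()` before each provider request (e.g. before every Groq or OpenAI chat completion), sized to the provider's published requests-per-minute quota. Note this only prevents self-inflicted 429s; retry-with-backoff on an actual 429 response is still worth having as a second layer.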

Metadata

    Labels: enhancement (New feature or request)
