Enhance 429 Too Many Requests error handling #41
The MCP server should take the token count of each request into account and handle the case where a request would exceed the rate limit.
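For example, the server could estimate a request's token cost up front and throttle or split oversized chunks before sending them. A minimal sketch using `tiktoken` (illustrative only, not code from this repo):

```python
# Illustrative sketch only (not from this repo): estimate a request's token
# cost before sending it, so the caller can throttle or split oversized
# chunks instead of hitting the TPM limit.
import tiktoken

# o200k_base is the encoding used by the GPT-4o model family.
enc = tiktoken.get_encoding("o200k_base")

def count_tokens(text: str) -> int:
    """Return the number of tokens `text` would consume in a prompt."""
    return len(enc.encode(text))

if __name__ == "__main__":
    chunk = "Some document chunk to be contextually embedded..."
    print(count_tokens(chunk))  # split or defer the request if this is too big
```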
I specified the MODEL as `gpt-4o-mini` in the `.env` file. With this setting, the following error occurs at runtime:

```
Error generating contextual embedding: Error code: 429 - {'error': {'message': 'Rate limit reached for gpt-4o-mini in organization org-xxxxxxxxxxxxxxxx on tokens per min (TPM): Limit 200000, Used 195663, Requested 7592. Please try again in 976ms. Visit https://platform.openai.com/account/rate-limits to learn more.', 'type': 'tokens', 'param': None, 'code': 'rate_limit_exceeded'}}. Using original chunk instead.
INFO     HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 429 Too Many Requests"    _client.py:1025
```

Changing the code as shown below made the error go away:
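The actual patch is not reproduced in this excerpt. As an illustration only, a minimal sketch of the general approach (retrying on 429 with exponential backoff, assuming the `openai` Python client; the function and variable names below are hypothetical, not from this repo) could look like:

```python
# Illustrative sketch only (not the actual patch from this PR): retry the
# chat completion on 429 with exponential backoff instead of falling back
# to the original chunk immediately.
import os
import time

import openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = os.getenv("MODEL", "gpt-4o-mini")

def chat_with_backoff(messages, max_retries=5, base_delay=1.0):
    """Call chat.completions.create, retrying on rate-limit errors."""
    for attempt in range(max_retries):
        try:
            return client.chat.completions.create(model=MODEL, messages=messages)
        except openai.RateLimitError:
            if attempt == max_retries - 1:
                raise  # give up after the last attempt
            delay = base_delay * (2 ** attempt)  # 1s, 2s, 4s, ...
            print(f"429 received; retrying in {delay:.1f}s "
                  f"(attempt {attempt + 1}/{max_retries})")
            time.sleep(delay)
```

Note that the `openai` Python SDK can also retry rate-limit errors on its own via the client's `max_retries` option, e.g. `OpenAI(max_retries=5)`.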