Add OPENAI_LLM_TIMEOUT env var for custom OpenAI-compatible provider #1022
Closed
joseluisll wants to merge 2 commits into NousResearch:main from
Conversation
Author
@teknium1 Can you please review this PR? A hardcoded 900-second timeout is a blocker for developers using local LLMs.
arilotter requested changes on Mar 12, 2026
this commit includes many unrelated formatting changes. please revert those.
Adds support for the OPENAI_LLM_TIMEOUT environment variable to control the LLM timeout when using custom OpenAI-compatible providers.

Key changes:
- Added OPENAI_LLM_TIMEOUT to OPTIONAL_ENV_VARS in config.py
- Modified _build_api_kwargs() in run_agent.py to use the custom timeout for custom providers
- Added comprehensive test coverage for all scenarios

Behavior:
- Custom provider + env var set: uses the OPENAI_LLM_TIMEOUT value
- Custom provider + env var not set or invalid: falls back to 900 seconds
- Other providers: always use 900 seconds (the env var is ignored)

This addresses issue NousResearch#1010 without any unrelated formatting changes.
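The fallback logic described above could look roughly like this. This is an illustrative sketch, not the actual run_agent.py implementation; the function and constant names are hypothetical:

```python
import os

DEFAULT_LLM_TIMEOUT = 900  # seconds; the hardcoded default this PR makes configurable


def resolve_llm_timeout(provider: str) -> float:
    """Sketch: only the "custom" provider honors OPENAI_LLM_TIMEOUT."""
    if provider != "custom":
        return DEFAULT_LLM_TIMEOUT
    raw = os.environ.get("OPENAI_LLM_TIMEOUT")
    if raw is None:
        return DEFAULT_LLM_TIMEOUT
    try:
        timeout = float(raw)
    except ValueError:
        # Invalid values (e.g. "abc") fall back to the default
        return DEFAULT_LLM_TIMEOUT
    return timeout if timeout > 0 else DEFAULT_LLM_TIMEOUT
```

The resolved value would then be passed into the API kwargs alongside the other client options.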
Force-pushed the branch from 5f24034 to bc10916
Author
I have reverted the unrelated formatting changes, so only the TIMEOUT change for the custom provider remains.
Author
This change was made by PR #1194 instead. Use the environment variable HERMES_API_TIMEOUT.
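With the superseding variable, local-LLM users would export the timeout before launching the agent. A minimal shell sketch; the 1800 value is an arbitrary example, and seconds as the unit is an assumption carried over from the 900-second default:

```shell
# Set a longer API timeout for slow local LLMs (value in seconds, assumed)
export HERMES_API_TIMEOUT=1800
# Confirm the variable is exported for child processes
printenv HERMES_API_TIMEOUT
```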
Summary
Adds support for the OPENAI_LLM_TIMEOUT environment variable to control the LLM timeout when using custom OpenAI-compatible providers. This PR addresses issue #1010.
Changes
run_agent.py
- Modified the _build_api_kwargs() method in the AIAgent class (line 2624) to use the OPENAI_LLM_TIMEOUT env var when the provider is "custom"

tests/test_run_agent.py
Added comprehensive test coverage:
- test_timeout_custom_provider_with_env: verifies the custom timeout from the env var
- test_timeout_custom_provider_without_env: verifies the default timeout when the env var is not set
- test_timeout_openrouter_ignores_env: confirms other providers ignore the env var
- test_timeout_invalid_env_value: validates fallback to the default on invalid values

hermes_cli/config.py
- Added OPENAI_LLM_TIMEOUT to OPTIONAL_ENV_VARS (line 576)

Behavior
- Custom provider + env var set: uses the OPENAI_LLM_TIMEOUT value
- Custom provider + env var not set or invalid: falls back to 900 seconds
- Other providers: always use 900 seconds (the env var is ignored)
Testing
All tests passing:
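The four test cases listed above could be sketched as follows. This is an illustrative stand-in, not the PR's actual test file: resolve_timeout is a hypothetical helper standing in for the timeout selection inside _build_api_kwargs(), and unittest.mock.patch.dict is used to isolate the environment per test:

```python
import os
import unittest
from unittest import mock

DEFAULT_TIMEOUT = 900  # seconds


def resolve_timeout(provider: str) -> float:
    # Stand-in for the env-var handling inside _build_api_kwargs()
    if provider != "custom":
        return DEFAULT_TIMEOUT
    try:
        return float(os.environ["OPENAI_LLM_TIMEOUT"])
    except (KeyError, ValueError):
        return DEFAULT_TIMEOUT


class TimeoutEnvVarTests(unittest.TestCase):
    def test_timeout_custom_provider_with_env(self):
        with mock.patch.dict(os.environ, {"OPENAI_LLM_TIMEOUT": "120"}):
            self.assertEqual(resolve_timeout("custom"), 120.0)

    def test_timeout_custom_provider_without_env(self):
        with mock.patch.dict(os.environ, clear=True):
            self.assertEqual(resolve_timeout("custom"), DEFAULT_TIMEOUT)

    def test_timeout_openrouter_ignores_env(self):
        with mock.patch.dict(os.environ, {"OPENAI_LLM_TIMEOUT": "120"}):
            self.assertEqual(resolve_timeout("openrouter"), DEFAULT_TIMEOUT)

    def test_timeout_invalid_env_value(self):
        with mock.patch.dict(os.environ, {"OPENAI_LLM_TIMEOUT": "not-a-number"}):
            self.assertEqual(resolve_timeout("custom"), DEFAULT_TIMEOUT)
```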