Fix binding options not loaded from environment variables #2585

Merged

danielaskdd merged 2 commits into HKUDS:main from danielaskdd:fix-env
Jan 15, 2026

Conversation

@danielaskdd
Collaborator

Fix binding options not loaded from environment variables

Summary

This PR fixes a critical bug where environment variables for LLM binding options (e.g., OPENAI_LLM_TEMPERATURE, OPENAI_LLM_MAX_TOKENS) were silently ignored, causing the server to always use default values.

Problem

When using OpenAI/Azure OpenAI bindings, environment variables like OPENAI_LLM_TEMPERATURE=0.7 had no effect. The startup log would show:

INFO: OpenAI LLM Options: {}

Root Cause: The _GlobalArgsProxy class did not properly support vars() calls. When binding_options.py called vars(args) on the proxy object, it returned an empty dict instead of the underlying argparse.Namespace attributes, causing all binding-specific options to be lost.
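
A minimal sketch of the failure mode (class and variable names here are simplified stand-ins, not the actual lightrag.api.config code):

import argparse

# Simplified stand-in for the parsed global arguments held at module level.
_namespace = argparse.Namespace(openai_llm_temperature=0.7, openai_llm_max_tokens=9000)

class BrokenProxy:
    """Forwards attribute reads via __getattr__ only."""
    def __getattr__(self, name):
        # Only called when normal lookup on the proxy instance fails.
        return getattr(_namespace, name)

proxy = BrokenProxy()
print(proxy.openai_llm_temperature)  # 0.7 -- plain attribute access is forwarded
print(vars(proxy))                   # {}  -- vars() reads proxy.__dict__, which is empty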

Changes

Commit 1: Refactor argument parsing (a000bdf0)

  • Consolidated duplicate --llm-binding and --embedding-binding parsing logic (a rough sketch follows this list)
  • Added edge case handling for CLI arguments starting with -
  • Improved code maintainability by reducing duplication
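
A rough sketch of what that consolidation could look like (the helper name, env-variable names, and default below are illustrative, not the actual implementation):

import os
import sys

def resolve_binding(flag: str, env_var: str, default: str) -> str:
    """Resolve a binding from the CLI first, then an environment variable, then a default."""
    argv = sys.argv[1:]
    for i, arg in enumerate(argv):
        if arg == flag:
            # Edge case: the next token may be another option (starts with '-'),
            # in which case no value was actually supplied for this flag.
            if i + 1 < len(argv) and not argv[i + 1].startswith("-"):
                return argv[i + 1]
        elif arg.startswith(flag + "="):
            return arg.split("=", 1)[1]
    return os.environ.get(env_var, default)

# One code path for both bindings instead of two near-identical blocks.
llm_binding = resolve_binding("--llm-binding", "LLM_BINDING", "ollama")
embedding_binding = resolve_binding("--embedding-binding", "EMBEDDING_BINDING", "ollama")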

Commit 2: Fix GlobalArgsProxy (a555690c)

  • Replaced __getattr__ with __getattribute__ to intercept __dict__ access
  • When vars(proxy) is called, it now correctly returns the underlying namespace's __dict__
  • Enables binding_options.options_dict() to properly extract provider-specific options (see the sketch below)
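
A simplified sketch of the approach (actual names and details in lightrag.api.config may differ):

import argparse

# Simplified stand-in for the parsed global arguments, as in the earlier sketch.
_namespace = argparse.Namespace(openai_llm_temperature=0.7, openai_llm_max_tokens=9000)

class FixedProxy:
    """Forwards attribute access, including __dict__, to the wrapped namespace."""
    def __getattribute__(self, name):
        if name == "__dict__":
            # vars(proxy) resolves to proxy.__dict__, so hand back the
            # wrapped namespace's __dict__ instead of the proxy's own.
            return _namespace.__dict__
        try:
            return object.__getattribute__(self, name)
        except AttributeError:
            # Anything not defined on the proxy itself is delegated.
            return getattr(_namespace, name)

proxy = FixedProxy()
print(vars(proxy))  # {'openai_llm_temperature': 0.7, 'openai_llm_max_tokens': 9000}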

Testing

Verified with:

from lightrag.api.config import global_args
from lightrag.llm.binding_options import OpenAILLMOptions

options = OpenAILLMOptions.options_dict(global_args)
print(options)  # {'temperature': 0.7, 'max_tokens': 9000}
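
For reference, the environment behind that output would have been something like the following (variable names are the ones cited in this PR, values match the printed result; whether they are set in the shell, a .env file, or before import depends on your deployment):

import os

# Hypothetical setup for the verification above; set before lightrag.api.config is imported.
os.environ["OPENAI_LLM_TEMPERATURE"] = "0.7"
os.environ["OPENAI_LLM_MAX_TOKENS"] = "9000"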

Impact

  • Before: All OpenAI/Gemini/Ollama LLM binding options were ignored
  • After: Environment variables are properly loaded and applied to LLM calls

Commit 1 (a000bdf0):

• Extract binding value determination logic
• Consolidate env var fallback handling
• Simplify conditional option registration
• Improve code readability and maintainability
• Use consistent parsing for both bindings

Commit 2 (a555690c):

• Override __getattribute__ instead of __getattr__
• Handle __dict__ access for vars() calls
• Maintain backward compatibility
• Support provider config extraction
• Add explicit global declarations
@danielaskdd
Collaborator Author

@codex review

@chatgpt-codex-connector

Codex Review: Didn't find any major issues. Chef's kiss.

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you:

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

danielaskdd merged commit 4e7bcec into HKUDS:main on Jan 15, 2026
3 checks passed
danielaskdd deleted the fix-env branch on January 15, 2026 at 11:49
cleo-ia added a commit to cleo-intelligence/LightRAG-MT that referenced this pull request Jan 16, 2026
Cherry-picked from HKUDS/LightRAG PR HKUDS#2585.

- Fix GlobalArgsProxy to support vars() for binding options extraction
- Refactor argument parsing for binding options to reduce duplication

Before: OPENAI_LLM_TEMPERATURE and similar env vars were silently ignored
After: Environment variables properly loaded and applied to LLM calls

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>