[Cursor] Improve Reasoning Tokens Documentation and Implementation (#99)
* Refactor plan_exec_llm to use centralized LLM query function
- Update plan_exec_llm to use query_llm from llm_api
- Remove redundant LLM client creation and token tracking logic
- Add support for multiple LLM providers and models via CLI arguments
- Simplify token usage tracking by leveraging existing infrastructure
- Remove hardcoded OpenAI-specific code to improve provider flexibility
* [Cursor] Improve Reasoning Tokens Documentation and Implementation
This commit improves the handling and documentation of reasoning tokens across the codebase:
- Added comprehensive docstrings explaining reasoning tokens
- Enhanced query_llm function documentation for provider-specific behaviors
- Fixed token tracking for the o1 model and for non-o1 models
- Improved test coverage and documentation
- Added CHANGELOG.md to track changes
Key technical details:
- Reasoning tokens are o1-specific (OpenAI's most advanced model)
- All other models have reasoning_tokens=None
- Token tracking behavior varies by provider (OpenAI, Anthropic, Gemini)
Testing:
- All 21 tests passing
- Added specific test cases for reasoning tokens
- Improved test documentation and coverage
* update token check logic
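The updated token-check logic can be sketched as a small standalone helper. This is an illustrative sketch, not the repo's exact code: the function name `extract_reasoning_tokens` and the `SimpleNamespace` stand-in for the API usage object are hypothetical.

```python
# Hypothetical helper illustrating the updated token-check logic:
# reasoning tokens are only reported for OpenAI "o"-series models
# (o1, o1-preview, o1-mini, o3, ...); all other models get None.
from types import SimpleNamespace

def extract_reasoning_tokens(usage, model: str):
    """Return usage.reasoning_tokens for o-series models, else None."""
    if model.lower().startswith("o"):
        return getattr(usage, "reasoning_tokens", None)
    return None

# SimpleNamespace stands in for the provider's usage object.
usage = SimpleNamespace(reasoning_tokens=128)
print(extract_reasoning_tokens(usage, "o1-preview"))  # 128
print(extract_reasoning_tokens(usage, "gpt-4o"))      # None
```

Note that the prefix check deliberately matches any model whose name starts with "o"; the commit message above acknowledges this can be narrowed to specific models later.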
.cursorrules: 1 addition & 0 deletions
@@ -127,6 +127,7 @@ If needed, you can further use the `web_scraper.py` file to scrape the web page
 - When using seaborn styles in matplotlib, use 'seaborn-v0_8' instead of 'seaborn' as the style name due to recent seaborn version changes
 - Use `gpt-4o` as the model name for OpenAI. It is the latest GPT model and has vision capabilities as well. `o1` is the most advanced and expensive model from OpenAI. Use it when you need to do reasoning, planning, or get blocked.
 - Use `claude-3-5-sonnet-20241022` as the model name for Claude. It is the latest Claude model and has vision capabilities as well.
+- When running Python scripts that import from other local modules, use `PYTHONPATH=.` to ensure Python can find the modules. For example: `PYTHONPATH=. python tools/plan_exec_llm.py` instead of just `python tools/plan_exec_llm.py`. This is especially important when using relative imports.
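The `PYTHONPATH=.` rule can be verified with a tiny throwaway layout. The `demo/` files below are hypothetical, created only for illustration; they are not part of the repo.

```shell
# Build a minimal repo-like layout: a local module at the root
# and a script under tools/ that imports it.
mkdir -p demo/tools
echo 'VALUE = 42' > demo/mylib.py
printf 'from mylib import VALUE\nprint(VALUE)\n' > demo/tools/show.py
cd demo
# Without PYTHONPATH=. this fails with ModuleNotFoundError, because
# Python puts tools/ (the script's directory), not the repo root, on sys.path.
PYTHONPATH=. python tools/show.py
```

Running the last command prints `42`; dropping the `PYTHONPATH=.` prefix reproduces the import error the rule warns about.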
reasoning_tokens = response.usage.reasoning_tokens if model.lower().startswith("o") else None  # Only checks if model starts with "o", e.g., o1, o1-preview, o1-mini, o3, etc. Can update this logic to specific models in the future.
-parser = argparse.ArgumentParser(description='Query OpenAI o1 model with project plan context')
+parser = argparse.ArgumentParser(description='Query LLM with project plan context')
 parser.add_argument('--prompt', type=str, help='Additional prompt to send to the LLM', required=False)
 parser.add_argument('--file', type=str, help='Path to a file whose content should be included in the prompt', required=False)
+parser.add_argument('--provider', choices=['openai', 'anthropic', 'gemini', 'local', 'deepseek', 'azure'], default='openai', help='The API provider to use')
+parser.add_argument('--model', type=str, help='The model to use (default depends on provider)')
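Assembled from the added lines, the new CLI surface can be reproduced in a standalone sketch. The argument names, choices, and help strings come from the diff; the `build_parser` wrapper is illustrative.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Mirrors the arguments from the plan_exec_llm.py diff.
    parser = argparse.ArgumentParser(description='Query LLM with project plan context')
    parser.add_argument('--prompt', type=str, required=False,
                        help='Additional prompt to send to the LLM')
    parser.add_argument('--file', type=str, required=False,
                        help='Path to a file whose content should be included in the prompt')
    parser.add_argument('--provider',
                        choices=['openai', 'anthropic', 'gemini', 'local', 'deepseek', 'azure'],
                        default='openai', help='The API provider to use')
    parser.add_argument('--model', type=str,
                        help='The model to use (default depends on provider)')
    return parser

args = build_parser().parse_args(['--provider', 'anthropic', '--prompt', 'hello'])
print(args.provider, args.model)  # anthropic None
```

Because `--model` has no default, downstream code is expected to pick a provider-appropriate default when `args.model` is `None`.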