A powerful command-line interface for the ThinkingModels project that allows you to solve problems using AI and thinking models.
Ensure you have the required dependencies installed:
```bash
pip install click colorama rich requests
```
- Check available commands:

  ```bash
  python thinking_models.py --help
  ```

- View available thinking models:

  ```bash
  python thinking_models.py models
  ```

- Test your setup:

  ```bash
  python thinking_models.py test
  ```

- Start interactive mode:

  ```bash
  python thinking_models.py interactive
  ```
Set these environment variables for API access:

```bash
# Required
LLM_API_URL=https://your-llm-api-endpoint.com

# Optional
LLM_API_KEY=your-api-key-here
LLM_MODEL_NAME=gpt-3.5-turbo
LLM_TEMPERATURE=0.7
LLM_MAX_TOKENS=2000
THINKING_MODELS_DIR=models
```

You can also configure settings via command-line options:
```bash
python thinking_models.py --api-url https://api.example.com --model gpt-4 --temperature 0.5 query "How can I improve productivity?"
```

Start an interactive session where you can ask questions and get responses:
```bash
python thinking_models.py interactive
```

Interactive Commands:
- `help` - Show help information
- `models` - List available thinking models
- `config` - Show current configuration
- `quit`/`exit`/`q` - Exit interactive mode
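The command dispatch for the interactive loop above can be sketched in a few lines of Python. This is a hypothetical illustration of the documented commands, not the CLI's actual implementation; the placeholder print statements stand in for the real handlers:

```python
def handle_command(line):
    """Dispatch one line of interactive input; return False to exit the loop."""
    cmd = line.strip().lower()
    if cmd in ("quit", "exit", "q"):
        return False          # exit interactive mode
    if cmd == "help":
        print("Commands: help, models, config, quit/exit/q")
    elif cmd == "models":
        print("(list of thinking models)")
    elif cmd == "config":
        print("(current configuration)")
    else:
        # Anything else is treated as a query to process
        print(f"Processing query: {line.strip()}")
    return True
```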
Process a single query and get a response:
```bash
python thinking_models.py query "How can I improve my startup's marketing strategy?"
```

Options:
- `-o, --output-file FILE` - Save results to a file
- `--output-format [rich|json|plain]` - Output format
Examples:
```bash
# Basic query
python thinking_models.py query "What's the best approach to prioritize tasks?"

# Save to file
python thinking_models.py query "How to reduce costs?" -o results.txt

# JSON output
python thinking_models.py --output-format json query "Investment strategies?"
```

Process multiple queries from a file:
```bash
python thinking_models.py query -f example_queries.txt
```

Batch File Format:
```text
# Comments start with #
How can I improve my startup's marketing strategy?
What's the best approach to analyze large datasets?
Help me prioritize my daily tasks more effectively.
```
Options:
- `-f, --batch-file FILE` - Input file with queries
- `-o, --output-file FILE` - Save results to a file
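Given the format above, a batch file is easy to generate or parse from your own scripts. The helper below is a sketch of the documented rules (blank lines and lines starting with `#` are skipped), not the CLI's internal parser:

```python
def read_batch_queries(path):
    """Read queries from a batch file, skipping blank lines and # comments."""
    queries = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#"):
                queries.append(line)
    return queries
```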
List all available thinking models:

```bash
python thinking_models.py models
```

View current configuration:

```bash
python thinking_models.py config
```

Test your ThinkingModels setup:

```bash
python thinking_models.py test
```

Beautiful, formatted output with tables, panels, and colors:
```text
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Query Information                    ┃
┠──────────────────────────────────────┨
┃ Query: How to improve productivity?  ┃
┃ Models: eisenhower_matrix, agile     ┃
┃ Time: 2.34s                          ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
```
Structured JSON output for programmatic use:
```json
{
  "query": "How to improve productivity?",
  "selected_models": ["eisenhower_matrix", "agile"],
  "solution": "To improve productivity...",
  "processing_time": 2.34,
  "error": null
}
```

Simple text output:
```text
Query: How to improve productivity?
Selected Models: eisenhower_matrix, agile
Processing Time: 2.34s
Solution:
To improve productivity...
```
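The plain format maps directly onto the JSON fields. The helper below is a hypothetical illustration of that mapping (it is not part of the CLI), useful when post-processing saved JSON results:

```python
def render_plain(result):
    """Format a result dict (the JSON schema shown above) as the plain layout."""
    return "\n".join([
        f"Query: {result['query']}",
        f"Selected Models: {', '.join(result['selected_models'])}",
        f"Processing Time: {result['processing_time']}s",
        "Solution:",
        result["solution"],
    ])
```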
```text
$ python thinking_models.py interactive

┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ ThinkingModels Interactive Mode      ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛

✓ 140 thinking models loaded

Your query: How can I manage my time better?

[Processing...]

┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Query Information                    ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
```

Process a batch of business questions and save JSON results:

```bash
python thinking_models.py query -f business_questions.txt -o business_analysis.json --output-format json
```

Use a custom API endpoint and model:

```bash
python thinking_models.py --api-url https://api.openrouter.ai/api/v1 --model anthropic/claude-3-haiku query "Explain machine learning"
```
- **Be Specific**: More specific queries lead to better model selection and solutions.
- **Use Verbose Mode**: Add `-v` for detailed processing information: `python thinking_models.py -v query "Your question here"`
- **Check Model Relevance**: Use the `models` command to see what thinking models are available.
- **Batch Processing**: Use batch files for processing multiple related queries efficiently.
- **Output Formats**: Use JSON format when integrating with other tools or scripts.
- **API Configuration Error**

  ```text
  Error: LLM_API_URL environment variable must be set
  ```

  Solution: Set the `LLM_API_URL` environment variable or use the `--api-url` option.

- **Model Loading Error**

  ```text
  Error loading models: [Errno 2] No such file or directory: 'models'
  ```

  Solution: Ensure the models directory exists or specify the correct path with `--models-dir`.

- **API Connection Error**

  ```text
  LLM API request failed after 3 attempts
  ```

  Solution: Check your API URL, key, and internet connection.
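A wrapper script can guard against the first two errors before ever invoking the CLI. This is a minimal sketch assuming the environment variables documented above and the default `models` directory; it is not part of the CLI itself:

```python
import os

def preflight():
    """Collect configuration problems before calling the CLI."""
    problems = []
    if not os.environ.get("LLM_API_URL"):
        problems.append("LLM_API_URL environment variable must be set")
    # Falls back to the documented default directory name
    models_dir = os.environ.get("THINKING_MODELS_DIR", "models")
    if not os.path.isdir(models_dir):
        problems.append(f"models directory not found: {models_dir}")
    return problems
```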
Use verbose mode for detailed information:

```bash
python thinking_models.py -v test
```

Always run the test command after configuration:

```bash
python thinking_models.py test
```

This will check:
- Model loading (140 models expected)
- API configuration
- API connection
- Full query processing pipeline
```bash
python thinking_models.py --models-dir /path/to/custom/models query "Your question"
```

```bash
# Save both rich and JSON outputs
python thinking_models.py query "Question" -o results.txt
python thinking_models.py --output-format json query "Question" -o results.json
```

Create different environment files:
**.env.development**

```bash
LLM_API_URL=http://localhost:1234/v1
LLM_MODEL_NAME=local-model
```

**.env.production**

```bash
LLM_API_URL=https://api.openai.com/v1
LLM_API_KEY=your-production-key
LLM_MODEL_NAME=gpt-4
```

The CLI can be easily integrated into scripts and workflows:
```bash
#!/bin/bash
# Process business questions
python thinking_models.py query -f business_questions.txt -o business_results.json --output-format json

# Check if successful
if [ $? -eq 0 ]; then
    echo "Processing completed successfully"
    # Process business_results.json with other tools
else
    echo "Processing failed"
    exit 1
fi
```

```python
import subprocess
import json

# Run CLI command
result = subprocess.run([
    'python', 'thinking_models.py',
    '--output-format', 'json',
    'query', 'How to improve team productivity?'
], capture_output=True, text=True)

if result.returncode == 0:
    response = json.loads(result.stdout)
    print(f"Selected models: {response['selected_models']}")
    print(f"Solution: {response['solution']}")
```

For issues and questions:
- Run `python thinking_models.py test` to diagnose problems
- Check that all 140 thinking models are loaded
- Verify API configuration with `python thinking_models.py config`
- Use verbose mode (`-v`) for detailed error information