Automated tool for maintaining LibreChat YAML configurations with up-to-date model lists from 20+ AI providers.
Prerequisites: Python 3.8+ and pip
```bash
# Install dependencies
pip install -r requirements.txt
```

```bash
# For local use, configure API keys
cp .env.example .env
# Edit .env with your API keys
```

Run the main update script:

```bash
python update.py
```

This will:
- Prompt whether to update YAML formatting (usually not required)
- Prompt whether to update model lists
- Create backups before any modifications
- Process configuration files
For automated/scheduled runs without user interaction:
```bash
# Update all YAML files
python update.py --automated
```

Or using environment variables:

```bash
AUTOMATED_MODE=true python update.py
```

For GitHub Actions and other CI/CD pipelines:

```bash
# Runs automated update with YAML validation
cd scripts
python automated_update.py
```

This script:
- Fetches latest models from all providers
- Updates all YAML files
- Validates YAML syntax
- Returns appropriate exit codes:
  - `0`: Success
  - `1`: Update failed
  - `2`: YAML validation failed
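If you wrap `automated_update.py` in your own tooling, these exit codes can be mapped to readable statuses. The wrapper below is an illustrative sketch, not part of the repository:

```python
import subprocess
import sys

# Exit codes as documented for automated_update.py.
EXIT_STATUSES = {0: "success", 1: "update failed", 2: "yaml validation failed"}

def describe_exit(code: int) -> str:
    """Translate an exit code into a human-readable status."""
    return EXIT_STATUSES.get(code, f"unknown exit code {code}")

def run_update() -> int:
    """Run the automated update and report its status on stderr (hypothetical wrapper)."""
    result = subprocess.run([sys.executable, "automated_update.py"])
    print(describe_exit(result.returncode), file=sys.stderr)
    return result.returncode
```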
You can also run individual scripts directly.

Convert YAML formatting:

```bash
python convert_yaml_style.py
```

Update model lists:

```bash
# Interactive mode
python update_models.py
```

This repository includes automated daily model updates via GitHub Actions.
### Add Repository Secrets
Go to your repository Settings → Secrets and variables → Actions, and add the following secrets:
```
AI302_API_KEY
APIPIE_API_KEY
COHERE_API_KEY
DEEPSEEK_API_KEY
FIREWORKS_API_KEY
GLHF_API_KEY
GROQ_API_KEY
HUGGINGFACE_TOKEN
HYPERBOLIC_API_KEY
KLUSTER_API_KEY
MISTRAL_API_KEY
NANOGPT_API_KEY
NVIDIA_API_KEY
OPENROUTER_KEY
PERPLEXITY_API_KEY
SAMBANOVA_API_KEY
TOGETHERAI_API_KEY
UNIFY_API_KEY
XAI_API_KEY
```

Optional for notifications:
```
NOTIFICATION_WEBHOOK   # Slack/Discord webhook URL
```

Optional for automatic deploy webhook:

```
DEPLOY_WEBHOOK_URL     # URL to trigger redeployment after model updates
```
### Workflow Configuration
The workflow is located at `.github/workflows/update-models.yml`.

- **Schedule:** Runs every day at 00:00 UTC
- **Manual Trigger:** Can be triggered manually from the GitHub Actions tab
### How It Works
- **On Success:** Changes are committed directly to the main branch, with a notification
- **On YAML Validation Failure:** Creates a PR for manual review and notifies @Berry-13
- **On Script Failure:** Sends a failure notification
- **Deploy Webhook:** After a successful commit, triggers the deploy webhook (if configured)
### Deploy Webhook
After model updates are committed, the workflow can trigger a deploy webhook:
- **Configuration:** Add `DEPLOY_WEBHOOK_URL` as a repository secret
- **Behavior:** If not configured, the step logs a warning and continues without error
- **Payload:** JSON with `event`, `date`, `repository`, and `run_id` fields
- **Use case:** Trigger redeployment on any platform (Docker server, cloud provider, etc.)
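A receiver (or a local test) can expect a payload shaped like the fields listed above. In this sketch the `event` value and the fallback defaults are assumptions for illustration, not taken from the actual workflow file:

```python
import json
import os
from datetime import datetime, timezone

def build_deploy_payload() -> dict:
    """Assemble a JSON body with the documented fields for DEPLOY_WEBHOOK_URL."""
    return {
        "event": "models-updated",  # assumed event name
        "date": datetime.now(timezone.utc).strftime("%Y-%m-%d"),
        "repository": os.environ.get("GITHUB_REPOSITORY", "<owner>/<repo>"),
        "run_id": os.environ.get("GITHUB_RUN_ID", "0"),
    }

payload_json = json.dumps(build_deploy_payload())
```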
### Customizing the Schedule
Edit `.github/workflows/update-models.yml`:

```yaml
on:
  schedule:
    - cron: '0 0 * * *'
```
Examples:
- Weekly (Mondays): `'0 0 * * 1'`
- Every 6 hours: `'0 */6 * * *'`
- Monthly (1st of month): `'0 0 1 * *'`
The workflow sends notifications on:
- ✅ Success: Models updated and committed
- ⚠️ YAML Validation Failed: PR created for review
- ❌ Failure: Script encountered errors
Configure the webhook URL as the `NOTIFICATION_WEBHOOK` repository secret.
Supported formats:
- Slack incoming webhooks
- Discord webhooks
- Any webhook accepting JSON with a `text` field
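Since every supported webhook accepts a JSON body with a `text` field, a minimal sender can be sketched as follows (the function names are illustrative):

```python
import json
from urllib import request

def build_notification(message: str) -> bytes:
    """Serialize the minimal {"text": ...} body the webhooks expect."""
    return json.dumps({"text": message}).encode("utf-8")

def send_notification(webhook_url: str, message: str) -> None:
    """POST the notification; callers should catch URLError on network failure."""
    req = request.Request(
        webhook_url,
        data=build_notification(message),
        headers={"Content-Type": "application/json"},
    )
    request.urlopen(req)
```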
```
librechat-config-yaml/
├── requirements.txt          # Python dependencies
├── README.md                 # This file
├── scripts/                  # Helper scripts
│   ├── README.md             # Scripts documentation
│   ├── update.py             # Main script
│   ├── convert_yaml_style.py # Convert YAML formatting
│   ├── update_models.py      # Update model lists
│   ├── requirements.txt      # Python dependencies
│   ├── .env.example          # Example environment file
│   ├── .env                  # Your API keys (create this)
│   ├── ai302.py              # Fetch models from 302.AI
│   ├── apipie.py             # Fetch models from APIpie
│   ├── cohere.py             # Fetch models from Cohere
│   ├── deepseek.py           # Fetch models from DeepSeek
│   ├── fireworks.py          # Fetch models from Fireworks
│   ├── github.py             # Fetch models from GitHub Models
│   ├── glhf.py               # Fetch models from GLHF.chat
│   ├── groq.py               # Fetch models from Groq
│   ├── huggingface.py        # Fetch models from HuggingFace
│   ├── hyperbolic.py         # Fetch models from Hyperbolic
│   ├── kluster.py            # Fetch models from Kluster
│   ├── mistral.py            # Fetch models from Mistral
│   ├── nanogpt.py            # Fetch models from NanoGPT
│   ├── nvidia.py             # Fetch models from NVIDIA
│   ├── openrouter.py         # Fetch models from OpenRouter
│   ├── perplexity.py         # Fetch models from Perplexity
│   ├── sambanova.py          # Fetch models from SambaNova
│   ├── togetherai.py         # Fetch models from Together.ai
│   ├── unify.py              # Fetch models from Unify
│   └── xai.py                # Fetch models from xAI
└── *.yaml                    # LibreChat configuration files
```
- All scripts create `.bak` files before modifying any YAML files
- Logs are written to:
  - `convert_yaml.log` for YAML style conversion
  - `update_models.log` for model updates
- Failed operations are logged with detailed error messages
- The scripts will continue processing remaining files if one fails
- A summary is displayed after completion showing:
- Successfully processed files
- Failed operations
- Model count updates
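The `.bak` convention above fits in a few lines; this is a sketch of the idea, not the repository's actual implementation:

```python
import shutil
from pathlib import Path

def backup_before_edit(yaml_path: str) -> Path:
    """Copy e.g. config.yaml to config.yaml.bak before modifying it (sketch)."""
    src = Path(yaml_path)
    bak = src.with_name(src.name + ".bak")
    shutil.copy2(src, bak)  # copy2 also preserves file metadata
    return bak
```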
The tool can update model lists from:
- 302.AI
- APIpie
- Cohere
- Deepseek
- Fireworks
- GitHub Models
- GLHF.chat
- Groq
- HuggingFace
- Hyperbolic
- Kluster
- Mistral
- NanoGPT
- NVIDIA
- OpenRouter
- Perplexity
- SambaNova
- Together.ai
- Unify
- xAI
The provider scripts have been analyzed against their official API documentation:
| Status | Count | Providers |
|---|---|---|
| ✅ Validated | 11 | Cohere, DeepSeek, Fireworks, GitHub, Groq, HuggingFace, Mistral, NVIDIA, OpenRouter, TogetherAI, xAI |
| ⚠️ Issues Found | 4 | APIpie (minor), NanoGPT, Perplexity, SambaNova |
| ❓ Cannot Verify | 5 | 302.AI, GLHF, Hyperbolic, Kluster, Unify |
- Perplexity & SambaNova: Use web scraping instead of APIs (fragile, may break if page structure changes)
- Unify: Uses the `/v0/` API version (potentially experimental)
- 5 providers: Lack public API documentation for verification
- Most scripts: Well implemented, with proper authentication and error handling
These scripts have been verified against official API documentation and use proper API endpoints:
| Provider | Script | API Endpoint |
|---|---|---|
| Cohere | `cohere.py` | `https://api.cohere.com/v1/models` |
| DeepSeek | `deepseek.py` | `https://api.deepseek.com/models` |
| Fireworks | `fireworks.py` | `https://api.fireworks.ai/inference/v1/models` |
| GitHub | `github.py` | `https://models.inference.ai.azure.com/models` |
| Groq | `groq.py` | `https://api.groq.com/openai/v1/models` |
| HuggingFace | `huggingface.py` | `https://huggingface.co/api/models` |
| Mistral | `mistral.py` | `https://api.mistral.ai/v1/models` |
| NVIDIA | `nvidia.py` | `https://integrate.api.nvidia.com/v1/models` |
| OpenRouter | `openrouter.py` | `https://openrouter.ai/api/v1/models` |
| TogetherAI | `togetherai.py` | `https://api.together.xyz/v1/models` |
| xAI | `xai.py` | `https://api.x.ai/v1/models` |
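Many of the endpoints above (Groq, Mistral, OpenRouter, TogetherAI, Fireworks, xAI) return the OpenAI-compatible `{"data": [{"id": ...}]}` response shape, so a generic fetcher for that shape could look like the sketch below. Cohere and HuggingFace use different response shapes, so this does not cover them, and the function names are illustrative:

```python
import json
from typing import List, Optional
from urllib import request

def parse_model_ids(payload: dict) -> List[str]:
    """Extract sorted model IDs from an OpenAI-compatible /models response."""
    return sorted(item["id"] for item in payload.get("data", []))

def fetch_model_ids(endpoint: str, api_key: Optional[str] = None) -> List[str]:
    """GET an OpenAI-compatible /models endpoint; the key is optional for public APIs."""
    headers = {"Authorization": f"Bearer {api_key}"} if api_key else {}
    req = request.Request(endpoint, headers=headers)
    with request.urlopen(req) as resp:
        return parse_model_ids(json.load(resp))
```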
| Provider | Issue | Impact |
|---|---|---|
| APIpie | Minor: No official docs found, but uses standard OpenAI-compatible endpoint | Low |
| NanoGPT | Web scraping from `nano-gpt.com/api` | May break if the page changes |
| Perplexity | Scrapes documentation page instead of API | Fragile, may break |
| SambaNova | Scrapes community docs instead of API | Fragile, may break |
These providers lack public API documentation:
- 302.AI - No public API docs available
- GLHF - No public API docs available
- Hyperbolic - No public API docs available
- Kluster - No public API docs available
- Unify - Uses the `/v0/` endpoint (experimental)
Most provider scripts require valid API keys for testing:
| Provider | Key Required | Notes |
|---|---|---|
| NVIDIA | ❌ No | Public API |
| OpenRouter | ❌ No | Public API |
| Cohere | ✅ Yes | Free tier available |
| DeepSeek | ✅ Yes | - |
| Fireworks | ✅ Yes | - |
| GitHub | ✅ Yes | Uses GITHUB_TOKEN |
| Groq | ✅ Yes | Free tier available |
| HuggingFace | ✅ Yes | Free tier available |
| Mistral | ✅ Yes | - |
| TogetherAI | ✅ Yes | - |
| xAI | ✅ Yes | - |
The following scripts use web scraping instead of official APIs and may break if the source website changes:
| Provider | Source URL | Risk Level |
|---|---|---|
| NanoGPT | `nano-gpt.com/api` | Medium |
| Perplexity | Documentation page | High |
| SambaNova | Community docs | High |
Recommendation: Monitor these scripts for failures and consider implementing official API calls when documentation becomes available.
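One mitigation until official APIs are available is to make the scrapers fail loudly instead of silently writing an empty model list. A sketch of this pattern (the regex targets an assumed page structure; the real scripts use different patterns):

```python
import re
from typing import List

# Assumed page structure; the actual scripts target different markup.
MODEL_PATTERN = re.compile(r'"model"\s*:\s*"([\w./-]+)"')

def extract_models(page_html: str) -> List[str]:
    """Scrape model IDs, raising if the page layout appears to have changed."""
    models = MODEL_PATTERN.findall(page_html)
    if not models:
        raise RuntimeError("No models found; the page structure may have changed")
    return models
```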
The following providers could not be verified due to lack of public API documentation:
- 302.AI (
ai302.py) - GLHF (
glhf.py) - Hyperbolic (
hyperbolic.py) - Kluster (
kluster.py) - Unify (
unify.py) - Uses experimental/v0/API
These scripts appear functional based on code review but cannot be validated against official specifications.
When adding new providers:
- Create a new fetcher script in the scripts directory
- Update the provider list in `update_models.py`
- Add any required API keys to `.env.example`
- Document the API endpoint and any authentication requirements
- Prefer official APIs over web scraping when available
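A new fetcher can mirror the structure the existing scripts appear to share. Everything below (the provider name, endpoint, and environment variable) is a placeholder, and the response is assumed to be OpenAI-compatible:

```python
"""Template for a hypothetical new fetcher, e.g. scripts/exampleai.py."""
import json
import os
from typing import List
from urllib import request

API_ENDPOINT = "https://api.example-ai.invalid/v1/models"  # placeholder endpoint
API_KEY_ENV = "EXAMPLEAI_API_KEY"                          # placeholder; add to .env.example

def parse_ids(payload: dict) -> List[str]:
    """Extract sorted model IDs from an OpenAI-compatible response."""
    return sorted(item["id"] for item in payload.get("data", []))

def fetch_models() -> List[str]:
    """Fetch and parse the provider's model list (requires the API key env var)."""
    headers = {"Authorization": f"Bearer {os.environ[API_KEY_ENV]}"}
    with request.urlopen(request.Request(API_ENDPOINT, headers=headers)) as resp:
        return parse_ids(json.load(resp))
```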