Add OpenRouter provider support with example config #631
sebastiensimon1 wants to merge 1 commit into srbhr:main from
Conversation
1 issue found across 1 file
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them.
<file name="apps/backend/data/config.json">
<violation number="1" location="apps/backend/data/config.json:4">
P1: `config.json` containing API key placeholders is tracked despite `.gitignore` intending to keep it untracked, so real keys could be accidentally committed once edited.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
```json
{
  "provider": "openrouter",
  "model": "openai/gpt-5.2-pro",
  "api_key": "INSERT_OPENROUTER_API",
```
P1: config.json containing API key placeholders is tracked despite .gitignore intending to keep it untracked, so real keys could be accidentally committed once edited.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At apps/backend/data/config.json, line 4:
<comment>`config.json` containing API key placeholders is tracked despite `.gitignore` intending to keep it untracked, so real keys could be accidentally committed once edited.</comment>
<file context>
```diff
@@ -0,0 +1,9 @@
+{
+ "provider": "openrouter",
+ "model": "openai/gpt-5.2-pro",
+ "api_key": "INSERT_OPENROUTER_API",
+ "api_base": null,
+ "api_keys": {
```
</file context>
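One way to address this finding is to untrack the file while keeping a local working copy, then ignore it. The commands below are a sketch using only standard git; the throwaway demo repo reproduces the issue first so the fix can be seen end to end (paths taken from the review, everything else is illustrative):

```shell
set -e
# Throwaway demo repo reproducing the issue, then applying the fix
tmp=$(mktemp -d) && cd "$tmp" && git init -q
mkdir -p apps/backend/data
echo '{"api_key": "placeholder"}' > apps/backend/data/config.json
git add -A
git -c user.email=a@b.c -c user.name=demo commit -qm "track config.json (the bug)"

# The fix: stop tracking the file but keep the local working copy,
# then ignore it so an edited key can never be committed
git rm --cached -q apps/backend/data/config.json
echo "apps/backend/data/config.json" >> .gitignore
git add .gitignore
git -c user.email=a@b.c -c user.name=demo commit -qm "untrack config.json"

git ls-files   # lists .gitignore only; config.json is no longer tracked
```

Shipping the placeholder values as `config.json.example` (as the PR description suggests) avoids the problem entirely, since the real `config.json` then never enters version control.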
Hey @sebastiensimon1, LiteLLM has support for multiple AI providers, including OpenRouter.
@srbhr You're right that LiteLLM supports multiple providers. The config.json isn't about adding provider support; it's about improving the user experience. The problem with a .env-only setup is that users must restart the server every time they change models, which is frustrating when testing different models or switching providers. With config.json, hot reloading becomes possible: users can switch models by editing one file, and the change takes effect immediately with no restart needed.
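The hot-reload behaviour described above can be sketched with a small mtime-based cache. This is a minimal stdlib-only illustration of the idea from the thread; the function name and cache layout are hypothetical, not code from this PR:

```python
import json
import os

# Hypothetical module-level cache: last seen mtime plus the parsed config
_cache = {"mtime": None, "config": None}

def load_config(path="config.json"):
    """Return the parsed config, re-reading the file only when it changed.

    Because the mtime is checked on every call, editing config.json takes
    effect on the next request with no server restart.
    """
    mtime = os.path.getmtime(path)
    if mtime != _cache["mtime"]:
        with open(path) as f:
            _cache["config"] = json.load(f)
        _cache["mtime"] = mtime
    return _cache["config"]
```

A request handler that calls `load_config()` on each request would pick up model or provider changes immediately, at the cost of one `stat` call per request.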
Pull Request Title
Add OpenRouter provider support with example configuration
Related Issue
N/A - Feature enhancement to expand LLM provider options
Description
This PR adds support for OpenRouter as an alternative LLM provider, allowing users to access multiple AI models (OpenAI, Anthropic, Google, Meta, etc.) through a single API endpoint. This gives users more flexibility in model selection and potentially better pricing options.
The implementation includes the changes listed under Proposed Changes below.
Type
Proposed Changes
- Added `apps/backend/data/config.json` with OpenRouter configuration template
- Updated `.gitignore` to ensure `config.json` files are not committed
- Updated `README.md` with OpenRouter setup and configuration instructions

Screenshots / Code Snippets (if applicable)
Example OpenRouter configuration in `config.json.example`:

```json
{
  "provider": "openrouter",
  "model": "openai/gpt-5.2-pro",
  "api_key": "your-openrouter-api-key-here",
  "api_base": "https://openrouter.ai/api/v1",
  "api_keys": {
    "openrouter": "your-openrouter-api-key-here"
  }
}
```
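As a sketch of how a backend might consume this template, the helper below reads the file and builds a LiteLLM-style model id. The function name is hypothetical, and the `openrouter/` prefix is an assumption based on LiteLLM's provider-prefix convention, not code from this PR:

```python
import json

def resolve_model(config_path="config.json"):
    """Read the config template above and return (model_id, api_key).

    Assumption: LiteLLM routes OpenRouter calls via an "openrouter/"
    prefix on the model name. Illustrative only, not the PR's code.
    """
    with open(config_path) as f:
        cfg = json.load(f)
    if cfg["provider"] == "openrouter":
        return "openrouter/" + cfg["model"], cfg["api_key"]
    return cfg["model"], cfg["api_key"]
```

The returned pair could then be passed to whatever completion call the backend already makes, keeping provider selection entirely inside `config.json`.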
Users can choose from various models:

- `openai/gpt-4`
- `anthropic/claude-3-opus`
- `google/gemini-pro`
- `meta-llama/llama-3-70b`

How to Test
- In `config.json`, replace `your-openrouter-api-key-here` with your actual API key

Checklist
Additional Information
Benefits of OpenRouter Integration:
Security Considerations:
- Updated `.gitignore` to prevent real API keys from being committed
- Each user keeps their own local `config.json` with their credentials

Additional Resources: