
Add OpenRouter provider support with example config#631

Open
sebastiensimon1 wants to merge 1 commit into srbhr:main from sebastiensimon1:add-openrouter-support

Conversation


@sebastiensimon1 sebastiensimon1 commented Jan 21, 2026

Pull Request Title

Add OpenRouter provider support with example configuration

Related Issue

N/A - Feature enhancement to expand LLM provider options

Description

This PR adds support for OpenRouter as an alternative LLM provider, allowing users to access multiple AI models (OpenAI, Anthropic, Google, Meta, etc.) through a single API endpoint. This gives users more flexibility in model selection and potentially better pricing options.

The implementation includes:

  • Example configuration file showing OpenRouter setup
  • Updated .gitignore to prevent API key exposure
  • Documentation on how to configure and use OpenRouter

Type

  • Bug Fix
  • Feature Enhancement
  • Documentation Update
  • Code Refactoring
  • Other (please specify):

Proposed Changes

  • Added apps/backend/data/config.json with OpenRouter configuration template
  • Updated .gitignore to ensure config.json files are not committed
  • Updated README.md with OpenRouter setup and configuration instructions
  • Provides secure template that prevents accidental API key commits

Screenshots / Code Snippets (if applicable)

Example OpenRouter configuration in config.json.example:

{
  "provider": "openrouter",
  "model": "openai/gpt-5.2-pro",
  "api_key": "your-openrouter-api-key-here",
  "api_base": "https://openrouter.ai/api/v1",
  "api_keys": {
    "openrouter": "your-openrouter-api-key-here"
  }
}

Users can choose from various models:

  • openai/gpt-4
  • anthropic/claude-3-opus
  • google/gemini-pro
  • meta-llama/llama-3-70b
  • And 100+ more models
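As a minimal sketch (not part of this PR), a backend could load and sanity-check a config with the shape shown above before making any API calls; the helper names below are hypothetical, and the required keys are assumed from the example template:

```python
import json

# Keys the example config.json template above defines; "api_keys" is optional
# in this sketch since "api_key" already carries the active credential.
REQUIRED_KEYS = {"provider", "model", "api_key", "api_base"}
PLACEHOLDER = "your-openrouter-api-key-here"

def validate_config(config: dict) -> list[str]:
    """Return a list of problems found in a config dict (empty list means OK)."""
    problems = [f"missing key: {k}" for k in sorted(REQUIRED_KEYS - config.keys())]
    if config.get("api_key") == PLACEHOLDER:
        problems.append("api_key is still set to the placeholder value")
    return problems

def load_config(path: str) -> dict:
    """Load config.json from disk and raise if it is incomplete or unedited."""
    with open(path) as f:
        config = json.load(f)
    problems = validate_config(config)
    if problems:
        raise ValueError("; ".join(problems))
    return config
```

Failing fast on the placeholder value gives users a clear error instead of an opaque 401 from the provider.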

How to Test

  1. Sign up for an OpenRouter account at https://openrouter.ai/
  2. Generate an API key from the dashboard
  3. Copy the example config file:
   cp apps/backend/data/config.json.example apps/backend/data/config.json
  4. Edit config.json and replace your-openrouter-api-key-here with your actual API key
  5. Choose your preferred model from https://openrouter.ai/models
  6. Run the Resume-Matcher application
  7. Verify successful connection to OpenRouter and proper LLM responses
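The final verification step can be sketched as building a chat-completions request from the config. OpenRouter exposes an OpenAI-compatible API at the api_base shown in the example config; the helper below (hypothetical, not part of the PR) only builds the request and does not send it, so no real key is needed:

```python
def build_chat_request(config: dict, prompt: str) -> tuple[str, dict, dict]:
    """Build (url, headers, payload) for an OpenAI-style chat completion
    against the api_base/model from config.json. Actually sending it is
    left to the caller, e.g. requests.post(url, headers=headers, json=payload)."""
    url = config["api_base"].rstrip("/") + "/chat/completions"
    headers = {
        "Authorization": f"Bearer {config['api_key']}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": config["model"],
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, headers, payload
```

A 200 response with a populated `choices` array would confirm the connection; a 401 usually means the placeholder key was never replaced.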

Checklist

  • The code compiles successfully without any errors or warnings
  • The changes have been tested and verified
  • The documentation has been updated (if applicable)
  • The changes follow the project's coding guidelines and best practices
  • The commit messages are descriptive and follow the project's guidelines
  • All tests (if applicable) pass successfully
  • This pull request has been linked to the related issue (if applicable)

Additional Information

Benefits of OpenRouter Integration:

  • Multiple Providers: Access OpenAI, Anthropic, Google, Meta, and more through one API
  • Cost Flexibility: Compare pricing across providers and choose the most cost-effective option
  • Easy Model Switching: Change models without modifying code
  • Unified Interface: Single API key for all supported models
  • Transparent Pricing: Pay-as-you-go with clear per-token pricing

Security Considerations:

  • Example config file uses placeholder values only
  • Updated .gitignore to prevent real API keys from being committed
  • Users must manually create their own config.json with their credentials
  • Documentation emphasizes keeping API keys private

Contributor

@cubic-dev-ai cubic-dev-ai bot left a comment


1 issue found across 1 file

Prompt for AI agents (all issues)

Check if these issues are valid — if so, understand the root cause of each and fix them.


<file name="apps/backend/data/config.json">

<violation number="1" location="apps/backend/data/config.json:4">
P1: `config.json` containing API key placeholders is tracked despite `.gitignore` intending to keep it untracked, so real keys could be accidentally committed once edited.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

Contributor

@cubic-dev-ai cubic-dev-ai bot Jan 21, 2026

P1: config.json containing API key placeholders is tracked despite .gitignore intending to keep it untracked, so real keys could be accidentally committed once edited.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At apps/backend/data/config.json, line 4:

<comment>`config.json` containing API key placeholders is tracked despite `.gitignore` intending to keep it untracked, so real keys could be accidentally committed once edited.</comment>

<file context>
@@ -0,0 +1,9 @@
+{
+  "provider": "openrouter",
+  "model": "openai/gpt-5.2-pro",
+  "api_key": "INSERT_OPENROUTER_API",
+  "api_base": null,
+  "api_keys": {
</file context>

@srbhr
Owner

srbhr commented Jan 23, 2026

Hey @sebastiensimon1
I'm using LiteLLM for AI calls. Can you please tell me how this is different?

LiteLLM has support for multiple AI providers, including Open Router.

@sebastiensimon1
Author

@srbhr You're right that LiteLLM supports multiple providers. The config.json isn't about adding provider support; it's about improving the user experience.

The problem with the .env-only approach is that users must restart the server every time they change models. This is frustrating when testing different models or switching providers.

With config.json, the solution is hot reloading: users can switch models by editing one file, and the change takes effect immediately with no restart needed.
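The hot-reload idea described here could be sketched with a simple mtime check (a hypothetical helper, not the PR's actual implementation): reread config.json only when the file has changed, so edits take effect on the next request without a server restart.

```python
import json
import os

class ConfigWatcher:
    """Reload config.json whenever its modification time changes,
    so model switches take effect without restarting the server."""

    def __init__(self, path: str):
        self.path = path
        self._mtime = 0.0   # sentinel: forces a load on first get()
        self._config: dict = {}

    def get(self) -> dict:
        mtime = os.path.getmtime(self.path)
        if mtime != self._mtime:        # file changed (or first read)
            with open(self.path) as f:
                self._config = json.load(f)
            self._mtime = mtime
        return self._config
```

Polling the mtime on each access is cheap and avoids a filesystem-watcher dependency, at the cost of one `stat` call per request.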

