Description
Is your feature request related to a problem? Please describe.
Currently, the application only supports predefined API endpoints for OpenAI and Gemini, which limits flexibility when using third-party providers that mimic OpenAI's or Gemini's request/response formats (e.g., self-hosted models, alternative cloud services, or specialized API gateways). This forces users either to modify the application's core code or to abandon third-party options entirely, creating friction for anyone who wants to leverage custom API infrastructure.
Describe the solution you'd like
I would like to see a new feature that allows users to configure custom API URLs for both OpenAI-format and Gemini-format APIs. This feature should include:
A settings panel where users can input a custom base URL for each API type (separate fields for OpenAI-style and Gemini-style endpoints)
Preservation of existing authentication workflows (e.g., API key input) to ensure compatibility with third-party providers that use standard API key authentication
Validation of the custom URL (basic format check) to prevent invalid entries
Fallback options to revert to the default official API URLs if needed
Describe alternatives you've considered
Hardcoding custom URLs: Modifying the application's source code to point at third-party endpoints works temporarily, but it is not sustainable: the changes are lost on updates, and it requires technical expertise.
Using API proxies: Routing requests through a proxy to redirect traffic to custom endpoints adds extra latency and requires managing additional infrastructure, which is overkill for simple use cases.
Third-party plugins: No existing plugins for the application support dynamic URL configuration for these API formats, making this option unavailable.
Additional context
This feature would benefit users in multiple scenarios, such as:
Teams using self-hosted LLMs (e.g., Llama 3 with OpenAI-compatible wrappers) for privacy or compliance reasons
Developers testing against staging environments of third-party API providers
Organizations leveraging specialized API gateways to manage rate limits or logging
No breaking changes to existing functionality are expected; this would simply add an optional configuration layer for advanced users.
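As a concrete illustration of the self-hosted scenario, an OpenAI-format request path is simply appended to whichever base URL is configured, so the same code path serves both the official API and a local wrapper. The function name and the localhost URL below are purely illustrative:

```python
def chat_completions_url(base_url: str) -> str:
    """Build an OpenAI-format chat-completions URL from any base URL."""
    return base_url.rstrip("/") + "/chat/completions"

# Official endpoint:
#   chat_completions_url("https://api.openai.com/v1")
#   -> "https://api.openai.com/v1/chat/completions"
# Self-hosted Llama 3 behind an OpenAI-compatible wrapper (e.g., vLLM):
#   chat_completions_url("http://localhost:8000/v1")
#   -> "http://localhost:8000/v1/chat/completions"
```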