A Rust web service that provides a REST API for AI model interactions using the OpenRouter API.
## Features

- Multiple endpoints for AI model interactions:
  - `/api/chat` for basic chat completions
  - `/api/prompt` for enhanced message formatting
- Support for OpenAI and Anthropic models
- Environment-based configuration
- Health check endpoint
- Proper error handling and validation
- Comprehensive logging
## Prerequisites

- Rust (latest stable version; see the rustup snippet below)
- An OpenRouter API key (get one at [OpenRouter](https://openrouter.ai))
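If you don't already have Rust installed, the official rustup installer sets up the latest stable toolchain:

```bash
# Install the latest stable Rust toolchain via rustup
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```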
## Installation

- Clone the repository:

  ```bash
  git clone <repository-url>
  cd ai-model-switcher
  ```

- Create a `.env` file in the project root:

  ```env
  OPENROUTER_API_KEY=your-api-key-here
  SERVER_HOST=0.0.0.0
  SERVER_PORT=80
  OPENROUTER_API_URL=https://openrouter.ai/api/v1/chat/completions
  ```

- Build the project:

  ```bash
  cargo build
  ```

- Run the server:

  ```bash
  cargo run
  ```

The server will start at `http://0.0.0.0:80` by default.
## API Endpoints

### Health Check

```http
GET /
```

Returns "AI API Server is running" if the server is operational.
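Once the server is running, you can verify it from the command line (assuming the default host and port):

```bash
# Should print "AI API Server is running"
curl http://localhost:80/
```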
### Chat Endpoint

The `/api/chat` endpoint provides basic chat completion functionality.

```http
POST /api/chat
Content-Type: application/json

{
  "model": "openai/gpt-3.5-turbo",
  "messages": [
    {
      "role": "user",
      "content": "What is the meaning of life?"
    }
  ]
}
```
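For example, the same request sent with curl against a local instance (assuming the default port 80):

```bash
curl -X POST http://localhost:80/api/chat \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-3.5-turbo",
    "messages": [
      {"role": "user", "content": "What is the meaning of life?"}
    ]
  }'
```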
### Prompt Endpoint

The `/api/prompt` endpoint provides enhanced message formatting capabilities while maintaining compatibility with all supported models.

```http
POST /api/prompt
Content-Type: application/json

{
  "model": "anthropic/claude-3-5-sonnet",
  "messages": [
    {
      "role": "system",
      "content": [
        {
          "type": "text",
          "text": "You are a helpful assistant."
        }
      ]
    },
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "What is the meaning of life?"
        }
      ]
    }
  ]
}
```
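Equivalently with curl (again assuming a local instance on the default port):

```bash
curl -X POST http://localhost:80/api/prompt \
  -H "Content-Type: application/json" \
  -d '{
    "model": "anthropic/claude-3-5-sonnet",
    "messages": [
      {
        "role": "system",
        "content": [{"type": "text", "text": "You are a helpful assistant."}]
      },
      {
        "role": "user",
        "content": [{"type": "text", "text": "What is the meaning of life?"}]
      }
    ]
  }'
```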
The endpoint supports two content formats:

- Simple string content (same as `/api/chat`):

  ```json
  "content": "Your message here"
  ```

- Structured content with parts:

  ```json
  "content": [
    {
      "type": "text",
      "text": "Your message here"
    }
  ]
  ```

#### Message Roles

- `user`: For user messages
- `assistant`: For AI responses
- `system`: For system instructions
- `tool`: For tool responses (requires `tool_call_id`)
#### Optional Fields

- `name`: Optional identifier for the message sender
- `tool_call_id`: Required for `tool` role messages (example below)
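As an illustrative sketch of how these fields fit together, a `tool` message might look like the following (the `name` and `tool_call_id` values here are placeholders, not values the API prescribes):

```json
{
  "role": "tool",
  "name": "calculator",
  "tool_call_id": "call_abc123",
  "content": "4"
}
```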
## Supported Models

- `openai/gpt-3.5-turbo`
- `openai/gpt-4`
- `anthropic/claude-3-5-sonnet`
- `meta-llama/llama-3.2-3b-instruct:free`
- `meta-llama/llama-3.2-1b-instruct:free`
## Response Format

```json
{
  "message": {
    "role": "assistant",
    "content": "The model's response..."
  }
}
```

## Error Handling

- `400 Bad Request`: When the messages array is empty or no model is specified
- `400 Bad Request`: When an unsupported model is specified
- `500 Internal Server Error`: For server-side errors
## Configuration

- `OPENROUTER_API_KEY`: Your OpenRouter API key (required)
- `SERVER_HOST`: Host to bind the server to (default: `0.0.0.0`)
- `SERVER_PORT`: Port to bind the server to (default: `80`)
- `OPENROUTER_API_URL`: OpenRouter API URL (default: `https://openrouter.ai/api/v1/chat/completions`)
For development, you might want to use different environment variables:
```env
SERVER_HOST=127.0.0.1
SERVER_PORT=3000
```

## Best Practices

When calling the API (a sketch combining these practices follows the list):

- Always check the response status code
- Implement proper timeout handling
- Handle rate limiting appropriately
- Validate input before sending to the API
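A minimal client sketch putting these practices together, assuming the development address above (the retry counts and timeouts are illustrative, not prescriptive):

```bash
#!/usr/bin/env bash
# Sketch: call /api/chat with a timeout, check the status code,
# and back off on rate limiting.
API_URL="http://127.0.0.1:3000/api/chat"
BODY='{"model": "openai/gpt-3.5-turbo", "messages": [{"role": "user", "content": "Hello"}]}'

for attempt in 1 2 3; do
  # --max-time bounds the whole request; -w appends the HTTP status code
  response=$(curl -sS --max-time 30 -w '\n%{http_code}' \
    -X POST "$API_URL" \
    -H "Content-Type: application/json" \
    -d "$BODY")
  status=$(echo "$response" | tail -n 1)
  body=$(echo "$response" | sed '$d')

  case "$status" in
    200) echo "$body"; exit 0 ;;                                        # success
    429) echo "Rate limited; retrying..." >&2; sleep $((attempt * 2)) ;; # back off
    4??) echo "Client error $status: $body" >&2; exit 1 ;;              # don't retry
    *)   echo "Error $status; retrying..." >&2; sleep $((attempt * 2)) ;;
  esac
done
echo "Giving up after 3 attempts" >&2
exit 1
```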
## Contributing

- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
## Support

For support, please open an issue in this repository.