LM Arena Bridge - CURRENTLY EXPERIMENTALLY FIXED DUE TO ANTI-BOT MEASURES BY LMARENA (#27)
A bridge for interacting with LM Arena. This project provides an OpenAI-compatible API endpoint backed by the models available on LM Arena.
- Python 3.x
- Clone the repository:
git clone https://github.com/CloudWaddie/LMArenaBridge.git
- Navigate to the project directory:
cd LMArenaBridge
- Install the required packages:
pip install -r requirements.txt
To use the LM Arena Bridge, you need to get your authentication token from the LM Arena website.
- Open your web browser and go to the LM Arena website.
- Send a message in the chat to any model.
- After the model responds, open the developer tools in your browser (usually by pressing F12).
- Go to the "Application" or "Storage" tab (the name may vary depending on your browser).
- In the "Cookies" section, find the cookies for the LM Arena site.
- Look for a cookie named arena-auth-prod-v1 and copy its value. This is your authentication token (the one starting with base64-).
- Go to the admin portal.
- Login.
- Add the token to the list.
Once you have configured your authentication token, you can run the application:
python src/main.py
The application will start a server on localhost:8000.
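To confirm the bridge is reachable, you can query its OpenAI-compatible API. This is a minimal sketch, assuming the bridge exposes the standard OpenAI-style models listing under /api/v1/models (adjust the path if your build differs):

```python
import requests

# List the models exposed by the bridge (assumes the standard
# OpenAI-style /models route under /api/v1).
resp = requests.get("http://localhost:8000/api/v1/models", timeout=10)
resp.raise_for_status()
for model in resp.json().get("data", []):
    print(model.get("id"))
```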
You can use this project as a backend for OpenWebUI, a user-friendly web interface for Large Language Models.
- Run the LM Arena Bridge: make sure the lmarenabridge application is running (python src/main.py).
- Open OpenWebUI: open the OpenWebUI interface in your web browser.
- Configure the OpenAI Connection:
- Go to your Profile.
- Open the Admin Panel.
- Go to Settings.
- Go to Connections.
- Modify the OpenAI connection.
- Set the API Base URL:
- In the OpenAI connection settings, set the API Base URL to the URL of the LM Arena Bridge API, which is http://localhost:8000/api/v1.
- You can leave the API Key field empty or enter any value. It is not used for authentication by the bridge itself.
- Start Chatting: you should now be able to select and chat with the models available on LM Arena through OpenWebUI.
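Any other OpenAI-compatible client should also work against the bridge. Below is a minimal sketch using the official openai Python package; the API key is a placeholder and the model name is only an example, so substitute a model id that the bridge actually lists:

```python
from openai import OpenAI

# Point the OpenAI client at the bridge instead of api.openai.com.
client = OpenAI(
    base_url="http://localhost:8000/api/v1",
    api_key="not-used",  # placeholder; the bridge itself does not check it
)

response = client.chat.completions.create(
    model="gpt-4o",  # example only; use a model id listed by the bridge
    messages=[{"role": "user", "content": "Hello from LMArenaBridge!"}],
)
print(response.choices[0].message.content)
```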
LMArenaBridge supports sending images to vision-capable models on LMArena. When you send a message with images to a model that supports image input, the images are automatically uploaded to LMArena's R2 storage and included in the request.
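Assuming the bridge accepts the standard OpenAI vision message format (content parts with an image_url entry), an image can be attached as a base64 data URL. A sketch, with the model name as a placeholder for any vision-capable model on LM Arena:

```python
import base64
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/api/v1", api_key="not-used")

# Encode a local PNG as a base64 data URL (keep it under the 10MB limit).
with open("example.png", "rb") as f:
    data_url = "data:image/png;base64," + base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; pick a vision-capable model listed by the bridge
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url", "image_url": {"url": data_url}},
        ],
    }],
)
print(response.choices[0].message.content)
```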
LMArenaBridge includes comprehensive error handling for production use:
- Request Validation: Validates JSON format, required fields, and data types
- Model Validation: Checks model availability and access permissions
- Image Processing: Validates image formats, sizes (max 10MB), and MIME types
- Upload Failures: Gracefully handles image upload failures with retry logic
- Timeout Handling: Configurable timeouts for all HTTP requests (30-120s)
- Rate Limiting: Built-in rate limiting per API key
- Error Responses: OpenAI-compatible error format for easy client integration
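Since errors come back in the OpenAI-compatible shape, clients can branch on the HTTP status code and the error body. A rough sketch, assuming the usual {"error": {"message": ...}} envelope:

```python
import requests

payload = {
    "model": "nonexistent-model",  # deliberately invalid to trigger an error
    "messages": [{"role": "user", "content": "hi"}],
}
resp = requests.post(
    "http://localhost:8000/api/v1/chat/completions",
    json=payload,
    timeout=120,
)
if resp.status_code >= 400:
    # Assumes the OpenAI-style error envelope: {"error": {"message": ..., "type": ...}}
    err = resp.json().get("error", {})
    print(f"Request failed ({resp.status_code}): {err.get('message')}")
else:
    print(resp.json()["choices"][0]["message"]["content"])
```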
Debug mode is OFF by default in production. To enable debugging:
# In src/main.py
DEBUG = True  # Set to True for detailed logging
When debug mode is enabled, you'll see:
- Detailed request/response logs
- Image upload progress
- Model capability checks
- Session management details
Important: Keep debug mode OFF in production to reduce log verbosity and improve performance.
Monitor these key metrics in production:
- API Response Times: Check for slow responses indicating timeout issues
- Error Rates: Track 4xx/5xx errors from /api/v1/chat/completions
- Model Usage: Dashboard shows top 10 most-used models
- Image Upload Success: Monitor image upload failures in logs
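One simple way to watch latency and error rates from the outside is a periodic probe against the chat endpoint. A sketch (the probe interval, model name, and request body are illustrative only):

```python
import time
import requests

URL = "http://localhost:8000/api/v1/chat/completions"
PROBE = {"model": "gpt-4o", "messages": [{"role": "user", "content": "ping"}]}  # example model

while True:
    start = time.monotonic()
    try:
        resp = requests.post(URL, json=PROBE, timeout=120)
        latency = time.monotonic() - start
        print(f"status={resp.status_code} latency={latency:.1f}s")
    except requests.RequestException as exc:
        print(f"probe failed: {exc}")
    time.sleep(60)  # probe once a minute
```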
- API Keys: Use strong, randomly generated API keys (dashboard auto-generates secure keys)
- Rate Limiting: Configure appropriate rate limits per key in dashboard
- Admin Password: Change the default admin password in config.json
- HTTPS: Use a reverse proxy (nginx, Caddy) with SSL for production
- Firewall: Restrict access to dashboard port (default 8000)
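The dashboard auto-generates secure keys, but if you ever need to mint one by hand (for testing or scripting), Python's secrets module produces suitably random values. A sketch; the lmab- prefix is purely illustrative:

```python
import secrets

# 32 bytes of URL-safe randomness; the prefix is cosmetic and can be anything.
api_key = "lmab-" + secrets.token_urlsafe(32)
print(api_key)
```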
"LMArena API error: An error occurred"
- Check that your arena-auth-prod-v1 token is valid
- Verify the cf_clearance cookie is not expired
- Ensure the model is available on LMArena
Image Upload Failures
- Verify image is under 10MB
- Check MIME type is supported (image/png, image/jpeg, etc.)
- Ensure LMArena R2 storage is accessible
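To rule out client-side problems, you can check size and MIME type before sending. A small sketch using only the standard library; the allowed-type set is an assumption, so adjust it to whatever the bridge actually accepts:

```python
import mimetypes
import os

MAX_BYTES = 10 * 1024 * 1024  # 10MB limit
ALLOWED = {"image/png", "image/jpeg"}  # extend with other types the bridge accepts

def check_image(path: str) -> None:
    size = os.path.getsize(path)
    mime, _ = mimetypes.guess_type(path)
    if size > MAX_BYTES:
        raise ValueError(f"{path} is {size} bytes, over the 10MB limit")
    if mime not in ALLOWED:
        raise ValueError(f"{path} has unsupported MIME type {mime}")

check_image("example.png")
```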
Timeout Errors
- Increase the timeout in src/main.py if needed (default 120s)
- Check network connectivity to LMArena
- Consider using streaming mode for long responses
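Streaming returns tokens as they are generated instead of holding one long blocking request open. A sketch with the openai package (model name is an example):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/api/v1", api_key="not-used")

# stream=True yields chunks as they arrive instead of one long blocking response.
stream = client.chat.completions.create(
    model="gpt-4o",  # example only
    messages=[{"role": "user", "content": "Write a long story."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```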
Example nginx reverse proxy configuration with SSL:
server {
listen 443 ssl;
server_name api.yourdomain.com;
ssl_certificate /path/to/cert.pem;
ssl_certificate_key /path/to/key.pem;
location / {
proxy_pass http://localhost:8000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# For streaming responses
proxy_buffering off;
proxy_cache off;
}
}
Create /etc/systemd/system/lmarenabridge.service:
[Unit]
Description=LMArena Bridge API
After=network.target
[Service]
Type=simple
User=youruser
WorkingDirectory=/path/to/lmarenabridge
Environment="PATH=/path/to/venv/bin"
ExecStart=/path/to/venv/bin/python src/main.py
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
Enable and start:
sudo systemctl enable lmarenabridge
sudo systemctl start lmarenabridge
sudo systemctl status lmarenabridge