LMArena scripts to enable hosting an OpenAI compatible API endpoint that interacts with models on LMArena including experimental support for stealth models.

LM Arena Bridge - CURRENTLY EXPERIMENTALLY FIXED DUE TO ANTI-BOT MEASURES BY LMARENA (#27)

Description

A bridge to interact with LM Arena. This project provides an OpenAI compatible API endpoint that interacts with models on LM Arena.

Getting Started

Prerequisites

  • Python 3.x

Installation

  1. Clone the repository:
    git clone https://github.com/CloudWaddie/LMArenaBridge.git
  2. Navigate to the project directory:
    cd LMArenaBridge
  3. Install the required packages:
    pip install -r requirements.txt

Usage

1. Get your Authentication Token

To use the LM Arena Bridge, you need to get your authentication token from the LM Arena website.

  1. Open your web browser and go to the LM Arena website.
  2. Send a message in the chat to any model.
  3. After the model responds, open the developer tools in your browser (usually by pressing F12).
  4. Go to the "Application" or "Storage" tab (the name may vary depending on your browser).
  5. In the "Cookies" section, find the cookies for the LM Arena site.
  6. Look for a cookie named arena-auth-prod-v1 and copy its value. This is your authentication token; it is the one that starts with base64-.

2. Configure the Application

  1. Go to the admin portal.
  2. Login.
  3. Add the token to the list.

3. Run the Application

Once you have configured your authentication token, you can run the application:

python src/main.py

The application will start a server on localhost:8000.

Integration with OpenWebUI

You can use this project as a backend for OpenWebUI, a user-friendly web interface for Large Language Models.

Instructions

  1. Run the LM Arena Bridge: Make sure the LMArenaBridge application is running.

    python src/main.py
  2. Open OpenWebUI: Open the OpenWebUI interface in your web browser.

  3. Configure the OpenAI Connection:

    • Go to your Profile.
    • Open the Admin Panel.
    • Go to Settings.
    • Go to Connections.
    • Modify the OpenAI connection.
  4. Set the API Base URL:

    • In the OpenAI connection settings, set the API Base URL to the URL of the LM Arena Bridge API, which is http://localhost:8000/api/v1.
    • You can leave the API Key field empty or enter any value. It is not used for authentication by the bridge itself.
  5. Start Chatting: You should now be able to select and chat with the models available on LM Arena through OpenWebUI.
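The same connection can be exercised from a script. A minimal sketch using only the Python standard library (the model name below is a placeholder; use any model the bridge lists):

```python
import json
from urllib import request

BRIDGE_URL = "http://localhost:8000/api/v1/chat/completions"

def build_chat_request(model, user_message, api_key="unused"):
    """Build an OpenAI-style chat completion request for the bridge."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    headers = {
        "Content-Type": "application/json",
        # The bridge does not check this key, but OpenAI clients always send one.
        "Authorization": f"Bearer {api_key}",
    }
    return request.Request(
        BRIDGE_URL, data=json.dumps(payload).encode(), headers=headers
    )

# req = build_chat_request("gpt-4o", "Hello!")
# with request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Any other OpenAI-compatible client works the same way: point its base URL at http://localhost:8000/api/v1 and use the usual chat-completions call.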

Image Support

LMArenaBridge supports sending images to vision-capable models on LMArena. When you send a message with images to a model that supports image input, the images are automatically uploaded to LMArena's R2 storage and included in the request.
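On the client side, such a message can be assembled in the OpenAI multimodal format. A sketch, assuming the bridge accepts base64 data URLs the way OpenAI-compatible endpoints typically do (the helper name is ours; the 10 MB limit and MIME types are the ones listed under Error Handling below):

```python
import base64
import mimetypes

MAX_IMAGE_BYTES = 10 * 1024 * 1024  # the bridge rejects images over 10 MB

def image_message(text, image_path):
    """Build an OpenAI-style multimodal user message with an inline image."""
    mime, _ = mimetypes.guess_type(image_path)
    if mime not in ("image/png", "image/jpeg"):
        raise ValueError(f"unsupported MIME type: {mime}")
    with open(image_path, "rb") as f:
        data = f.read()
    if len(data) > MAX_IMAGE_BYTES:
        raise ValueError("image exceeds 10 MB limit")
    b64 = base64.b64encode(data).decode()
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": text},
            {"type": "image_url",
             "image_url": {"url": f"data:{mime};base64,{b64}"}},
        ],
    }
```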

Production Deployment

Error Handling

LMArenaBridge includes comprehensive error handling for production use:

  • Request Validation: Validates JSON format, required fields, and data types
  • Model Validation: Checks model availability and access permissions
  • Image Processing: Validates image formats, sizes (max 10MB), and MIME types
  • Upload Failures: Gracefully handles image upload failures with retry logic
  • Timeout Handling: Configurable timeouts for all HTTP requests (30-120s)
  • Rate Limiting: Built-in rate limiting per API key
  • Error Responses: OpenAI-compatible error format for easy client integration
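The OpenAI-compatible error format follows a standard envelope. A sketch of what such a response body looks like (the field names follow the OpenAI convention; the exact values the bridge fills in are illustrative):

```python
def openai_error(message, err_type="invalid_request_error",
                 param=None, code=None, status=400):
    """Build an OpenAI-compatible error body plus its HTTP status."""
    return status, {
        "error": {
            "message": message,  # human-readable description
            "type": err_type,    # e.g. invalid_request_error, api_error
            "param": param,      # offending request field, if any
            "code": code,        # machine-readable error code, if any
        }
    }

# e.g. openai_error("model not found", param="model",
#                   code="model_not_found", status=404)
```

Because the envelope matches OpenAI's, existing client SDKs surface these errors through their normal exception types.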

Debug Mode

Debug mode is OFF by default in production. To enable debugging:

# In src/main.py
DEBUG = True  # Set to True for detailed logging

When debug mode is enabled, you'll see:

  • Detailed request/response logs
  • Image upload progress
  • Model capability checks
  • Session management details

Important: Keep debug mode OFF in production to reduce log verbosity and improve performance.

Monitoring

Monitor these key metrics in production:

  • API Response Times: Check for slow responses indicating timeout issues
  • Error Rates: Track 4xx/5xx errors from /api/v1/chat/completions
  • Model Usage: Dashboard shows top 10 most-used models
  • Image Upload Success: Monitor image upload failures in logs

Security Best Practices

  1. API Keys: Use strong, randomly generated API keys (dashboard auto-generates secure keys)
  2. Rate Limiting: Configure appropriate rate limits per key in dashboard
  3. Admin Password: Change default admin password in config.json
  4. HTTPS: Use a reverse proxy (nginx, Caddy) with SSL for production
  5. Firewall: Restrict access to dashboard port (default 8000)
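For point 1, a key in the style of the dashboard's auto-generated ones can be produced with Python's secrets module (the sk- prefix and length here are assumptions, not the dashboard's exact format):

```python
import secrets

def generate_api_key(prefix="sk-", nbytes=32):
    """Generate a cryptographically secure, URL-safe API key."""
    # token_urlsafe(32) yields ~43 characters of base64url-encoded randomness.
    return prefix + secrets.token_urlsafe(nbytes)
```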

Common Issues

"LMArena API error: An error occurred"

  • Check that your arena-auth-prod-v1 token is valid
  • Verify cf_clearance cookie is not expired
  • Ensure model is available on LMArena

Image Upload Failures

  • Verify image is under 10MB
  • Check MIME type is supported (image/png, image/jpeg, etc.)
  • Ensure LMArena R2 storage is accessible

Timeout Errors

  • Increase timeout in src/main.py if needed (default 120s)
  • Check network connectivity to LMArena
  • Consider using streaming mode for long responses
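Streaming mode delivers OpenAI-style server-sent events: data: {...} lines terminated by data: [DONE]. Assuming the bridge follows that convention, a client can consume the stream incrementally like this:

```python
import json

def iter_stream_chunks(lines):
    """Yield content deltas from an OpenAI-style SSE response stream."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue  # skip blank lines and comments
        data = line[len("data: "):]
        if data == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:
            yield delta

# for piece in iter_stream_chunks(response_lines):
#     print(piece, end="", flush=True)
```

Reading deltas as they arrive avoids holding a connection open for the full response, which sidesteps most timeout errors on long generations.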

Reverse Proxy Example (Nginx)

server {
    listen 443 ssl;
    server_name api.yourdomain.com;
    
    ssl_certificate /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;
    
    location / {
        proxy_pass http://localhost:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        
        # For streaming responses
        proxy_buffering off;
        proxy_cache off;
    }
}

Running as a Service (systemd)

Create /etc/systemd/system/lmarenabridge.service:

[Unit]
Description=LMArena Bridge API
After=network.target

[Service]
Type=simple
User=youruser
WorkingDirectory=/path/to/lmarenabridge
Environment="PATH=/path/to/venv/bin"
ExecStart=/path/to/venv/bin/python src/main.py
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target

Enable and start:

sudo systemctl enable lmarenabridge
sudo systemctl start lmarenabridge
sudo systemctl status lmarenabridge
