A high-performance, multi-protocol AI proxy server with intelligent account rotation, quota management, and a beautiful web console. Built with Rust for maximum performance and reliability.
- Multi-Protocol Support: Compatible with OpenAI, Anthropic (Claude), and Gemini API formats
- Intelligent Account Rotation: Automatically switches between accounts based on quota, rate limits, and session stickiness
- Model Router: Map client-requested models to your preferred upstream targets
- Multi-API-Key Management: Create multiple API keys with isolated usage tracking
- WebAuthn Authentication: Secure passkey-based authentication for the web console
- Real-time Monitoring: Track requests, tokens, and quota usage across all accounts
- Docker Ready: Easy deployment with pre-built Docker images
docker run -d --name antiproxy \
  -p 8045:8045 \
  -e ANTI_PROXY_BIND=0.0.0.0 \
  -e ANTI_PROXY_ALLOW_LAN=1 \
  -v antiproxy-data:/root/.AntiProxy \
  linwanxiaoyehua/antiproxy:latest

Open the web console: http://localhost:8045
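Before opening the console, you can confirm the container started cleanly with the usual Docker checks (the container name matches the --name used above):

# Confirm the container is running and inspect its startup logs
docker ps --filter name=antiproxy
docker logs antiproxy
# The console should respond on the published port (expect an HTTP status code)
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8045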
- Install Rust (stable toolchain):
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
- Clone and build:
git clone https://github.com/user/Antigravity-Web.git
cd Antigravity-Web
cargo build --release
- Run the server:
cargo run --release
- Open the web console:
http://localhost:8045
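If you plan to run the release binary directly (for example with the systemd unit shown later), copy it somewhere permanent first. The binary name below is an assumption based on the ExecStart path in that unit; check target/release/ for the name your build actually produces:

# Assumes the release binary is named `antiproxy` (verify under target/release/)
sudo install -Dm755 target/release/antiproxy /opt/antiproxy/antiproxy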
The main dashboard shows:
- Summary Stats: Total accounts, average Gemini/Claude quota remaining
- Current Account: The account currently being used for API requests (updates in real-time as requests are made)
- Other Accounts: Quick view of all accounts with their quota status
- Add Account: Two methods to add new Google accounts:
  - OAuth Login (Recommended): Click "Start OAuth Login" and authorize with your Google account
  - Refresh Token: Manually paste a refresh token if you have one
Configure how the proxy handles API requests:
- Model Router: Map model families to upstream targets
  - Claude 4.5 Series (Opus, Sonnet, Haiku)
  - Claude 3.5 Series (Sonnet, Haiku)
  - GPT-4 Series (o1, o3, gpt-4)
  - GPT-4o / 3.5 Series (4o, turbo, mini)
  - GPT-5 Series
  - Custom mappings for exact model name overrides
- Multi-Protocol Support:
  - OpenAI: `/v1/chat/completions`, `/v1/completions`, `/v1/responses`
  - Anthropic: `/v1/messages`
  - Gemini: `/v1beta/models/...` (see the request sketch after this list)
- Code Examples: Ready-to-use integration examples for each protocol
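The Gemini path above follows the Gemini REST convention, so a request sketch looks like this. Treat it as a sketch under assumptions: the model name is illustrative and the key is passed via the x-api-key header described under Authentication; adjust both to your setup.

# Hypothetical Gemini-format request (illustrative model name)
curl "http://localhost:8045/v1beta/models/gemini-2.0-flash:generateContent" \
  -H "Content-Type: application/json" \
  -H "x-api-key: sk-your-antiproxy-key" \
  -d '{
    "contents": [{"parts": [{"text": "Hello!"}]}]
  }'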
Manage all your Google accounts:
- View account email, status (Active/Disabled), subscription tier
- See Gemini and Claude quota percentages
- Actions: Set as current, refresh quota, disable/enable, delete
- Drag to reorder account priority
Create and manage multiple API keys:
- Total Usage: Aggregated stats across all keys (requests, tokens)
- Per-Key Stats: Individual usage tracking for each API key
- Actions: Copy key, regenerate, enable/disable, reset usage, delete
- Appearance: Light/Dark/System theme
- Danger Zone: Reset authentication (removes all passkeys)
Configure Claude Code to use AntiProxy as the API endpoint:
# Set the API endpoint to your AntiProxy server
export ANTHROPIC_BASE_URL="http://localhost:8045"
# Set your AntiProxy API key (create one in the API Keys page)
export ANTHROPIC_API_KEY="sk-your-antiproxy-key"
# Run Claude Code as normal
claude

Or add to your shell profile (~/.bashrc, ~/.zshrc):
export ANTHROPIC_BASE_URL="http://localhost:8045"
export ANTHROPIC_API_KEY="sk-your-antiproxy-key"

Configure Codex to use AntiProxy:
# Set the API endpoint
export OPENAI_BASE_URL="http://localhost:8045/v1"
# Set your AntiProxy API key
export OPENAI_API_KEY="sk-your-antiproxy-key"
# Run Codex as normal
codex

Configure Gemini CLI to use AntiProxy:
# Set the API endpoint
export GEMINI_API_BASE="http://localhost:8045"
# Set your AntiProxy API key (if auth is enabled)
export GEMINI_API_KEY="sk-your-antiproxy-key"
# Run Gemini CLI
gemini

from openai import OpenAI
client = OpenAI(
    base_url="http://localhost:8045/v1",
    api_key="sk-your-antiproxy-key"
)

response = client.chat.completions.create(
    model="gpt-4o",  # Will be routed based on your Model Router config
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)

import anthropic
client = anthropic.Anthropic(
    base_url="http://localhost:8045",
    api_key="sk-your-antiproxy-key"
)

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello!"}]
)
print(message.content[0].text)

# OpenAI-compatible endpoint
curl http://localhost:8045/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-your-antiproxy-key" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
# Anthropic-compatible endpoint
curl http://localhost:8045/v1/messages \
  -H "Content-Type: application/json" \
  -H "x-api-key: sk-your-antiproxy-key" \
  -H "anthropic-version: 2023-06-01" \
  -d '{
    "model": "claude-sonnet-4-20250514",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
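Streaming requests go through the same endpoints; the reverse-proxy notes later in this README disable buffering for exactly this reason. A minimal sketch against the OpenAI-compatible endpoint, assuming the routed upstream supports streaming:

# Request a streamed (SSE) response; -N stops curl from buffering output
curl -N http://localhost:8045/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-your-antiproxy-key" \
  -d '{
    "model": "gpt-4o",
    "stream": true,
    "messages": [{"role": "user", "content": "Hello!"}]
  }'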
# Run in development mode
cargo run
# Run with hot-reload (requires cargo-watch)
cargo install cargo-watch
cargo watch -x run

Create docker-compose.yml:
version: '3.8'
services:
  antiproxy:
    image: linwanxiaoyehua/antiproxy:latest
    container_name: antiproxy
    restart: unless-stopped
    ports:
      - "8045:8045"
    environment:
      - ANTI_PROXY_BIND=0.0.0.0
      - ANTI_PROXY_ALLOW_LAN=1
    volumes:
      - antiproxy-data:/root/.AntiProxy
volumes:
  antiproxy-data:

Run:
docker-compose up -d

Create /etc/systemd/system/antiproxy.service:
[Unit]
Description=AntiProxy AI Gateway
After=network.target
[Service]
Type=simple
User=antiproxy
WorkingDirectory=/opt/antiproxy
ExecStart=/opt/antiproxy/antiproxy
Restart=always
RestartSec=5
Environment=ANTI_PROXY_BIND=0.0.0.0
Environment=ANTI_PROXY_ALLOW_LAN=1
[Install]
WantedBy=multi-user.target

Enable and start:
sudo systemctl daemon-reload
sudo systemctl enable antiproxy
sudo systemctl start antiproxy
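Verify the service afterwards:

# Check service health and follow its logs
sudo systemctl status antiproxy
journalctl -u antiproxy -f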
server {
    listen 443 ssl http2;
    server_name api.example.com;

    ssl_certificate /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;

    location / {
        proxy_pass http://127.0.0.1:8045;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # For streaming responses
        proxy_buffering off;
        proxy_cache off;
        proxy_read_timeout 600s;
    }
}
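With a reverse proxy like this in front, clients use the public HTTPS URL instead of http://localhost:8045 (api.example.com is the placeholder hostname from the nginx example above):

# Point tools at the TLS endpoint exposed by the reverse proxy
export ANTHROPIC_BASE_URL="https://api.example.com"
export OPENAI_BASE_URL="https://api.example.com/v1"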
| Variable | Description | Default |
|---|---|---|
| `ANTI_PROXY_BIND` | Bind address | `127.0.0.1` |
| `ANTI_PROXY_ALLOW_LAN` | Allow LAN access (`1`/`true`/`yes`/`on`) | `false` |
| `ANTI_PROXY_ENABLED` | Force enable proxy | `false` |
| `ANTI_PROXY_PORT` | Server port | `8045` |
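For example, to expose a source build on the LAN on a non-default port:

# Override bind address, LAN access, and port at launch
ANTI_PROXY_BIND=0.0.0.0 ANTI_PROXY_ALLOW_LAN=1 ANTI_PROXY_PORT=9000 cargo run --release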
web_config.json is created automatically on first run. You can adjust:
{
  "port": 8045,
  "allow_lan_access": false,
  "auth_mode": "none",
  "anthropic_mapping": { ... },
  "openai_mapping": { ... },
  "custom_mapping": { ... }
}

All data is stored in ~/.AntiProxy/:
~/.AntiProxy/
├── accounts/ # Google account credentials
│ ├── {id}.json
│ └── ...
├── account_index.json # Account list and current account
├── web_config.json # Proxy configuration
├── api_keys.db # API keys database
├── proxy_logs.db # Request logs database
└── webauthn.db # WebAuthn credentials
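Because everything lives under this one directory, backing up or migrating an installation is just a matter of copying it (ideally with the server stopped, so the databases are not copied mid-write):

# Archive the whole data directory for backup or migration
tar czf antiproxy-backup-$(date +%F).tar.gz -C ~ .AntiProxy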
AntiProxy supports multiple authentication methods:
- No Auth: Open access (suitable for local development)
- API Key: Require an `Authorization: Bearer <key>` or `x-api-key` header
- WebAuthn: Passkey-based authentication for the web console
Create API keys in the API Keys page to enable authenticated access.
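Assuming the key check accepts either header on any proxy endpoint (as the list above suggests), the x-api-key form can also be used with the OpenAI-compatible route:

# Alternative to the Authorization: Bearer header shown earlier
curl http://localhost:8045/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "x-api-key: sk-your-antiproxy-key" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello!"}]}'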
Ensure the Monitor is enabled: check Settings in the web console or restart the server (the Monitor is enabled by default).
Click "Refresh All Quotas" in the Overview page to force-refresh quota data from Google.
AntiProxy automatically rotates accounts when rate limits are hit. If all accounts are limited:
- Wait for the rate limit to reset (typically a few minutes)
- Add more accounts to increase capacity
Check that:
- The server is running (`cargo run` or `docker ps`)
- The port is correct (default: 8045)
- The firewall allows the connection
- For LAN access, `ANTI_PROXY_ALLOW_LAN=1` is set
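A couple of quick commands cover most of these checks:

# From the client machine, verify the port is reachable (replace <server-ip>)
curl -v http://<server-ip>:8045/
# On the server, confirm AntiProxy is listening on the expected interface
ss -tlnp | grep 8045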
Inspired by Antigravity-Manager by lbjlaq, with some code adapted from the original project.
This project is licensed under the same terms (CC BY-NC-SA 4.0). See LICENSE for details.