A modular system for text generation using multiple AI providers with automatic API key rotation.
- 🔄 Automatic API key rotation
- 🔌 Dynamic provider integration
- 🌐 Clean architecture with separation of concerns
- 🚀 Real-time streaming responses
- 📝 Markdown rendering support
- 🛠 Easy provider integration
The system follows a clean architecture pattern with these components:
- `core/` - Core system components
  - `config.py` - Configuration settings
  - `exceptions.py` - Error handling
  - `key_manager.py` - API key management
  - `text_generation.py` - Core business logic
- `models/` - Provider-specific implementations
  - `gemini.py` - Google Gemini implementation
  - `groq.py` - Groq implementation
  - `cohere.py` - Cohere implementation
  - etc.
- `index.html` - Web interface with streaming support
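Dynamic provider integration means the server can pick up any module dropped into `backend/models/`. A minimal sketch of how such discovery could work (the actual logic lives in `core/`; `discover_providers` and its return shape are assumptions, not the real implementation):

```python
import pkgutil


def discover_providers(models_dir: str) -> dict:
    """Map provider name -> importable module name for every module in models_dir.

    Assumes each file in backend/models/ exposes run_model_stream().
    """
    providers = {}
    for info in pkgutil.iter_modules([models_dir]):
        # e.g. "gemini.py" registers a provider named "gemini"
        providers[info.name] = f"models.{info.name}"
    return providers
```

At startup the server could then `importlib.import_module` each entry and verify it defines `run_model_stream` before exposing it.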
- Install dependencies:

  ```
  pip install -r backend/requirements.txt
  ```

- Configure API keys in `backend/apikeys.csv`:

  ```
  GROQ API ,GEMINI API,COHERE AI,Samba api key
  key1,key1,key1,key1
  key2,key2,key2,key2
  ...
  ```

- Configure models in `backend/MODELS.csv`:

  ```
  GROQ MODELS,COHERE,SambaNova,GEMINI
  model1,model1,model1,model1
  model2,model2,model2,model2
  ...
  ```

- Start the server:

  ```
  python backend/app.py
  ```

- Access the web interface at `http://localhost:8000/static/index.html`
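The column-per-provider CSV layout above maps naturally onto per-provider key lists. A sketch of how such a file could be parsed (the real loader lives in `core/key_manager.py`; `load_keys` is a hypothetical name, and skipping blank cells is an assumption):

```python
import csv
import io


def load_keys(csv_text: str) -> dict:
    """Parse a column-per-provider CSV into {provider header: [values]}.

    Blank cells are skipped, so providers may have different key counts.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    keys = {header.strip(): [] for header in reader.fieldnames}
    for row in reader:
        for header, value in row.items():
            if value and value.strip():
                keys[header.strip()].append(value.strip())
    return keys
```

The same shape works for `MODELS.csv`, since both files share the column-per-provider convention.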
- Add a provider column to `MODELS.csv`:

  ```
  GROQ MODELS,COHERE,NEW_PROVIDER
  model1,model1,new-model-1
  model2,model2,new-model-2
  ```

- Add API keys to `apikeys.csv`:

  ```
  GROQ API ,COHERE AI,NEW_PROVIDER API
  key1,key1,new-key-1
  key2,key2,new-key-2
  ```

- Create a provider implementation in `backend/models/`:
  ```python
  # backend/models/new_provider.py
  async def run_model_stream(api_key: str, model: str, prompt: str):
      """
      Implement the streaming response for the new provider.

      Args:
          api_key: The API key to use
          model: Model identifier
          prompt: User input prompt

      Yields:
          str: Generated text chunks
      """
      try:
          # Provider-specific implementation
          async for chunk in your_implementation():
              yield chunk
      except Exception as e:
          raise RuntimeError(f"Error with provider: {e}") from e
  ```

- Restart the server - the system will automatically:
- Detect the new provider
- Load its models
- Set up API key rotation
- Make it available in the UI
- `GET /models` - List available models
- `POST /generate` - Generate text with a streaming response:

  ```json
  {
    "model": "model-name",
    "prompt": "Your prompt here"
  }
  ```
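A call to the streaming endpoint might look like this - a standard-library sketch (the exact response framing, plain chunks vs. SSE, depends on the server implementation; `build_request` is a hypothetical helper):

```python
import json
import urllib.request


def build_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a POST /generate request carrying the JSON body shown above."""
    payload = json.dumps({"model": model, "prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


if __name__ == "__main__":
    req = build_request("http://localhost:8000", "model-name", "Your prompt here")
    # Print streamed chunks as they arrive
    with urllib.request.urlopen(req) as resp:
        for chunk in resp:
            print(chunk.decode("utf-8"), end="")
```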
The system includes comprehensive error handling:
- API key validation
- Model availability checks
- Provider module validation
- Request validation
- Streaming error handling
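For illustration, `core/exceptions.py` could define a small hierarchy supporting these checks (the class and function names here are assumptions, not the actual API):

```python
class TextGenerationError(Exception):
    """Base class for all text generation errors."""


class InvalidAPIKeyError(TextGenerationError):
    """Raised when an API key is rejected by the provider."""


class ModelNotFoundError(TextGenerationError):
    """Raised when a requested model is not listed in MODELS.csv."""


def check_model(model: str, available: set) -> None:
    """Model availability check: fail fast before calling a provider."""
    if model not in available:
        raise ModelNotFoundError(f"Unknown model: {model}")
```

A shared base class lets the request handler catch one exception type and map it to a clean error response.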
Monitor API key rotation and usage through server logs:
```
[INFO] Using key #1/5 (AIza...22_o)
[INFO] Next request will use key #2/5
```
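The rotation shown in these logs could be produced by a simple round-robin manager along these lines (a sketch only; the actual implementation lives in `core/key_manager.py`, and the class and method names here are assumptions):

```python
import itertools


class KeyManager:
    """Round-robin rotation over one provider's API keys."""

    def __init__(self, keys: list):
        self.keys = keys
        self._cycle = itertools.cycle(range(len(keys)))

    @staticmethod
    def mask(key: str) -> str:
        """Shorten a key for safe logging, e.g. 'AIza...22_o'."""
        return f"{key[:4]}...{key[-4:]}" if len(key) > 8 else "****"

    def next_key(self) -> str:
        i = next(self._cycle)
        print(f"[INFO] Using key #{i + 1}/{len(self.keys)} ({self.mask(self.keys[i])})")
        return self.keys[i]
```

Cycling indices rather than keys keeps the `#1/5` counter in the log in step with the position in the rotation.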
MIT License