---
title: "Bring Your Own Key (BYOK) Guide"
description: "Configure and use your own API keys with CAMEL-AI to access various LLM providers"
icon: key
---

## What is BYOK?

**Bring Your Own Key (BYOK)** allows you to use your personal API keys from various LLM providers (OpenAI, Anthropic, Google, etc.) with CAMEL-AI. This gives you:

- **Full Control**: Use your own billing accounts and usage quotas
- **Flexibility**: Switch between providers without changing your code structure
- **Security**: Your API keys stay under your control

## OpenAI API Key Configuration

### Step 1: Obtain Your API Key

1. Go to the [OpenAI Platform](https://platform.openai.com/)
2. Sign in or create an account
3. Navigate to the **API Keys** section
4. Click **Create new secret key**
5. Copy and securely store your API key

### Step 2: Configure Your Environment

Choose one of the following methods:

**Option A: Environment Variable (Recommended)**

```bash
# macOS/Linux (zsh; use ~/.bashrc for bash)
echo 'export OPENAI_API_KEY="sk-your-api-key-here"' >> ~/.zshrc
source ~/.zshrc

# Windows PowerShell (takes effect in new shells; add /M for a machine-wide
# variable, which requires an elevated prompt)
setx OPENAI_API_KEY "sk-your-api-key-here"
```

**Option B: .env File**

Create a `.env` file in your project root:

```dotenv
OPENAI_API_KEY=sk-your-api-key-here
OPENAI_API_BASE_URL=https://api.openai.com/v1  # Optional custom endpoint
```
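A `.env` file is not read automatically; your application must load it at startup. The snippet below is a minimal, dependency-free sketch of what loaders such as the `python-dotenv` package do (in real projects, prefer that package):

```python
import os

def load_dotenv_minimal(path: str = ".env") -> None:
    """Minimal .env loader: reads KEY=VALUE lines, skipping blanks and '#' comments.
    Existing environment variables are not overwritten."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip().strip('"'))
```

Calling `load_dotenv_minimal()` before creating your model makes `OPENAI_API_KEY` visible through `os.environ`.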

**Option C: Direct Parameter**

```python
from camel.models import ModelFactory
from camel.types import ModelPlatformType, ModelType

model = ModelFactory.create(
    model_platform=ModelPlatformType.OPENAI,
    model_type=ModelType.GPT_4O_MINI,
    api_key="sk-your-api-key-here",  # Pass directly
)
```

### Step 3: Verify Your Configuration

```python
from camel.agents import ChatAgent
from camel.configs import ChatGPTConfig
from camel.models import ModelFactory
from camel.types import ModelPlatformType, ModelType

# Create a model instance
model = ModelFactory.create(
    model_platform=ModelPlatformType.OPENAI,
    model_type=ModelType.GPT_4O_MINI,
    model_config_dict=ChatGPTConfig(temperature=0.2).as_dict(),
)

# Test with a simple agent
agent = ChatAgent(
    system_message="You are a helpful assistant.",
    model=model,
)

response = agent.step("Hello!")
print(response.msg.content)
```

## Supported Model Configuration Fields

### ChatGPTConfig (OpenAI)

| Field | Type | Description |
|-------|------|-------------|
| `temperature` | float (0-2) | Controls randomness. Higher = more creative |
| `top_p` | float (0-1) | Nucleus sampling threshold |
| `max_tokens` | int | Maximum response length |
| `stop` | str/list | Stop sequences (up to 4) |
| `presence_penalty` | float (-2 to 2) | Penalize new topics |
| `frequency_penalty` | float (-2 to 2) | Penalize repetition |
| `response_format` | dict | Output format (e.g., JSON mode) |
| `tool_choice` | str/dict | Tool calling behavior |
| `reasoning_effort` | str | For o1/o3 models: "low", "medium", "high" |
| `extra_headers` | dict | Custom HTTP headers |
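The numeric ranges in the table can be checked before a request is sent. The helper below is an illustrative sketch (not part of CAMEL) that validates a plain config dict such as the one produced by `ChatGPTConfig(...).as_dict()`:

```python
def validate_sampling_config(config: dict) -> dict:
    """Validate the numeric ranges documented above; returns the config unchanged."""
    checks = {
        "temperature": (0.0, 2.0),
        "top_p": (0.0, 1.0),
        "presence_penalty": (-2.0, 2.0),
        "frequency_penalty": (-2.0, 2.0),
    }
    for field, (lo, hi) in checks.items():
        if field in config and not lo <= config[field] <= hi:
            raise ValueError(f"{field}={config[field]} outside [{lo}, {hi}]")
    return config
```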

### ModelFactory.create() Parameters

| Parameter | Type | Description |
|-----------|------|-------------|
| `model_platform` | ModelPlatformType/str | Provider (e.g., OPENAI, ANTHROPIC) |
| `model_type` | ModelType/str | Model name (e.g., GPT_4O_MINI) |
| `model_config_dict` | dict | Configuration parameters |
| `api_key` | str | API key (optional if set in env) |
| `url` | str | Custom API endpoint |
| `timeout` | float | Request timeout in seconds (default: 180) |
| `max_retries` | int | Retry attempts (default: 3) |

## Common Errors and Solutions

### Missing API Key

**Error:**
```
ValueError: Missing or empty required API keys in environment variables: OPENAI_API_KEY.
You can obtain the API key from https://platform.openai.com/docs/overview
```

**Solution:** Set your API key using one of the methods in Step 2.
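Before debugging further, it can help to confirm what your process actually sees. This small check (an illustrative helper, not a CAMEL API) verifies that the variable is set and non-empty:

```python
import os

def has_api_key(name: str = "OPENAI_API_KEY") -> bool:
    """Return True if the environment variable is set and non-empty."""
    return bool(os.environ.get(name, "").strip())
```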

### Invalid API Key

**Error:**
```
openai.AuthenticationError: Incorrect API key provided
```

**Solution:** Verify your API key is correct and hasn't expired. Generate a new key if needed.

### Invalid Model Name

**Error:**
```
openai.NotFoundError: The model 'invalid-model' does not exist
```

**Solution:** Use a valid model name from the ModelType enum or check the provider's documentation for available models.

### Unknown Model Platform

**Error:**
```
ValueError: Unknown model platform: invalid-platform
```

**Solution:** Use a valid ModelPlatformType enum value (e.g., `ModelPlatformType.OPENAI`).

### Rate Limit Exceeded

**Error:**
```
openai.RateLimitError: Rate limit reached for requests
```

**Solution:** Reduce request frequency or upgrade your API plan.
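A common mitigation is to retry with exponential backoff. The sketch below is illustrative, not a CAMEL API; `RuntimeError` stands in for `openai.RateLimitError`, and in practice `ModelFactory.create` also accepts a `max_retries` parameter:

```python
import random
import time

def call_with_backoff(fn, max_retries=3, base_delay=1.0):
    """Call fn(), retrying on RuntimeError with exponential backoff plus jitter."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except RuntimeError:
            if attempt == max_retries:
                raise  # give up after the final retry
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```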

### Unsupported Parameters for Reasoning Models

**Warning:** Some parameters are not supported for o1/o3 reasoning models:

- `temperature`, `top_p`, `presence_penalty`, `frequency_penalty`
- `logprobs`, `top_logprobs`, `logit_bias`

These will be automatically filtered out when using reasoning models.
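That filtering can be approximated as a dict comprehension over the parameter names above (an illustrative sketch, not CAMEL's actual implementation):

```python
# Parameters that o1/o3 reasoning models reject
UNSUPPORTED_FOR_REASONING = {
    "temperature", "top_p", "presence_penalty", "frequency_penalty",
    "logprobs", "top_logprobs", "logit_bias",
}

def filter_for_reasoning_model(config: dict) -> dict:
    """Drop parameters that reasoning models do not accept."""
    return {k: v for k, v in config.items() if k not in UNSUPPORTED_FOR_REASONING}
```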

## Supported BYOK Providers

### Direct Integrations

| Provider | Environment Variable | API Documentation |
|----------|---------------------|-------------------|
| **OpenAI** | `OPENAI_API_KEY` | [OpenAI API Docs](https://platform.openai.com/docs/overview) |
| **Anthropic** | `ANTHROPIC_API_KEY` | [Anthropic API Docs](https://docs.anthropic.com/en/api/getting-started) |
| **Google Gemini** | `GEMINI_API_KEY` | [Gemini API Docs](https://ai.google.dev/gemini-api/docs) |
| **Mistral AI** | `MISTRAL_API_KEY` | [Mistral API Docs](https://docs.mistral.ai/) |
| **DeepSeek** | `DEEPSEEK_API_KEY` | [DeepSeek API Docs](https://platform.deepseek.com/api-docs) |
| **Cohere** | `COHERE_API_KEY` | [Cohere API Docs](https://docs.cohere.com/) |
| **Qwen** | `QWEN_API_KEY` | [Qwen API Docs](https://help.aliyun.com/zh/model-studio/developer-reference/api-reference) |
| **Moonshot (Kimi)** | `MOONSHOT_API_KEY` | [Moonshot API Docs](https://platform.moonshot.cn/docs/) |
| **ZhipuAI (GLM)** | `ZHIPUAI_API_KEY` | [ZhipuAI API Docs](https://open.bigmodel.cn/dev/api) |
| **Yi (Lingyiwanwu)** | `YI_API_KEY` | [Yi API Docs](https://platform.lingyiwanwu.com/docs) |
| **Reka** | `REKA_API_KEY` | [Reka API Docs](https://docs.reka.ai/quick-start) |
| **InternLM** | `INTERNLM_API_KEY` | [InternLM API Docs](https://internlm.intern-ai.org.cn/api/document) |
| **xAI (Grok)** | `XAI_API_KEY` | [xAI API Docs](https://docs.x.ai/api) |
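Because each provider reads its own environment variable, a script can detect which providers are already configured before choosing a platform. The helper below is a hypothetical convenience; the mapping follows the table above (shown here for a few providers):

```python
import os

# Provider name -> environment variable, per the table above (abbreviated)
PROVIDER_ENV_VARS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "gemini": "GEMINI_API_KEY",
    "mistral": "MISTRAL_API_KEY",
    "deepseek": "DEEPSEEK_API_KEY",
}

def configured_providers(mapping=PROVIDER_ENV_VARS):
    """Return the providers whose API key variable is set and non-empty."""
    return [name for name, var in mapping.items() if os.environ.get(var, "").strip()]
```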

### API & Connector Platforms

| Provider | Environment Variable | API Documentation |
|----------|---------------------|-------------------|
| **Azure OpenAI** | `AZURE_OPENAI_API_KEY` | [Azure OpenAI Docs](https://learn.microsoft.com/en-us/azure/ai-services/openai/) |
| **AWS Bedrock** | AWS credentials | [Bedrock Docs](https://docs.aws.amazon.com/bedrock/) |
| **Groq** | `GROQ_API_KEY` | [Groq API Docs](https://console.groq.com/docs/quickstart) |
| **Together AI** | `TOGETHER_API_KEY` | [Together Docs](https://docs.together.ai/docs/quickstart) |
| **SambaNova** | `SAMBA_API_KEY` | [SambaNova Docs](https://docs.sambanova.ai/) |
| **NVIDIA NIM** | `NVIDIA_API_KEY` | [NVIDIA NIM Docs](https://docs.api.nvidia.com/nim/reference/llm-apis) |
| **OpenRouter** | `OPENROUTER_API_KEY` | [OpenRouter Docs](https://openrouter.ai/docs) |
| **CometAPI** | `COMETAPI_KEY` | [CometAPI Docs](https://api.cometapi.com/docs) |
| **Nebius** | `NEBIUS_API_KEY` | [Nebius Docs](https://nebius.com/docs/) |
| **AIML API** | `AIML_API_KEY` | [AIML Docs](https://docs.aimlapi.com/) |
| **SiliconFlow** | `SILICONFLOW_API_KEY` | [SiliconFlow Docs](https://docs.siliconflow.cn/) |
| **Novita** | `NOVITA_API_KEY` | [Novita Docs](https://novita.ai/docs) |
| **ModelScope** | `MODELSCOPE_API_KEY` | [ModelScope Docs](https://www.modelscope.cn/docs/model-service/API-Inference/intro) |
| **IBM WatsonX** | `WATSONX_API_KEY` | [WatsonX Docs](https://cloud.ibm.com/apidocs/watsonx-ai) |
| **Qianfan (ERNIE)** | `QIANFAN_ACCESS_KEY` | [Qianfan Docs](https://cloud.baidu.com/doc/WENXINWORKSHOP/) |
| **Volcano Engine** | `VOLCANO_API_KEY` | [Volcano Docs](https://www.volcengine.com/docs/82379) |
| **Crynux** | `CRYNUX_API_KEY` | [Crynux Docs](https://docs.crynux.ai/) |
| **AihubMix** | `AIHUBMIX_API_KEY` | [AihubMix Docs](https://doc.aihubmix.com/) |
| **MiniMax** | `MINIMAX_API_KEY` | [MiniMax Docs](https://platform.minimaxi.com/document/Guides) |
| **Cerebras** | `CEREBRAS_API_KEY` | [Cerebras Docs](https://inference-docs.cerebras.ai/) |
| **AMD** | `AMD_API_KEY` | [AMD LLM API](https://llm-api.amd.com/) |
| **PPIO** | `PPIO_API_KEY` | [PPIO Docs](https://ppioai.com/docs) |
| **NetMind** | `NETMIND_API_KEY` | [NetMind Docs](https://www.netmind.ai/docs) |

### Local/Self-Hosted Platforms

| Platform | Documentation |
|----------|---------------|
| **Ollama** | [Ollama Docs](https://ollama.com/library) |
| **vLLM** | [vLLM Docs](https://docs.vllm.ai/en/latest/) |
| **SGLang** | [SGLang Docs](https://docs.sglang.ai/) |
| **LMStudio** | [LMStudio Docs](https://lmstudio.ai/docs/) |
| **LiteLLM** | [LiteLLM Docs](https://docs.litellm.ai/docs/) |
| **Function Gemma** | Local Ollama model for function calling |
| **OpenAI Compatible** | Any OpenAI-compatible API endpoint |

## Quick Reference: Model Platform Types

```python
from camel.types import ModelPlatformType

# Direct providers
ModelPlatformType.OPENAI
ModelPlatformType.ANTHROPIC
ModelPlatformType.GEMINI
ModelPlatformType.MISTRAL
ModelPlatformType.DEEPSEEK
ModelPlatformType.COHERE
ModelPlatformType.QWEN
ModelPlatformType.MOONSHOT
ModelPlatformType.ZHIPU
ModelPlatformType.YI
ModelPlatformType.REKA
ModelPlatformType.INTERNLM

# API platforms
ModelPlatformType.AZURE
ModelPlatformType.AWS_BEDROCK
ModelPlatformType.GROQ
ModelPlatformType.TOGETHER
ModelPlatformType.SAMBA
ModelPlatformType.NVIDIA
ModelPlatformType.OPENROUTER
ModelPlatformType.COMETAPI
ModelPlatformType.NEBIUS
ModelPlatformType.AIML
ModelPlatformType.SILICONFLOW
ModelPlatformType.NOVITA
ModelPlatformType.MODELSCOPE
ModelPlatformType.WATSONX
ModelPlatformType.QIANFAN
ModelPlatformType.VOLCANO
ModelPlatformType.CRYNUX
ModelPlatformType.AIHUBMIX
ModelPlatformType.PPIO
ModelPlatformType.NETMIND
ModelPlatformType.CEREBRAS
ModelPlatformType.MINIMAX

# Local/Self-hosted
ModelPlatformType.OLLAMA
ModelPlatformType.VLLM
ModelPlatformType.SGLANG
ModelPlatformType.LMSTUDIO
ModelPlatformType.LITELLM
ModelPlatformType.OPENAI_COMPATIBLE_MODEL
```