documentation/docs/getting-started/providers.md (2 additions & 1 deletion)

@@ -34,9 +34,10 @@ goose is compatible with a wide range of LLM providers, allowing you to choose a
|[LiteLLM](https://docs.litellm.ai/docs/)| LiteLLM proxy supporting multiple models with automatic prompt caching and unified API access. |`LITELLM_HOST`, `LITELLM_BASE_PATH` (optional), `LITELLM_API_KEY` (optional), `LITELLM_CUSTOM_HEADERS` (optional), `LITELLM_TIMEOUT` (optional) |
|[Mistral AI](https://mistral.ai/)| Provides access to Mistral models including general-purpose models, specialized coding models (Codestral), and multimodal models (Pixtral). |`MISTRAL_API_KEY`|
|[Ollama](https://ollama.com/)| Local model runner supporting Qwen, Llama, DeepSeek, and other open-source models. **Because this provider runs locally, you must first [download and run a model](#local-llms).**|`OLLAMA_HOST`|
|[OpenAI](https://platform.openai.com/api-keys)| Provides gpt-4o, o1, and other advanced language models. Also supports OpenAI-compatible endpoints (e.g., self-hosted LLaMA, vLLM, KServe). **o1-mini and o1-preview are not supported because goose uses tool calling.**|`OPENAI_API_KEY`, `OPENAI_HOST` (optional), `OPENAI_ORGANIZATION` (optional), `OPENAI_PROJECT` (optional), `OPENAI_CUSTOM_HEADERS` (optional) |
|[OpenRouter](https://openrouter.ai/)| API gateway for unified access to various models with features like rate-limiting management. |`OPENROUTER_API_KEY`|
|[OVHcloud AI](https://www.ovhcloud.com/en/public-cloud/ai-endpoints/)| Provides access to open-source models including Qwen, Llama, Mistral, and DeepSeek through the AI Endpoints service. |`OVHCLOUD_API_KEY`|
|[Ramalama](https://ramalama.ai/)| Local model runner using native [OCI](https://opencontainers.org/) container runtimes and [CNCF](https://www.cncf.io/) tools, with support for models as OCI artifacts. The Ramalama API is a compatible alternative to Ollama and can be used with the goose Ollama provider. Supports Qwen, Llama, DeepSeek, and other open-source models. **Because this provider runs locally, you must first [download and run a model](#local-llms).**|`OLLAMA_HOST`|
|[Snowflake](https://docs.snowflake.com/user-guide/snowflake-cortex/aisql#choosing-a-model)| Access the latest models using Snowflake Cortex services, including Claude models. **Requires a Snowflake account and programmatic access token (PAT)**. |`SNOWFLAKE_HOST`, `SNOWFLAKE_TOKEN`|
|[Tetrate Agent Router Service](https://router.tetrate.ai)| Unified API gateway for AI models including Claude, Gemini, GPT, open-weight models, and others. Supports the PKCE authentication flow for secure API key generation. |`TETRATE_API_KEY`, `TETRATE_HOST` (optional) |
|[Venice AI](https://venice.ai/home)| Provides access to open-source models like Llama, Mistral, and Qwen while prioritizing user privacy. **Requires an account and an [API key](https://docs.venice.ai/overview/guides/generating-api-key)**. |`VENICE_API_KEY`, `VENICE_HOST` (optional), `VENICE_BASE_PATH` (optional), `VENICE_MODELS_PATH` (optional) |
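Each provider in the table above is configured through the environment variables listed in its last column. As a minimal sketch, the following shows how the Ollama-compatible providers (Ollama or Ramalama) might be configured; the host URL and port are illustrative assumptions (11434 is Ollama's conventional default), not values mandated by goose.

```shell
# Point goose at a locally running Ollama (or Ramalama) server.
# The URL below is an assumed local default; adjust to your setup.
export OLLAMA_HOST=http://localhost:11434

# For a hosted provider, set its API key instead, e.g. for OpenAI:
#   export OPENAI_API_KEY=...       # required
#   export OPENAI_HOST=...          # optional: OpenAI-compatible endpoint

# Confirm the variable is visible to child processes such as the goose CLI.
echo "$OLLAMA_HOST"
```

Variables marked "(optional)" in the table can be left unset; the provider falls back to its defaults.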