47 changes: 47 additions & 0 deletions crates/goose/src/providers/declarative/novita.json
@@ -0,0 +1,47 @@
{
"name": "novita",
"engine": "openai",
"display_name": "Novita AI",
"description": "90+ open-source models with OpenAI-compatible API and competitive pricing",
"api_key_env": "NOVITA_API_KEY",
"base_url": "https://api.novita.ai/openai",
P1: Use Novita chat-completions endpoint path

With the current OpenAI-compatible loader, this `base_url` is split into host `https://api.novita.ai` and request path `openai`, so chat calls are sent to `POST /openai` rather than a `.../chat/completions` route. In `OpenAiProvider::from_custom_config`/`stream`, non-empty custom paths are treated as final request paths, so Novita requests will fail unless that root path is itself a completions endpoint. Set this to the actual chat-completions URL (or provide a `base_path`) so requests hit the correct route.
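A minimal sketch of the suggested fix, keeping the file's existing fields. The exact `/chat/completions` path below is an assumption about Novita's OpenAI-compatible route and should be verified against Novita's API docs before merging:

```json
{
  "name": "novita",
  "engine": "openai",
  "api_key_env": "NOVITA_API_KEY",
  "base_url": "https://api.novita.ai/openai/chat/completions"
}
```

Since the loader treats a non-empty custom path as the final request path, spelling out the full completions URL here avoids the `POST /openai` mis-route described above.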


"catalog_provider_id": "novita-ai",
"models": [
{
"name": "moonshotai/kimi-k2.5",
"context_limit": 262144,
"max_tokens": 262144
},
{
"name": "deepseek/deepseek-v3.2",
"context_limit": 163840,
"max_tokens": 65536
},
{
"name": "deepseek/deepseek-r1",
"context_limit": 163840,
"max_tokens": 32768
},
{
"name": "zai-org/glm-5",
"context_limit": 202800,
"max_tokens": 131072
},
{
"name": "minimax/minimax-m2.5",
"context_limit": 204800,
"max_tokens": 131100
},
{
"name": "qwen/qwen3-coder-480b-a35b-instruct",
"context_limit": 262144,
"max_tokens": 65536
},
{
"name": "meta-llama/llama-4-maverick-17b-128e-instruct",
P2: Use a valid Novita model identifier

This model name does not match the bundled Novita canonical catalog, which only includes `meta-llama/llama-4-maverick-17b-128e-instruct-fp8` for `novita-ai`. Because goose forwards the selected model string directly to the provider, choosing this entry can produce invalid-model errors and skip canonical limit mapping for this model. Replace it with the provider's exact model ID.
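A sketch of the corrected entry, using the `-fp8` identifier the canonical catalog reportedly contains. The context and token limits are copied unchanged from the entry under review and should be double-checked against Novita's model page:

```json
{
  "name": "meta-llama/llama-4-maverick-17b-128e-instruct-fp8",
  "context_limit": 131072,
  "max_tokens": 131072
}
```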


"context_limit": 131072,
"max_tokens": 131072
}
],
"supports_streaming": true
}
41 changes: 41 additions & 0 deletions documentation/docs/getting-started/providers.md
@@ -37,6 +37,7 @@ goose is compatible with a wide range of LLM providers, allowing you to choose a
| [LiteLLM](https://docs.litellm.ai/docs/) | LiteLLM proxy supporting multiple models with automatic prompt caching and unified API access. | `LITELLM_HOST`, `LITELLM_BASE_PATH` (optional), `LITELLM_API_KEY` (optional), `LITELLM_CUSTOM_HEADERS` (optional), `LITELLM_TIMEOUT` (optional) |
| [LM Studio](https://lmstudio.ai/) | Run local models with LM Studio's OpenAI-compatible server. **Because this provider runs locally, you must first [download a model](#local-llms).** | None required. Connects to local server at `localhost:1234` by default. |
| [Mistral AI](https://mistral.ai/) | Provides access to Mistral models including general-purpose models, specialized coding models (Codestral), and multimodal models (Pixtral). | `MISTRAL_API_KEY` |
| [Novita AI](https://novita.ai/) | 90+ open-source models with OpenAI-compatible API and competitive pricing. Supports Kimi K2.5, DeepSeek, GLM, MiniMax, Qwen, and more. | `NOVITA_API_KEY` |
| [Ollama](https://ollama.com/) | Local model runner supporting Qwen, Llama, DeepSeek, and other open-source models. **Because this provider runs locally, you must first [download and run a model](#local-llms).** | `OLLAMA_HOST` |
| [OpenAI](https://platform.openai.com/api-keys) | Provides gpt-4o, o1, and other advanced language models. Also supports OpenAI-compatible endpoints (e.g., self-hosted LLaMA, vLLM, KServe). **o1-mini and o1-preview are not supported because goose uses tool calling.** | `OPENAI_API_KEY`, `OPENAI_HOST` (optional), `OPENAI_ORGANIZATION` (optional), `OPENAI_PROJECT` (optional), `OPENAI_CUSTOM_HEADERS` (optional) |
| [OpenRouter](https://openrouter.ai/) | API gateway for unified access to various models with features like rate-limiting management. | `OPENROUTER_API_KEY` |
@@ -693,6 +694,46 @@ To set up Groq with goose, follow these steps:
</TabItem>
</Tabs>

### Novita AI
[Novita AI](https://novita.ai/) provides access to 90+ open-source models via an OpenAI-compatible API with competitive pricing. To use Novita AI with goose, you need an API key from [Novita AI](https://novita.ai/settings#key-management).

Novita AI offers many models that support tool calling, including:
- **moonshotai/kimi-k2.5** - Moonshot's latest model with 262K context window
- **deepseek/deepseek-v3.2** - DeepSeek V3.2 with 164K context
- **deepseek/deepseek-r1** - DeepSeek reasoning model
- **zai-org/glm-5** - Zhipu's GLM-5 with 203K context
- **minimax/minimax-m2.5** - MiniMax M2.5 with 205K context
- **qwen/qwen3-coder-480b-a35b-instruct** - Qwen3 Coder with 262K context

For the complete list of supported Novita AI models, see [novita.json](https://github.com/aaif-goose/goose/blob/main/crates/goose/src/providers/declarative/novita.json).

To set up Novita AI with goose, follow these steps:

<Tabs groupId="interface">
<TabItem value="ui" label="goose Desktop" default>
**To update your LLM provider and API key:**

1. Click the <PanelLeft className="inline" size={16} /> button in the top-left to open the sidebar.
2. Click the `Settings` button on the sidebar.
3. Click the `Models` tab.
4. Click `Configure Providers`.
5. Choose `Novita AI` as the provider from the list.
6. Click `Configure`, enter your API key, and click `Submit`.
7. Select the Novita AI model of your choice.

</TabItem>
<TabItem value="cli" label="goose CLI">
1. Run:
```sh
goose configure
```
2. Select `Configure Providers` from the menu.
3. Follow the prompts to choose `Novita AI` as the provider.
4. Enter your API key when prompted.
5. Select the Novita AI model of your choice.
</TabItem>
</Tabs>
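For scripted or CI setups, the interactive steps above can usually be replaced by exporting the provider settings directly. This is a sketch assuming goose honors its generic `GOOSE_PROVIDER`/`GOOSE_MODEL` environment variables for this provider and that `novita` is the provider id from `novita.json`:

```shell
# Hypothetical non-interactive setup; replace the placeholder key with a
# real key from Novita's key-management page.
export NOVITA_API_KEY="your-novita-api-key"
export GOOSE_PROVIDER="novita"
export GOOSE_MODEL="deepseek/deepseek-v3.2"
# then start a session, e.g.: goose session
```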

### Google Gemini
Google Gemini provides a free tier. To start using the Gemini API with goose, you need an API Key from [Google AI studio](https://aistudio.google.com/app/apikey).
