
feat: add Novita AI as declarative provider#8432

Open
Alex-wuhu wants to merge 3 commits into aaif-goose:main from Alex-wuhu:feat/add-novita-provider

Conversation

Alex-wuhu commented Apr 9, 2026

Summary

  • Add Novita AI as a new declarative LLM provider (novita.json)
  • Novita offers an OpenAI-compatible API with 90+ open-source models and competitive pricing
  • Add Novita AI section to providers documentation

Models included

| Model | Context | Max Output |
| --- | --- | --- |
| moonshotai/kimi-k2.5 | 262K | 262K |
| deepseek/deepseek-v3.2 | 164K | 65K |
| deepseek/deepseek-r1 | 164K | 32K |
| zai-org/glm-5 | 203K | 131K |
| minimax/minimax-m2.5 | 205K | 131K |
| qwen/qwen3-coder-480b-a35b-instruct | 262K | 65K |
| meta-llama/llama-4-maverick-17b-128e-instruct | 131K | 131K |

Configuration

NOVITA_API_KEY=your_key_here

Endpoint: https://api.novita.ai/openai

Test plan

  • Verify goose configure lists Novita AI as a provider option
  • Verify API key configuration via NOVITA_API_KEY
  • Test chat completion with moonshotai/kimi-k2.5 model
  • Test streaming responses
  • Test tool calling functionality

Add Novita AI (https://novita.ai) as a new declarative LLM provider.
Novita offers an OpenAI-compatible API with 90+ open-source models
including Kimi K2.5, DeepSeek V3.2/R1, GLM-5, MiniMax M2.5, and
Qwen3 Coder.

- Add novita.json declarative provider config
- Add Novita AI section to providers documentation

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: Alex-wuhu <yanglongwei06@gmail.com>

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: ed69885a5c


"display_name": "Novita AI",
"description": "90+ open-source models with OpenAI-compatible API and competitive pricing",
"api_key_env": "NOVITA_API_KEY",
"base_url": "https://api.novita.ai/openai",

P1: Use the Novita chat-completions endpoint path

With the current OpenAI-compatible loader, this base_url is split into host https://api.novita.ai and request path openai, so chat calls are sent to POST /openai (not a .../chat/completions route). In OpenAiProvider::from_custom_config/stream, non-empty custom paths are treated as final request paths, so Novita requests will fail unless that root path is itself a completions endpoint. Set this to the actual chat-completions URL (or provide base_path) so requests hit the correct route.
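The failure mode can be pictured with a minimal sketch (not the actual goose loader; `split_base_url` is a hypothetical stand-in for the path handling the comment describes in `OpenAiProvider::from_custom_config`):

```rust
// Sketch: how an OpenAI-compatible loader might split a base_url into a
// host and a request path. If a non-empty custom path is treated as the
// final request path, no "/chat/completions" suffix is ever appended.
fn split_base_url(base_url: &str) -> (String, String) {
    let after_scheme = base_url.splitn(2, "://").nth(1).unwrap_or(base_url);
    match after_scheme.split_once('/') {
        Some((host, path)) => (format!("https://{}", host), format!("/{}", path)),
        None => (format!("https://{}", after_scheme), String::new()),
    }
}

fn main() {
    // The PR's original config: requests would go to POST /openai.
    let (host, path) = split_base_url("https://api.novita.ai/openai");
    assert_eq!(host, "https://api.novita.ai");
    assert_eq!(path, "/openai"); // not a chat-completions route

    // Pointing base_url at the full route fixes the request path.
    let (_, path) = split_base_url("https://api.novita.ai/openai/chat/completions");
    assert_eq!(path, "/openai/chat/completions");
}
```

Under this assumption, changing the config's `base_url` to the full chat-completions URL is enough to route requests correctly.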


"max_tokens": 65536
},
{
"name": "meta-llama/llama-4-maverick-17b-128e-instruct",

P2: Use a valid Novita model identifier

This model name does not match the bundled Novita canonical catalog, which only includes meta-llama/llama-4-maverick-17b-128e-instruct-fp8 for novita-ai. Because goose forwards the selected model string directly to the provider, choosing this entry can produce invalid-model errors and skip canonical limit mapping for this model. Replace it with the provider’s exact model ID.
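Why the wrong ID fails silently can be sketched with an illustrative catalog lookup (the catalog contents and lookup shape here are assumptions for illustration, not goose's actual data structures):

```rust
use std::collections::HashMap;

// Illustrative canonical catalog: model ID -> context limit in tokens.
// Per the review comment, only the -fp8 variant exists for novita-ai.
fn canonical_limits() -> HashMap<&'static str, u32> {
    HashMap::from([
        ("moonshotai/kimi-k2.5", 262_144),
        ("meta-llama/llama-4-maverick-17b-128e-instruct-fp8", 131_072),
    ])
}

fn main() {
    let catalog = canonical_limits();
    // The ID from the PR's config misses the catalog entirely, so limit
    // mapping is skipped and the provider may reject the model string.
    assert!(catalog
        .get("meta-llama/llama-4-maverick-17b-128e-instruct")
        .is_none());
    // The provider's exact ID resolves its context limit as expected.
    assert_eq!(
        catalog.get("meta-llama/llama-4-maverick-17b-128e-instruct-fp8"),
        Some(&131_072)
    );
}
```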


… deepseek-v3.2, gemma-4

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: Alex-wuhu <yanglongwei06@gmail.com>

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 9d8039eb67


alexhancock (Collaborator) commented

Hi @Alex-wuhu,

I would take a look at the Codex review comments and address them, as they seem like they may be correct.

You can also test this with a local copy of goose and verify it works.

`just run-ui` will run the desktop from source.

Alex-wuhu (Author) commented Apr 9, 2026

> Hi @Alex-wuhu
>
> I would take a look and address the codex review comments, as they seem like they may be correct
>
> You can also test this with a local copy of goose and verify it works
>
> `just run-ui` will run the desktop from source

Thanks! Checking.

- Fix base_url to use full chat completions path (https://api.novita.ai/openai/chat/completions)
  so requests hit the correct endpoint instead of bare /openai
- Add "novita" => "novita-ai" mapping in map_provider_name() to enable
  canonical model lookup for pricing and limits
Alex-wuhu (Author) commented

Hi @alexhancock, thanks for the review! I've addressed all the Codex feedback:

[P1] base_url fix: Changed from https://api.novita.ai/openai to https://api.novita.ai/openai/chat/completions so requests hit the correct chat completions endpoint. Verified with both non-streaming and streaming requests.

[P2] Provider name mapping: Added "novita" => "novita-ai" in map_provider_name() so canonical model lookups (pricing, limits, etc.) work correctly.
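The mapping change amounts to one extra match arm; a minimal sketch (the real `map_provider_name` lives in the goose codebase and covers many more providers):

```rust
// Sketch: normalize a configured provider name to its canonical catalog
// key so pricing and limit lookups resolve. Unknown names pass through.
fn map_provider_name(name: &str) -> &str {
    match name {
        "novita" => "novita-ai", // arm added in this PR
        other => other,
    }
}

fn main() {
    assert_eq!(map_provider_name("novita"), "novita-ai");
    // Names without a special mapping are returned unchanged.
    assert_eq!(map_provider_name("openai"), "openai");
}
```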

[P2] Model name: The meta-llama/llama-4-maverick-17b-128e-instruct entry was already removed in my previous commit, so this is no longer an issue.

Testing done:

  • cargo build and cargo test pass (1093/1094, the one failure is an unrelated local snapshot test)
  • Started goosed agent server and verified:
    • Novita listed in /config/providers with is_configured: true
    • /config/check_provider returns 200
    • /config/providers/novita/models returns 50+ models
    • /config/canonical-model-info correctly returns pricing for moonshotai/kimi-k2.5
  • Tested the exact endpoint (https://api.novita.ai/openai/chat/completions) with:
    • Chat completions (non-streaming) ✅
    • Streaming (SSE) ✅
    • Tool calling ✅
