Conversation

@0xApotheosis (Member) commented Dec 9, 2025

Summary by CodeRabbit

  • New Features

    • Added configurable AI provider selection via new environment variables (including VENICE_API_KEY and AI_PROVIDER).
    • Added support for the Venice AI provider and dynamic model selection for chat.
  • Chores

    • Updated runtime dependencies to enable multi-provider AI support.
    • Enhanced environment configuration example with the new provider settings.


vercel bot commented Dec 9, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

  • Project: shapeshift-agentic
  • Deployment: Ready
  • Preview: Preview
  • Comments: Comment
  • Updated (UTC): Dec 9, 2025 6:34am

coderabbitai bot commented Dec 9, 2025

📝 Walkthrough

Adds a Venice AI provider and runtime provider selection: new env vars and dependencies, a Venice provider config in models.ts, and a getModel() function that the chat route uses to choose between Anthropic and Venice at runtime.

Changes

  • Configuration (.env.example): added VENICE_API_KEY and AI_PROVIDER (defaults to anthropic; can be venice).
  • Dependencies (apps/agentic-server/package.json): added @ai-sdk/openai-compatible and @ai-sdk/provider to support the Venice/OpenAI-compatible provider.
  • Provider abstraction & model selection (apps/agentic-server/src/models.ts): added a Venice provider via createOpenAICompatible, introduced a models map, and exported an AIProvider type and getModel(), which reads the AI_PROVIDER env var and returns a LanguageModelV2 instance (falling back to Anthropic). A sketch of this module follows the list.
  • Chat route integration (apps/agentic-server/src/routes/chat.ts): replaced the direct anthropic('claude-haiku-4-5') usage with an imported getModel() call to enable dynamic provider selection (see the route sketch after the sequence diagram).
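For orientation, here is a minimal sketch of what models.ts plausibly looks like, reconstructed from the descriptions in this review; the map layout, field names, and log format are assumptions, not the repository's actual code:

import { createAnthropic } from '@ai-sdk/anthropic'
import { createOpenAICompatible } from '@ai-sdk/openai-compatible'
import type { LanguageModelV2 } from '@ai-sdk/provider'

// Anthropic provider, keyed by ANTHROPIC_API_KEY.
const anthropic = createAnthropic({ apiKey: process.env.ANTHROPIC_API_KEY })

// Venice exposes an OpenAI-compatible API, so the generic
// openai-compatible provider is pointed at the Venice endpoint.
const venice = createOpenAICompatible({
  name: 'venice',
  baseURL: 'https://api.venice.ai/api/v1',
  apiKey: process.env.VENICE_API_KEY,
})

// Central provider/model map; model IDs are the ones named in this review.
const models = {
  anthropic: { provider: anthropic, model: 'claude-haiku-4-5' },
  venice: { provider: venice, model: 'zai-org-glm-4.6' },
}

export type AIProvider = keyof typeof models

export function getModel(): LanguageModelV2 {
  const provider = (process.env.AI_PROVIDER || 'anthropic') as AIProvider
  // Unknown AI_PROVIDER values fall back to Anthropic at runtime.
  const config = models[provider] || models.anthropic
  console.log(`[AI] Using provider: ${provider}, model: ${config.model}`)
  return config.provider(config.model)
}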

Sequence Diagram

sequenceDiagram
    actor Client
    participant ChatRoute as Chat Route
    participant getModel as getModel()
    participant Env as Environment
    participant Provider as Provider<br/>(Anthropic or Venice)

    Client->>ChatRoute: POST /chat
    ChatRoute->>getModel: request model instance
    getModel->>Env: read AI_PROVIDER
    alt AI_PROVIDER == "venice"
        getModel->>Provider: configure Venice (openai-compatible)
    else
        getModel->>Provider: configure Anthropic
    end
    getModel-->>ChatRoute: return LanguageModelV2
    ChatRoute->>Provider: stream completion/request
    Provider-->>ChatRoute: stream tokens
    ChatRoute-->>Client: streamed response
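A minimal sketch of the corresponding chat-route change, assuming the route streams with streamText from the ai package; the handler shape and names here are illustrative, not the repository's code:

import { streamText, type ModelMessage } from 'ai'
import { getModel } from '../models'

// Sketch of the POST /chat handler body; framework plumbing omitted.
export function handleChat(messages: ModelMessage[]): Response {
  // getModel() resolves the provider from AI_PROVIDER per request,
  // so switching between Anthropic and Venice needs no changes here.
  const result = streamText({
    model: getModel(),
    messages,
  })
  return result.toUIMessageStreamResponse()
}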

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

  • Review models.ts for correct baseURL, Authorization header handling, and initialization of the openai-compatible provider.
  • Verify AI_PROVIDER parsing, fallback behavior, and logging in getModel().
  • Confirm new dependencies resolve and imports (createOpenAICompatible, LanguageModelV2) match package APIs.
  • Check chat.ts streaming integration still behaves identically with the returned LanguageModelV2 instance.

Poem

🐰 I hopped to add a Venice shore today,
Environment signs tell me which path to play,
A tiny getModel() opens the gate,
Anthropic or Venice—either's great! ✨

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 50.00%, which is below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (2 passed)
  • Description Check ✅ Passed: Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: The PR title 'feat: add venice support' accurately describes the main change: adding Venice AI provider integration to the agentic server.
✨ Finishing touches
  • 📝 Generate docstrings

🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch support-venice

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
.env.example (1)

2-3: Clarify provider config and align with .env linting (optional)

The new VENICE_API_KEY and AI_PROVIDER entries make the provider setup clear. To satisfy dotenv-linter and avoid any parsing surprises, consider:

  • Quoting the AI_PROVIDER value so the inline comment is unambiguous, e.g.:
    AI_PROVIDER="anthropic"  # 'anthropic' or 'venice'
  • (Optional) Moving AI_PROVIDER above ANTHROPIC_API_KEY if you want to follow the linter’s key-ordering rule.

Functionally this is fine as-is; these tweaks are mainly for tooling cleanliness.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between e575a03 and 7db24f0.

⛔ Files ignored due to path filters (1)
  • bun.lock is excluded by !**/*.lock
📒 Files selected for processing (4)
  • .env.example (1 hunks)
  • apps/agentic-server/package.json (1 hunks)
  • apps/agentic-server/src/models.ts (1 hunks)
  • apps/agentic-server/src/routes/chat.ts (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
apps/agentic-server/src/routes/chat.ts (1)
apps/agentic-server/src/models.ts (1)
  • getModel (37-44)
🪛 dotenv-linter (4.0.0)
.env.example

[warning] 3-3: [UnorderedKey] The AI_PROVIDER key should go before the ANTHROPIC_API_KEY key

(UnorderedKey)


[warning] 3-3: [ValueWithoutQuotes] This value needs to be surrounded in quotes

(ValueWithoutQuotes)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: main
🔇 Additional comments (2)
apps/agentic-server/src/routes/chat.ts (1)

17-17: Type compatibility confirmed—getModel() correctly implements provider switching

The implementation properly centralizes model selection. getModel() explicitly returns LanguageModelV2 (line 37 of models.ts), which is the correct type for streamText in [email protected]. Both Anthropic (via createAnthropic) and Venice (via createOpenAICompatible) providers correctly instantiate models that satisfy the LanguageModelV2 interface.

The env-based provider switching via AI_PROVIDER works as intended, defaulting to anthropic. No per-provider configuration differences require adjustment—both providers are configured with their respective API keys and endpoints.

apps/agentic-server/src/models.ts (1)

2-44: Venice integration and getModel() design are correctly configured

The setup is clean and well-structured: one central models map, AIProvider derived from it, and getModel() handling env selection with a safe fallback to Anthropic plus useful logging.

All Venice configuration details are accurate:

  • Base URL https://api.venice.ai/api/v1 is correct for the OpenAI-compatible endpoint.
  • Authorization via Bearer <VENICE_API_KEY> in the header is the expected auth mechanism.
  • Model ID zai-org-glm-4.6 is valid, available, and supports ~128k token context.

The fallback logic const config = models[provider] || models.anthropic provides good runtime safety if AI_PROVIDER is mis-set. API key handling (lazy loading on first use) matches your existing Anthropic approach and is acceptable for this use case.

If you want slightly clearer observability, consider logging when an unknown AI_PROVIDER value falls back to Anthropic, but this is optional:

export function getModel(): LanguageModelV2 {
  const raw = process.env.AI_PROVIDER || 'anthropic'
  const provider = (raw in models ? raw : 'anthropic') as AIProvider
  const config = models[provider]

  if (raw !== provider) {
    console.warn(`[AI] Unknown AI_PROVIDER="${raw}", falling back to "${provider}".`)
  }

  console.log(`[AI] Using provider: ${provider}, model: ${config.model}`)
  return config.provider(config.model)
}

coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (1)
apps/agentic-server/package.json (1)

24-25: Dependency compatibility issue already flagged.

This concern was already raised in a previous review: @ai-sdk/openai-compatible@^1.0.28 is incompatible with ai@^5.0.76 and will cause runtime errors. Please refer to the existing review comment for resolution options.

🧹 Nitpick comments (1)
.env.example (1)

2-3: LGTM! Clear documentation for Venice integration.

The new environment variables are well-documented with sensible defaults. The inline comment clearly indicates the valid options for AI_PROVIDER.

Optional: The static analysis tool suggests alphabetical ordering and adding quotes around the value. While not required, you could apply this diff for consistency:

 ANTHROPIC_API_KEY=sk-ant-
+AI_PROVIDER="anthropic"  # 'anthropic' or 'venice'
 VENICE_API_KEY=
-AI_PROVIDER=anthropic  # 'anthropic' or 'venice'
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 7db24f0 and a8910d4.

⛔ Files ignored due to path filters (1)
  • bun.lock is excluded by !**/*.lock
📒 Files selected for processing (4)
  • .env.example (1 hunks)
  • apps/agentic-server/package.json (1 hunks)
  • apps/agentic-server/src/models.ts (1 hunks)
  • apps/agentic-server/src/routes/chat.ts (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • apps/agentic-server/src/models.ts
🧰 Additional context used
🪛 dotenv-linter (4.0.0)
.env.example

[warning] 3-3: [UnorderedKey] The AI_PROVIDER key should go before the ANTHROPIC_API_KEY key

(UnorderedKey)


[warning] 3-3: [ValueWithoutQuotes] This value needs to be surrounded in quotes

(ValueWithoutQuotes)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: main
🔇 Additional comments (1)
apps/agentic-server/src/routes/chat.ts (1)

17-17: Error handling in getModel() is adequate.

The getModel() function correctly handles the key error scenarios:

  1. Missing/invalid AI_PROVIDER: Defaults to 'anthropic' and falls back to the anthropic configuration for unknown providers (lines 38-39).
  2. Return type: Properly annotated as LanguageModelV2 (line 37) and correctly invoked as config.provider(config.model) (line 43), ensuring compatibility with streamText.
  3. Missing API keys: While keys (ANTHROPIC_API_KEY, VENICE_API_KEY) are passed to providers at creation time, validation is appropriately deferred to runtime when API calls occur. This is standard practice with provider SDKs.

The refactoring is sound and ready to merge.

@premiumjibles premiumjibles merged commit 130c054 into main Dec 9, 2025
4 checks passed
@premiumjibles premiumjibles deleted the support-venice branch December 9, 2025 06:53