
feat: add Alibaba Coding Plan (DashScope) provider support #509

Merged
kevincodex1 merged 7 commits into Gitlawb:main from regisksc:feat/dashscope-provider on Apr 17, 2026

Conversation

Contributor

@regisksc regisksc commented Apr 8, 2026

Summary

Adds Alibaba Coding Plan (DashScope) as a new provider option, enabling OpenClaude users to connect to Qwen, Kimi, GLM, and MiniMax models via DashScope's OpenAI-compatible API.

What Changed

Files modified (3 total):

  • src/utils/providerProfiles.ts — Added dashscope-cn and dashscope-intl presets
  • src/utils/model/openaiContextWindows.ts — Added context window mappings for 10+ DashScope models
  • src/components/ProviderManager.tsx — Added DashScope options to provider selection UI

Provider priority:
Same as existing OpenAI-compatible providers (Moonshot AI, DeepSeek, Groq, etc.)
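
For illustration, one of the two presets might look roughly like this. This is a minimal sketch assuming field names modeled on the existing OpenAI-compatible presets; the exact providerProfiles.ts shape may differ:

// Hypothetical preset shape, not the repo's actual type
const dashscopeIntl = {
  id: 'dashscope-intl',
  name: 'Alibaba Coding Plan',
  baseUrl: 'https://coding-intl.dashscope.aliyuncs.com/v1',
  defaultModel: 'qwen3.5-plus',
};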

Why This Matters

DashScope provides access to:

  • Qwen3.5-Plus — 1M context window, strong coding/reasoning
  • Qwen3-Coder series — Specialized coding models
  • Kimi K2.5 — 256k context, strong long-document handling
  • GLM-5/4.7 — Zhipu's flagship models
  • MiniMax-M2.5 — 192k context

All at competitive pricing with China and International endpoint options.

Configuration

# China endpoint
export DASHSCOPE_API_KEY="your-key"
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL="https://coding.dashscope.aliyuncs.com/v1"
export OPENAI_MODEL="qwen3.5-plus"

# Or International endpoint
export OPENAI_BASE_URL="https://coding-intl.dashscope.aliyuncs.com/v1"

Or via /provider UI → select "Alibaba Coding Plan" or "Alibaba Coding Plan (China)"

Testing

bun test              # 544 pass, 0 fail
bun run build         # ✓

Manual verification:

  • Provider preset appears in /provider selection
  • qwen3.5-plus responds correctly
  • /context shows 1M token context window

Reference

Implementation follows the existing OpenAI-compatible provider pattern (Moonshot AI, DeepSeek) and references the ../opencode alibaba-coding-plan provider configuration.

Checklist

  • Provider preset visible in UI
  • Context windows correctly resolved per model
  • All existing tests pass
  • No breaking changes
  • Follows existing code patterns

@kevincodex1 kevincodex1 requested a review from auriti April 8, 2026 11:22
'MiniMax-M2.5': 196_608,
'minimax-m2.5': 196_608,
'glm-5': 202_752,
'glm-4.7': 202_752,
Contributor

They don't have MiniMax-M2.7?

Contributor Author

[screenshot omitted]

I've only added the models they reference.

Contributor Author

Unfortunately not.

Comment thread on src/utils/model/openaiContextWindows.ts (outdated)
'qwen3-max-2026-01-23': 32_768,
'kimi-k2.5': 32_768,
'MiniMax-M2.5': 24_576,
'minimax-m2.5': 24_576,
Contributor

Same here: missing MiniMax-M2.7?

@kevincodex1
Contributor

Hello @regisksc, thank you so much for this. Just left a couple of questions.

@Vasanthdev2004
Collaborator

Straightforward addition, follows the existing pattern well. A couple of things:

Case sensitivity in the context window lookups

MiniMax-M2.5 and minimax-m2.5 are both listed — are these actually two different model IDs that DashScope accepts? If it's just case-insensitive matching, having both is fine as a safety net, but if lookupByModel already normalizes case, the duplicate is unnecessary. Worth checking how lookupByModel handles this since it would affect other providers too if they have similar casing variations.

1M context for qwen3.5-plus and qwen3-coder-plus

Just want to confirm these are correct from the DashScope API docs. 1M is a big number and if it's wrong, users will get confusing token limit errors. The other entries look reasonable.

No env var integration for API key

The existing presets like moonshotai don't reference env vars in their defaults either, so this is consistent, but it does mean users have to paste the key every time. Since DashScope uses DASHSCOPE_API_KEY as its standard env var, it might be nice to pick it up automatically like how the OpenAI preset reads OPENAI_API_KEY. Not a blocker though, just a quality-of-life thing.
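
A minimal sketch of that pickup, assuming the preset carries an apiKey field (hypothetical shape, not the actual providerProfiles.ts type):

const dashscopeCn = {
  id: 'dashscope-cn',
  baseUrl: 'https://coding.dashscope.aliyuncs.com/v1',
  // Read the standard env var automatically, as the OpenAI preset does with OPENAI_API_KEY;
  // fall back to manual entry in the /provider UI when it is unset.
  apiKey: process.env.DASHSCOPE_API_KEY ?? '',
};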

Provider ordering in the UI

DashScope entries are placed right before "Custom" at the bottom of the preset list. Makes sense alphabetically, but the list is getting long. Might be worth grouping by category at some point (local, cloud, etc.) but that's a separate concern.

LGTM otherwise.

Collaborator

@gnanam1990 gnanam1990 left a comment

Thanks for the PR — the DashScope preset direction makes sense, and the provider wiring itself looks straightforward. I do have one blocking concern before merge:

  1. Several of the new context window / max output values in src/utils/model/openaiContextWindows.ts do not match the current models.dev metadata, and MiniMax-M2.7 is also missing entirely even though the PR summary says DashScope gives access to MiniMax models.

The ones I checked and found mismatches on were:

  • kimi-k2.5
  • MiniMax-M2.5
  • glm-5
  • glm-4.7

Since these values feed runtime warning/compaction behavior, I’d want the metadata corrected before merge. It would also be good to add focused tests for the new DashScope model entries so this stays accurate over time.

Happy to re-review once the model metadata is corrected.

Collaborator

@auriti auriti left a comment

Thanks — the preset wiring looks fine, but I’m still seeing blocking issues in the model metadata table.

In src/utils/model/openaiContextWindows.ts, these values feed runtime compaction and max-token clamping via src/utils/context.ts, so they need to be exact. I rechecked the current metadata and found at least these mismatches:

  • qwen3-coder-plus: context should be 1_048_576, not 1_000_000
  • qwen3-max: max output should be 65_536, not 32_768
  • glm-4.7: limits appear to be 200_000 context / 131_072 output, not 202_752 / 16_384

Also, this PR should add focused regression coverage in src/utils/context.test.ts for the new DashScope entries, since these tables directly control warning/compaction behavior and max-output clamping.

The provider/UI additions look straightforward otherwise, and the MiniMax casing duplicates are fine because lookup is case-sensitive here.

@auriti
Collaborator

auriti commented Apr 8, 2026

To unblock this one cleanly, I’d suggest keeping the fix very tight:

  • update src/utils/model/openaiContextWindows.ts so the DashScope entries match current metadata exactly (at minimum qwen3-coder-plus, qwen3-max, and glm-4.7; plus recheck kimi-k2.5, MiniMax-M2.5, and glm-5)
  • decide explicitly whether DashScope exposes MiniMax-M2.7: if yes, add it; if not, trim the PR summary so it doesn’t overclaim MiniMax coverage
  • add focused regression tests in src/utils/context.test.ts for the new DashScope models, since these values feed warning/compaction and max-output clamping
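
As a rough sketch of the requested coverage (bun:test; the import path and lookupByModel signature are assumptions, not the repo's actual exports):

import { describe, expect, test } from 'bun:test';
import { lookupByModel } from './model/openaiContextWindows'; // hypothetical path/export

describe('DashScope context windows', () => {
  test('qwen3.5-plus resolves to its documented 1M context window', () => {
    expect(lookupByModel('qwen3.5-plus')).toBe(1_000_000);
  });

  test('dated qwen3-max variant resolves through the lookup path', () => {
    expect(lookupByModel('qwen3-max-2026-01-23')).toBeDefined();
  });
});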

A good reference for the expected test scope here is #494, which handled the same kind of model-metadata/runtime-behavior coupling.

Once those are in, the provider/UI side looks straightforward to re-review.

@regisksc
Contributor Author

regisksc commented Apr 8, 2026

@Vasanthdev2004

  • Regarding case:

Looking at lookupByModel() in openaiContextWindows.ts:172-182, the function does an exact match first, then falls back to prefix matching. It does not normalize case. So having both MiniMax-M2.5 and minimax-m2.5 is actually necessary if DashScope returns different casing at different times. The same pattern exists for MiniMax-M2.7/minimax-m2.7 (lines 48-49, 131-132). See the sketch after this list.

  • Concerning context windows: the context window is indeed big for these two models, reaching 1M. It's one of the biggest advantages of the provider, besides 90k requests per month on a $50 plan.

  • I've added DASHSCOPE_API_KEY support in a recent commit

  • About provider ordering I don't think it would fit for this PR although concern is valid.
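
For reference, a minimal sketch of the lookup behavior described above (assumed implementation; the real function lives at openaiContextWindows.ts:172-182 and may differ in shape):

const OPENAI_CONTEXT_WINDOWS: Record<string, number> = {
  'MiniMax-M2.5': 196_608,
  'minimax-m2.5': 196_608,
  'qwen3-max': 262_144,
};

function lookupByModel(model: string): number | undefined {
  // Exact, case-sensitive match first; this is why both casings must be listed.
  if (model in OPENAI_CONTEXT_WINDOWS) return OPENAI_CONTEXT_WINDOWS[model];
  // Then prefix fallback, so dated variants resolve to their base entry.
  const key = Object.keys(OPENAI_CONTEXT_WINDOWS).find((k) => model.startsWith(k));
  return key === undefined ? undefined : OPENAI_CONTEXT_WINDOWS[key];
}

lookupByModel('qwen3-max-2026-01-23'); // 262_144 via prefix fallback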

@Vasanthdev2004
Collaborator

@regisksc good one. Fix the remaining items and request a review.

@regisksc
Contributor Author

regisksc commented Apr 8, 2026

Thanks for the thorough review @gnanam1990 and @auriti. I want to make sure we have the correct values before fixing.

I verified against the official DashScope Coding Plan documentation here:
https://www.alibabacloud.com/help/en/model-studio/openclaw-coding-plan

The docs show these values for the bailian provider:

Model                  Context Window   Max Output
qwen3.5-plus           1,000,000        65,536
qwen3-coder-plus       1,000,000        65,536
qwen3-coder-next       262,144          65,536
qwen3-max-2026-01-23   262,144          65,536
MiniMax-M2.5           196,608          32,768
glm-5                  202,752          16,384
glm-4.7                202,752          16,384
kimi-k2.5              262,144          32,768

Note: MiniMax-M2.7 is not listed in the official DashScope model list — only M2.5 is available.

@gnanam1990 you mentioned mismatches on kimi-k2.5, MiniMax-M2.5, glm-5, and glm-4.7 — could you share which source you're referencing? The official docs above show these values already match what we have (except MiniMax-M2.5 max output which is 24,576 in our code vs 32,768 in docs).

@auriti you mentioned:

  • qwen3-coder-plus context should be 1,048,576 — the docs show 1,000,000, is there a newer source?
  • qwen3-max max output 65,536 — confirmed, our code has 32,768, will fix
  • glm-4.7 as 200,000/131,072 — the docs show 202,752/16,384

Also, should these values come from models.dev or directly from the DashScope API/docs? Want to make sure we're using the authoritative source.

Happy to fix once we confirm the correct values!

@regisksc regisksc requested review from auriti and gnanam1990 April 8, 2026 18:16
@regisksc
Contributor Author

Hey guys, a friendly reminder to re-review so we can integrate this.

@gnanam1990 @auriti @Vasanthdev2004 @kevincodex1

Collaborator

@Vasanthdev2004 Vasanthdev2004 left a comment

Thanks for the PR. I rechecked the current head 055b53c0baf48d7791cfb55fcabe38f4cfe1efb6 against the actual GitHub PR surface and the existing review thread.

The provider preset wiring itself still looks straightforward, and I like that you followed up by wiring DASHSCOPE_API_KEY into the preset defaults.

I still can’t call this merge-ready yet, though, because one blocker remains on the current head:

  1. The model-metadata changes still need focused verification coverage before merge.
    This PR adds new DashScope entries to src/utils/model/openaiContextWindows.ts, and those values feed runtime behavior like context warnings, compaction, and max-output clamping. Right now the branch still updates that table without adding focused regression coverage in src/utils/context.test.ts (or equivalent targeted tests around the lookup/clamping path).

    Given the review thread already surfaced active disagreement about some of the exact values, I don’t think we should merge the metadata table on trust alone. Even if most of the entries are correct, this is the kind of surface where a wrong number quietly turns into confusing runtime behavior for users.

Non-blocking notes:

  • The provider/UI additions in src/utils/providerProfiles.ts and src/components/ProviderManager.tsx look consistent with the existing OpenAI-compatible provider pattern.
  • Reading DASHSCOPE_API_KEY automatically is a nice quality-of-life improvement.
  • I’m not using provider ordering as a blocker here.

Verdict: Needs changes

If you add the focused regression coverage for the DashScope metadata path and tighten any values that still need confirmation, I’m happy to re-review.

@regisksc
Contributor Author

@Vasanthdev2004 I just pushed a commit with more tests

Collaborator

@Vasanthdev2004 Vasanthdev2004 left a comment

Thanks for the follow-up here. I rechecked the current head de51375204488e72701fae39da4ebbba9b201363 against the actual GitHub PR surface, the latest review thread, and the current check state.

This is a targeted re-review of the earlier blocker around the DashScope metadata path.

The earlier blocker looks addressed now:

  • the PR now adds focused regression coverage in src/utils/context.test.ts
  • the tests exercise the new DashScope entries through the actual runtime lookup/clamping path, not just the raw table
  • the current smoke-and-tests check is green
  • the provider preset/UI wiring still looks straightforward and consistent with the existing OpenAI-compatible provider pattern

I do not see a remaining blocker on the current head.

Non-blocking note:

  • Since these model limits come from provider metadata that can drift over time, it would still be nice to keep the PR description or a code comment explicit about the source used for the DashScope values, especially for the entries that were debated in-thread.

Verdict: Approve-ready

I do not see a current blocker from my side.

Vasanthdev2004 previously approved these changes Apr 10, 2026
@regisksc
Contributor Author

I just need two more approvals @kevincodex1 @gnanam1990 @auriti

gnanam1990 previously approved these changes Apr 10, 2026
Collaborator

@gnanam1990 gnanam1990 left a comment

Thanks for the follow-up here. I rechecked the current head and the earlier blocker looks addressed now.

What I verified:

  • the PR now adds focused regression coverage in src/utils/context.test.ts
  • the tests exercise the new DashScope model entries through the actual runtime lookup/clamping path, not just the raw metadata table
  • the provider preset and ProviderManager wiring still look straightforward and consistent with the existing OpenAI-compatible provider pattern

Local verification on my side:

  • bun test ./src/utils/context.test.ts
  • bun run build

Both passed.

Maintainer summary: the earlier blocker around untested model metadata is addressed, and I do not see a remaining blocker on the current head.

@regisksc
Contributor Author

@auriti addressed all comments. Ready for re-review.

Collaborator

@Vasanthdev2004 Vasanthdev2004 left a comment

Review: PR #509 — Add Alibaba Coding Plan (DashScope) provider support

Reviewed on head a10d82c. CI green ✅. 4 files, +196/-0.

This is a well-structured addition following the existing OpenAI-compatible provider pattern. The provider preset wiring, context window entries, max output entries, and focused test coverage are all in place. The contributor was responsive to earlier review feedback and iterated well.

However, there's a critical merge-safety issue that needs to be resolved before this ships.


🔧 Blocker: Duplicate MiniMax-M2.5 / minimax-m2.5 keys will overwrite existing values

This PR adds MiniMax-M2.5 and minimax-m2.5 to both OPENAI_CONTEXT_WINDOWS and OPENAI_MAX_OUTPUT_TOKENS with DashScope-specific values:

  • Context: 196,608 (DashScope limit for MiniMax-M2.5 via their platform)
  • Max output: 24,576 (DashScope limit)

But main already has these same keys with direct MiniMax API values (added in PR #636):

  • Context: 204,800
  • Max output: 131,072

Since OPENAI_CONTEXT_WINDOWS is a plain Record<string, number>, JavaScript's last-key-wins semantics mean whichever entry appears later in the object literal takes precedence. On merge, the DashScope entries (positioned after the existing MiniMax section) will silently overwrite the direct MiniMax values for all users — including those using MiniMax directly via api.minimax.io, not via DashScope.
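
A minimal illustration of the hazard (assumed structure; shown with spreads since TypeScript rejects duplicate keys within a single literal, but merged or generated tables collide silently):

const directMiniMaxEntries = { 'MiniMax-M2.5': 204_800 }; // direct MiniMax API values (PR #636)
const dashscopeEntries = { 'MiniMax-M2.5': 196_608 };     // DashScope platform limit

const OPENAI_CONTEXT_WINDOWS: Record<string, number> = {
  ...directMiniMaxEntries,
  ...dashscopeEntries, // later entry wins: 196_608 silently replaces 204_800
};

console.log(OPENAI_CONTEXT_WINDOWS['MiniMax-M2.5']); // 196608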

This is a real regression:

  • Direct MiniMax API users lose 8,192 tokens of context (204K → 192K)
  • Direct MiniMax API users lose 106,496 tokens of max output (131K → 24K) — a 5x reduction

Fix options:

  1. Don't add DashScope-specific values for models already in the table. Remove the MiniMax-M2.5 / minimax-m2.5 entries from the DashScope section. The existing 204K/131K values are the generous end and will work on DashScope too (DashScope will cap at its own limit server-side if exceeded). This is the simplest approach.
  2. Prefix the model names with a DashScope qualifier (e.g., dashscope/MiniMax-M2.5) — but this only works if DashScope actually accepts that model ID format, which it doesn't.
  3. Make the lookup provider-aware so different providers can return different limits for the same model name — a larger refactor that shouldn't block this PR.

Option 1 is the right call here. The existing entries are generous enough to work across both providers, and DashScope will enforce its own server-side limits if a request exceeds them.


🟡 Non-blocking observations

1. glm-5 and glm-4.7 are also shared model names

These are currently unique to this PR (not on main yet), but if PR #552 (NVIDIA NIM) or any future GLM provider PR merges, the same duplicate-key conflict will arise. GLM-5 via direct Zhipu API has 200K context, while via DashScope it's 202,752 — different limits for the same model ID. Worth noting as a design concern for the future, but not blocking this PR since the entries don't conflict yet.

2. qwen3.6-plus not mentioned in PR description

The PR description and DashScope docs reference qwen3.5-plus, but the context table and tests include qwen3.6-plus. Is this a newer model variant? Just want to confirm this isn't a typo.

3. qwen3-max max output is 32,768 — verify against latest docs

The in-thread debate had auriti saying 65,536 and the contributor saying 32,768 based on DashScope docs. If DashScope docs say 32,768, that's authoritative for this provider. Just flagging that this was contested.

4. Test coverage is solid

13 focused tests covering context windows, max output tokens, prefix-match fallback for dated variants, lowercase resolution, and max-output clamping. Good coverage of the runtime behavior path.


Verdict: Needs changes 🔧

One blocker: remove the MiniMax-M2.5 and minimax-m2.5 entries from the DashScope section of both OPENAI_CONTEXT_WINDOWS and OPENAI_MAX_OUTPUT_TOKENS (and the corresponding test assertions for those keys). The existing direct-MiniMax entries (204,800 / 131,072) are already in the table and will work for DashScope users too — DashScope enforces its own server-side limits. Adding provider-specific lower values for the same model name silently regresses direct MiniMax API users.

@regisksc
Contributor Author

@auriti @Vasanthdev2004 solved the latest comments.

Regarding qwen3.6-plus: it was added after this PR's creation; it's a new model.

Ready for re-review.

@kevincodex1
Contributor

Hello @regisksc, thank you for your time on this. Just one more thing: please fix the conflict.

@regisksc regisksc force-pushed the feat/dashscope-provider branch from 1102269 to 54eb789 on April 17, 2026 at 07:07
@regisksc regisksc force-pushed the feat/dashscope-provider branch from 54eb789 to 0db7ab1 on April 17, 2026 at 07:11
@regisksc
Contributor Author

@kevincodex1 @auriti @gnanam1990 solved the conflict

Contributor

@kevincodex1 kevincodex1 left a comment

Looks great to me

Collaborator

@gnanam1990 gnanam1990 left a comment

Looks good to me. The current diff is narrow and focused, CI is green, and the provider/profile wiring plus context-window coverage are aligned with the intended DashScope support. I don’t see a trust-boundary blocker in the current changeset. Approving.

Collaborator

@Vasanthdev2004 Vasanthdev2004 left a comment

Re-review: PR #509 — Add Alibaba DashScope provider (head 0db7ab1)

CI green ✅. 4 files, +174/-0. Thanks for the follow-up, @regisksc.

✅ Previous blocker fixed

The duplicate MiniMax-M2.5 / minimax-m2.5 entries have been removed from the DashScope section. The existing direct-MiniMax entries (204,800 / 131,072) on main are no longer at risk of being silently overwritten. This was the critical merge-safety issue.


✅ What looks good

  • Provider presets are clean — dashscope-cn and dashscope-intl follow the existing pattern with correct base URLs, default model (qwen3.6-plus), and DASHSCOPE_API_KEY env var
  • Context window values are sourced from DashScope API docs and look correct: qwen3.6-plus/3.5-plus/coder-plus at 1M, coder-next/max at 262K, kimi-k2.5 at 262K, glm-5/glm-4.7 at 202,752
  • Max output tokens are conservative and match DashScope limits: Qwen variants at 64K/32K, GLM at 16K
  • Dated variant qwen3-max-2026-01-23 has its own entry, avoiding prefix-match ambiguity
  • Test coverage is solid — 11 tests covering context windows, max output, dated variant resolution, lowercase, and clamping
  • No duplicate bare model keys conflict with existing main entries (only moonshotai/kimi-k2.5 exists, the bare kimi-k2.5 is new)

🟡 Non-blocking

1. Unrelated max-output entries mixed in

The diff adds max output entries for local/Ollama models that already had context window entries but no max output entries: gemma2:27b, codellama:13b, llama3.2:1b, qwen3:8b, codestral. These are correct values but unrelated to DashScope — they should ideally be a separate PR. Not blocking since they're harmless and fill a real gap.

2. DASHSCOPE_API_KEY not in providerProfile cleanup/restore lists

Unlike NVIDIA_API_KEY and MINIMAX_API_KEY (which are in PROVIDER_ENV_KEYS and clearProviderProfileEnvFromProcessEnv), DASHSCOPE_API_KEY isn't registered for cleanup when switching providers. The key will work via the preset's apiKey field and the OpenAI-compatible flow, but it won't be cleared on provider switch. Same class of issue as initially reported on PRs #552 and #623. Should be added in a follow-up.
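
A sketch of that follow-up, using the names quoted above (the exact shapes in providerProfiles.ts are assumptions):

const PROVIDER_ENV_KEYS = ['NVIDIA_API_KEY', 'MINIMAX_API_KEY', 'DASHSCOPE_API_KEY'] as const;

function clearProviderProfileEnvFromProcessEnv(): void {
  // Clear stale provider keys on switch so a previous provider's key cannot leak through.
  for (const key of PROVIDER_ENV_KEYS) {
    delete process.env[key];
  }
}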


Verdict: Approve-ready

The critical blocker is fixed. The provider preset, context windows, max output tokens, and tests are all solid. Kevin and gnanam have already approved. Ready to merge.

@kevincodex1 kevincodex1 merged commit 43ac6db into Gitlawb:main Apr 17, 2026
1 check passed
C1ph3r404 pushed a commit to C1ph3r404/openclaude that referenced this pull request Apr 29, 2026
* feat: add Alibaba Coding Plan provider presets

* fix: add DashScope presets to ProviderManager UI selection list

* feat: read DASHSCOPE_API_KEY env var for DashScope provider presets

* adds regression testing for alibaba models

* docs: add time descriptive comment

* feat(dashscope): add qwen3.6-plus model support

* fix(dashscope): remove MiniMax-M2.5 entries to prevent future key conflicts