Commit bcb8c81

Authored by hannesrudolph, roomote-v0[bot], roomote, daniel-lxs, and cte
Reapply Batch 2: 9 minor-conflict non-AI-SDK cherry-picks (#11474)
* fix: correct Bedrock model ID for Claude Opus 4.6 (#11232)

  Remove the :0 suffix from the Claude Opus 4.6 model ID to match the correct AWS Bedrock model identifier. The model ID was "anthropic.claude-opus-4-6-v1:0" but should be "anthropic.claude-opus-4-6-v1" per the AWS Bedrock documentation.

  Fixes #11231

  Co-authored-by: Roo Code <roomote@roocode.com>

* fix: guard against empty-string baseURL in provider constructors (#11233)

  When the "custom base URL" checkbox is unchecked in the UI, the setting is set to '' (empty string). Providers that passed this directly to their SDK constructors caused "Failed to parse URL" errors, because the SDK treated '' as a valid but broken base URL override.

  - gemini.ts: use || undefined (was passing the raw option)
  - openai-native.ts: use || undefined (was passing the raw option)
  - openai.ts: change ?? to || for the fallback default
  - deepseek.ts: change ?? to || for the fallback default
  - moonshot.ts: change ?? to || for the fallback default

  Adds test coverage for the Gemini and OpenAI Native constructors, verifying that an empty-string baseURL is coerced to undefined.

* fix: make defaultTemperature required in getModelParams to prevent silent temperature overrides (#11218)

  * fix: DeepSeek temperature defaulting to 0 instead of 0.3

    Pass defaultTemperature: DEEP_SEEK_DEFAULT_TEMPERATURE to getModelParams() in DeepSeekHandler.getModel() to ensure the correct default temperature (0.3) is used when no user configuration is provided.

    Closes #11194

  * refactor: make defaultTemperature required in getModelParams

    Make the defaultTemperature parameter required in getModelParams() instead of defaulting to 0. This prevents providers with their own non-zero default temperature (like DeepSeek's 0.3) from being silently overridden by the implicit 0 default.
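The `??` vs `||` distinction is the crux of the baseURL fix; a minimal TypeScript sketch of the pitfall (the default URL and function names here are hypothetical, not the provider code):

```typescript
// '' is falsy but not nullish: `??` passes it through, `||` falls back.
const DEFAULT_BASE_URL = "https://api.example.com/v1" // hypothetical default

// Buggy pattern: an unchecked "custom base URL" box yields '', which survives
// `??` and later fails URL parsing inside the SDK.
function resolveWithNullish(baseUrl?: string): string {
	return baseUrl ?? DEFAULT_BASE_URL
}

// Fixed pattern: `||` coerces '' (and undefined) to the fallback. For SDKs that
// take an optional override, `baseUrl || undefined` coerces '' to undefined instead.
function resolveWithFalsy(baseUrl?: string): string {
	return baseUrl || DEFAULT_BASE_URL
}

console.log(resolveWithNullish("")) // "" — broken override reaches the SDK
console.log(resolveWithFalsy("")) // falls back to DEFAULT_BASE_URL
```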
    Every provider now explicitly declares its temperature default, making the temperature resolution chain clear: user setting → model default → provider default.

  ---------

  Co-authored-by: Roo Code <roomote@roocode.com>
  Co-authored-by: daniel-lxs <ricciodaniel98@gmail.com>

* feat: batch consecutive tool calls in chat UI with shared utility (#11245)

  * feat: group consecutive list_files tool calls into single UI block

    Consolidate consecutive listFilesTopLevel/listFilesRecursive ask messages into a single "Roo wants to view multiple directories" block, matching the existing read_file batching pattern.

  * chore: add missing translation keys for all locales

  * refactor: consolidate duplicate listFiles batch-handling blocks in ChatRow

    Merge the separate listFilesTopLevel and listFilesRecursive case blocks into a single combined case with shared batch-detection logic, selecting the icon and translation key based on the tool type. This removes the duplicated isBatchDirRequest check and BatchListFilesPermission render.

  * feat: batch consecutive file-edit tool calls into single UI block

    Add edit-file batching in ChatView groupedMessages that consolidates consecutive editedExistingFile, appliedDiff, newFileCreated, insertContent, and searchAndReplace asks into a single BatchDiffApproval block. Move batchDiffs detection in ChatRow above the switch statement so it applies to any file-edit tool type.
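The temperature resolution chain can be sketched as follows (a simplification: the real getModelParams takes more options; DEEP_SEEK_DEFAULT_TEMPERATURE's value is quoted from the commit message):

```typescript
const DEEP_SEEK_DEFAULT_TEMPERATURE = 0.3

// Simplified model of the chain: user setting → model default → provider default.
// Making defaultTemperature required forces every caller to declare the last link.
function resolveTemperature(opts: {
	userTemperature?: number
	modelDefaultTemperature?: number
	defaultTemperature: number // required: no silent implicit 0
}): number {
	return opts.userTemperature ?? opts.modelDefaultTemperature ?? opts.defaultTemperature
}

// DeepSeek with no user configuration now resolves to 0.3, not 0.
console.log(resolveTemperature({ defaultTemperature: DEEP_SEEK_DEFAULT_TEMPERATURE }))
// Note `??`, not `||`: an explicit user temperature of 0 is still respected.
console.log(resolveTemperature({ userTemperature: 0, defaultTemperature: DEEP_SEEK_DEFAULT_TEMPERATURE }))
```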
  * refactor: extract batchConsecutive utility, fix batch UI issues

    - Extract a generic batchConsecutive() utility from 3 identical while-loops
    - Fix React key collisions in BatchListFilesPermission, BatchFilePermission, BatchDiffApproval
    - Normalize the language prop to "shellsession" (was "shell-session" for top-level)
    - Remove the unused _batchedMessages property from synthetic messages
    - Remove the dead didViewMultipleDirectories i18n key from all 18 locale files
    - Add batch button text for listFilesTopLevel/listFilesRecursive
    - Add batchConsecutive utility tests (6 cases)

  * fix: audit improvements for batch tool-call UI

    - Make batchConsecutive() generic instead of ClineMessage-specific
    - Add batch-aware button text for edit-file batches ("Save All"/"Deny All")
    - Add dedicated list-batch/edit-batch i18n keys (stop reusing read-batch)
    - Add JSON.parse defense-in-depth in all three synthesizers
    - Fix the mixed list_files batch icon to default to FolderTree
    - Add 6 missing test cases (all-match, immutability, spy, single-dir)

  * chore: minor type cleanup (out-of-scope housekeeping)

    - Trim unused recursive/isOutsideWorkspace from the DirPermissionItem interface
    - Remove 4 pre-existing `as any` casts in ChatView.tsx:
      - window cast → precise inline type
      - checkpoint bracket access → removed unnecessary casts
      - condensing message → `as ClineMessage`
      - debounce cancel → `.clear()` (the correct API)
    - Update BatchListFilesPermission test data to match the trimmed interface

  * i18n: add list-batch and edit-batch translations for all locales

* feat: add IPC query handlers for commands, modes, and models (#11279)

  Add GetCommands, GetModes, and GetModels to the IPC protocol so external clients can fetch slash commands, available modes, and Roo provider models without going through the internal webview message channel.
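A generic batchConsecutive() like the one described might look like this (a sketch reconstructed from the description, not the repo's actual implementation):

```typescript
// Groups runs of consecutive items that satisfy `matches` into arrays,
// leaving non-matching items as singletons. Generic, so it is not tied
// to ClineMessage or any other UI type.
function batchConsecutive<T>(items: T[], matches: (item: T) => boolean): (T | T[])[] {
	const result: (T | T[])[] = []
	let i = 0
	while (i < items.length) {
		if (!matches(items[i])) {
			result.push(items[i])
			i++
			continue
		}
		// Collect the whole consecutive run of matching items.
		const batch: T[] = []
		while (i < items.length && matches(items[i])) {
			batch.push(items[i])
			i++
		}
		// A run of length 1 is still emitted as a batch, keeping the caller uniform.
		result.push(batch)
	}
	return result
}

console.log(batchConsecutive([1, 2, 2, 3, 2], (n) => n === 2))
// [1, [2, 2], 3, [2]]
```

Extracting this once replaces the three identical while-loops and keeps the input array untouched, which matches the immutability test case mentioned above.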
  Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

* feat: add lock toggle to pin API config across all modes in workspace (#11295)

  * feat: add lock toggle to pin API config across all modes in workspace

    Add a lock/unlock toggle inside the API config selector popover (next to the settings gear) that, when enabled, applies the selected API configuration to all modes in the current workspace.

    - Add lockApiConfigAcrossModes to the ExtensionState and WebviewMessage types
    - Store the setting in workspaceState (per-workspace, not global)
    - When locked, activateProviderProfile sets the config for all modes
    - Lock icon in the ApiConfigSelector popover bottom bar, next to the gear
    - Full i18n: English + 17 locale translations (all mention workspace scope)
    - 9 new tests: 2 ClineProvider, 2 handler, 5 UI (77 total pass)

  * refactor: replace write-fan-out with read-time override for lock API config

    The original lock implementation used setModeConfig() fan-out to write the locked config to ALL modes globally. Since the lock flag lives in workspace-scoped workspaceState but modeApiConfigs are in global secrets, this caused cross-workspace data destruction. Replaced with read-time guards:

    - handleModeSwitch: early return when the lock is on (skip the per-mode config load)
    - createTaskWithHistoryItem: skip mode-based config restoration under lock
    - activateProviderProfile: removed the fan-out block
    - lockApiConfigAcrossModes handler: simplified to flag + state post only
    - Fixed a pre-existing workspaceState mock gap in ClineProvider.spec.ts and ClineProvider.sticky-profile.spec.ts

* fix: validate Gemini thinkingLevel against model capabilities and handle empty streams (#11303)

  * fix: validate Gemini thinkingLevel against model capabilities and handle empty streams

    getGeminiReasoning() now validates the selected effort against the model's supportsReasoningEffort array before sending it as thinkingLevel. When a stale settings value (e.g. 'medium' from a different model) is not in the supported set, it falls back to the model's default reasoningEffort.

    GeminiHandler.createMessage() now tracks whether any text content was yielded during streaming and handles NoOutputGeneratedError gracefully instead of surfacing the cryptic "No output generated" error.

  * fix: guard thinkingLevel fallback against 'none' effort and add i18n TODO

    The array-validation fallback in getGeminiReasoning() now only triggers when the selected effort IS a valid Gemini thinking level but not in the model's supported set. Values like 'none' (an explicit no-reasoning signal) are no longer overridden by the model default. Also adds a TODO for moving the empty-stream message to i18n.

  * fix: track tool_call_start in hasContent to avoid a false empty-stream warning

    Tool-only responses (no text) are valid content. Without this, agentic tool-call responses would incorrectly trigger the empty-response warning message.

* chore(cli): prepare release v0.0.53 (#11425)

* feat: add GLM-5 model support to Z.ai provider (#11440)

* chore: regenerate pnpm-lock.yaml

* fix: resolve type errors and remove AI SDK test contamination

* docs: update progress.txt with rebuilt Batch 2 status

---------

Co-authored-by: roomote[bot] <219738659+roomote[bot]@users.noreply.github.com>
Co-authored-by: Roo Code <roomote@roocode.com>
Co-authored-by: daniel-lxs <ricciodaniel98@gmail.com>
Co-authored-by: Chris Estreich <cestreich@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
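The thinkingLevel fallback described above can be sketched like so (type and function names are illustrative, not the actual roo-code API):

```typescript
// Illustrative sketch of the validation fallback: a stale-but-valid effort level
// falls back to the model default; non-level signals like 'none' pass through.
type GeminiEffort = "low" | "medium" | "high"

interface ModelCaps {
	supportsReasoningEffort?: GeminiEffort[]
	reasoningEffort?: GeminiEffort
}

function resolveThinkingLevel(selected: string | undefined, model: ModelCaps): string | undefined {
	const validLevels: string[] = ["low", "medium", "high"]
	// 'none' (or any non-level value) is an explicit no-reasoning signal: leave it untouched.
	if (!selected || !validLevels.includes(selected)) return selected
	// A valid level that the current model does not support falls back to the model's default.
	if (model.supportsReasoningEffort && !model.supportsReasoningEffort.includes(selected as GeminiEffort)) {
		return model.reasoningEffort
	}
	return selected
}
```

The ordering matters: checking "is this a thinking level at all?" before the membership test is exactly what stops 'none' from being overridden.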
1 parent b2b7780 commit bcb8c81


68 files changed: +1996 −188 lines changed

apps/cli/CHANGELOG.md

Lines changed: 23 additions & 0 deletions

@@ -5,6 +5,29 @@ All notable changes to the `@roo-code/cli` package will be documented in this fi
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+## [0.0.53] - 2026-02-12
+
+### Changed
+
+- **Auto-Approve by Default**: The CLI now auto-approves all actions (tools, commands, browser, MCP) by default. Followup questions auto-select the first suggestion after a 60-second timeout.
+- **New `--require-approval` Flag**: Replaced the `-y`/`--yes`/`--dangerously-skip-permissions` flags with a new `-a, --require-approval` flag for users who want manual approval prompts before actions execute.
+
+### Fixed
+
+- Spamming the escape key to cancel a running task no longer crashes the CLI.
+
+## [0.0.52] - 2026-02-09
+
+### Added
+
+- **Linux Support**: Added support for `linux-arm64`.
+
+## [0.0.51] - 2026-02-06
+
+### Changed
+
+- **Default Model Update**: Changed the default model from Opus 4.5 to Opus 4.6 for improved performance and capabilities
+
 ## [0.0.50] - 2026-02-05
 
 ### Added

apps/cli/package.json

Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 {
 	"name": "@roo-code/cli",
-	"version": "0.0.50",
+	"version": "0.0.53",
 	"description": "Roo Code CLI - Run the Roo Code agent from the command line",
 	"private": true,
 	"type": "module",

packages/types/src/events.ts

Lines changed: 37 additions & 0 deletions

@@ -1,6 +1,7 @@
 import { z } from "zod"
 
 import { clineMessageSchema, queuedMessageSchema, tokenUsageSchema } from "./message.js"
+import { modelInfoSchema } from "./model.js"
 import { toolNamesSchema, toolUsageSchema } from "./tool.js"
 
 /**
@@ -45,6 +46,11 @@ export enum RooCodeEventName {
 	ModeChanged = "modeChanged",
 	ProviderProfileChanged = "providerProfileChanged",
 
+	// Query Responses
+	CommandsResponse = "commandsResponse",
+	ModesResponse = "modesResponse",
+	ModelsResponse = "modelsResponse",
+
 	// Evals
 	EvalPass = "evalPass",
 	EvalFail = "evalFail",
@@ -108,6 +114,20 @@ export const rooCodeEventsSchema = z.object({
 
 	[RooCodeEventName.ModeChanged]: z.tuple([z.string()]),
 	[RooCodeEventName.ProviderProfileChanged]: z.tuple([z.object({ name: z.string(), provider: z.string() })]),
+
+	[RooCodeEventName.CommandsResponse]: z.tuple([
+		z.array(
+			z.object({
+				name: z.string(),
+				source: z.enum(["global", "project", "built-in"]),
+				filePath: z.string().optional(),
+				description: z.string().optional(),
+				argumentHint: z.string().optional(),
+			}),
+		),
+	]),
+	[RooCodeEventName.ModesResponse]: z.tuple([z.array(z.object({ slug: z.string(), name: z.string() }))]),
+	[RooCodeEventName.ModelsResponse]: z.tuple([z.record(z.string(), modelInfoSchema)]),
 })
 
 export type RooCodeEvents = z.infer<typeof rooCodeEventsSchema>
@@ -237,6 +257,23 @@ export const taskEventSchema = z.discriminatedUnion("eventName", [
 		taskId: z.number().optional(),
 	}),
 
+	// Query Responses
+	z.object({
+		eventName: z.literal(RooCodeEventName.CommandsResponse),
+		payload: rooCodeEventsSchema.shape[RooCodeEventName.CommandsResponse],
+		taskId: z.number().optional(),
+	}),
+	z.object({
+		eventName: z.literal(RooCodeEventName.ModesResponse),
+		payload: rooCodeEventsSchema.shape[RooCodeEventName.ModesResponse],
+		taskId: z.number().optional(),
+	}),
+	z.object({
+		eventName: z.literal(RooCodeEventName.ModelsResponse),
+		payload: rooCodeEventsSchema.shape[RooCodeEventName.ModelsResponse],
+		taskId: z.number().optional(),
+	}),
+
 	// Evals
 	z.object({
 		eventName: z.literal(RooCodeEventName.EvalPass),

packages/types/src/ipc.ts

Lines changed: 12 additions & 0 deletions

@@ -46,6 +46,9 @@ export enum TaskCommandName {
 	CloseTask = "CloseTask",
 	ResumeTask = "ResumeTask",
 	SendMessage = "SendMessage",
+	GetCommands = "GetCommands",
+	GetModes = "GetModes",
+	GetModels = "GetModels",
 }
 
 /**
@@ -79,6 +82,15 @@ export const taskCommandSchema = z.discriminatedUnion("commandName", [
 			images: z.array(z.string()).optional(),
 		}),
 	}),
+	z.object({
+		commandName: z.literal(TaskCommandName.GetCommands),
+	}),
+	z.object({
+		commandName: z.literal(TaskCommandName.GetModes),
+	}),
+	z.object({
+		commandName: z.literal(TaskCommandName.GetModels),
+	}),
 ])
 
 export type TaskCommand = z.infer<typeof taskCommandSchema>
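The new query commands are fire-a-command, receive-an-event round trips. A standalone sketch of how an external IPC client and host might pair them up (the types mirror the schemas above; the handler and its sample data are hypothetical):

```typescript
// Simplified mirrors of the TaskCommand and query-response event shapes.
type TaskCommand =
	| { commandName: "GetCommands" }
	| { commandName: "GetModes" }
	| { commandName: "GetModels" }

type QueryResponse =
	| { eventName: "commandsResponse"; payload: { name: string; source: string }[] }
	| { eventName: "modesResponse"; payload: { slug: string; name: string }[] }
	| { eventName: "modelsResponse"; payload: Record<string, object> }

// Toy host-side handler: the real extension host would answer from its
// command registry, mode list, and provider model cache.
function handle(cmd: TaskCommand): QueryResponse {
	switch (cmd.commandName) {
		case "GetCommands":
			return { eventName: "commandsResponse", payload: [{ name: "init", source: "built-in" }] }
		case "GetModes":
			return { eventName: "modesResponse", payload: [{ slug: "code", name: "Code" }] }
		case "GetModels":
			return { eventName: "modelsResponse", payload: {} }
	}
}

console.log(handle({ commandName: "GetModes" }).eventName) // "modesResponse"
```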

packages/types/src/providers/bedrock.ts

Lines changed: 3 additions & 3 deletions

@@ -119,7 +119,7 @@ export const bedrockModels = {
 		maxCachePoints: 4,
 		cachableFields: ["system", "messages", "tools"],
 	},
-	"anthropic.claude-opus-4-6-v1:0": {
+	"anthropic.claude-opus-4-6-v1": {
 		maxTokens: 8192,
 		contextWindow: 200_000, // Default 200K, extendable to 1M with beta flag 'context-1m-2025-08-07'
 		supportsImages: true,
@@ -499,7 +499,7 @@ export const BEDROCK_REGIONS = [
 export const BEDROCK_1M_CONTEXT_MODEL_IDS = [
 	"anthropic.claude-sonnet-4-20250514-v1:0",
 	"anthropic.claude-sonnet-4-5-20250929-v1:0",
-	"anthropic.claude-opus-4-6-v1:0",
+	"anthropic.claude-opus-4-6-v1",
 ] as const
 
 // Amazon Bedrock models that support Global Inference profiles
@@ -514,7 +514,7 @@ export const BEDROCK_GLOBAL_INFERENCE_MODEL_IDS = [
 	"anthropic.claude-sonnet-4-5-20250929-v1:0",
 	"anthropic.claude-haiku-4-5-20251001-v1:0",
 	"anthropic.claude-opus-4-5-20251101-v1:0",
-	"anthropic.claude-opus-4-6-v1:0",
+	"anthropic.claude-opus-4-6-v1",
 ] as const
 
 // Amazon Bedrock Service Tier types

packages/types/src/providers/zai.ts

Lines changed: 30 additions & 0 deletions

@@ -120,6 +120,21 @@ export const internationalZAiModels = {
 		description:
 			"GLM-4.7 is Zhipu's latest model with built-in thinking capabilities enabled by default. It provides enhanced reasoning for complex tasks while maintaining fast response times.",
 	},
+	"glm-5": {
+		maxTokens: 16_384,
+		contextWindow: 202_752,
+		supportsImages: false,
+		supportsPromptCache: true,
+		supportsReasoningEffort: ["disable", "medium"],
+		reasoningEffort: "medium",
+		preserveReasoning: true,
+		inputPrice: 0.6,
+		outputPrice: 2.2,
+		cacheWritesPrice: 0,
+		cacheReadsPrice: 0.11,
+		description:
+			"GLM-5 is Zhipu's next-generation model with a 202k context window and built-in thinking capabilities. It delivers state-of-the-art reasoning, coding, and agentic performance.",
+	},
 	"glm-4.7-flash": {
 		maxTokens: 16_384,
 		contextWindow: 200_000,
@@ -281,6 +296,21 @@ export const mainlandZAiModels = {
 		description:
 			"GLM-4.7 is Zhipu's latest model with built-in thinking capabilities enabled by default. It provides enhanced reasoning for complex tasks while maintaining fast response times.",
 	},
+	"glm-5": {
+		maxTokens: 16_384,
+		contextWindow: 202_752,
+		supportsImages: false,
+		supportsPromptCache: true,
+		supportsReasoningEffort: ["disable", "medium"],
+		reasoningEffort: "medium",
+		preserveReasoning: true,
+		inputPrice: 0.29,
+		outputPrice: 1.14,
+		cacheWritesPrice: 0,
+		cacheReadsPrice: 0.057,
+		description:
+			"GLM-5 is Zhipu's next-generation model with a 202k context window and built-in thinking capabilities. It delivers state-of-the-art reasoning, coding, and agentic performance.",
+	},
 	"glm-4.7-flash": {
 		maxTokens: 16_384,
 		contextWindow: 204_800,
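As a sanity check on the new GLM-5 pricing fields, assuming they follow the conventional unit of USD per million tokens (an assumption; the diff does not state the unit):

```typescript
// GLM-5 international pricing from the diff above (assumed USD per 1M tokens).
const glm5 = { inputPrice: 0.6, outputPrice: 2.2, cacheWritesPrice: 0, cacheReadsPrice: 0.11 }

// Estimated request cost: cached input reads bill at the cheaper cacheReads rate.
function estimateCostUSD(inputTokens: number, outputTokens: number, cachedReadTokens = 0): number {
	const uncached = inputTokens - cachedReadTokens
	return (
		(uncached * glm5.inputPrice +
			cachedReadTokens * glm5.cacheReadsPrice +
			outputTokens * glm5.outputPrice) /
		1_000_000
	)
}

// 100k input + 20k output ≈ $0.06 + $0.044 ≈ $0.104
console.log(estimateCostUSD(100_000, 20_000))
```

With cacheReadsPrice at 0.11 versus inputPrice at 0.6, a fully cached prompt cuts input cost by roughly 5x, which is why supportsPromptCache matters for this model.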

packages/types/src/vscode-extension-host.ts

Lines changed: 8 additions & 0 deletions

@@ -333,6 +333,7 @@ export type ExtensionState = Pick<
 	| "showWorktreesInHomeScreen"
 	| "disabledTools"
 > & {
+	lockApiConfigAcrossModes?: boolean
 	version: string
 	clineMessages: ClineMessage[]
 	currentTaskItem?: HistoryItem
@@ -529,6 +530,7 @@ export interface WebviewMessage {
 	| "searchFiles"
 	| "toggleApiConfigPin"
 	| "hasOpenedModeSelector"
+	| "lockApiConfigAcrossModes"
 	| "clearCloudAuthSkipModel"
 	| "cloudButtonClicked"
 	| "rooCloudSignIn"
@@ -833,6 +835,12 @@ export interface ClineSayTool {
 			startLine?: number
 		}>
 	}>
+	batchDirs?: Array<{
+		path: string
+		recursive: boolean
+		isOutsideWorkspace?: boolean
+		key: string
+	}>
 	question?: string
 	imageData?: string // Base64 encoded image data for generated images
 	// Properties for runSlashCommand tool
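The lockApiConfigAcrossModes flag added here is consumed by the read-time override described in the commit message. A simplified sketch of that guard (the real checks live in handleModeSwitch and createTaskWithHistoryItem; this standalone shape is illustrative):

```typescript
// Minimal model of the relevant state: the lock flag is workspace-scoped,
// while per-mode config assignments are global.
interface ProviderState {
	lockApiConfigAcrossModes?: boolean // from workspaceState
	modeApiConfigs: Record<string, string> // global per-mode config ids
	currentConfigId: string
}

// Read-time guard: when the lock is on, a mode switch keeps the active config
// instead of restoring the per-mode one. Nothing is written to other modes,
// which is what avoids the cross-workspace fan-out problem.
function configForMode(state: ProviderState, mode: string): string {
	if (state.lockApiConfigAcrossModes) return state.currentConfigId
	return state.modeApiConfigs[mode] ?? state.currentConfigId
}
```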
