This file is the secret sauce for working effectively in this codebase. It captures tribal knowledge: the nuanced, non-obvious patterns that make the difference between a quick fix and hours of back-and-forth and human intervention.
When to add to this file:
- User had to intervene, correct, or hand-hold
- Multiple back-and-forth attempts were needed to get something working
- You discovered something that required reading many files to understand
- A change touched files you wouldn't have guessed
- Something worked differently than you expected
- User explicitly asks to "add this to CLAUDE.md"
Proactively suggest additions when any of the above happen—don't wait to be asked.
What NOT to add: Stuff you can figure out from reading a few files, obvious patterns, or standard practices. This file should be high-signal, not comprehensive.
The extension and webview communicate via gRPC-like protocol over VS Code message passing.
Proto files live in `proto/` (e.g., `proto/cline/task.proto`, `proto/cline/ui.proto`):
- Each feature domain has its own `.proto` file
- For simple data, use shared types in `proto/cline/common.proto` (`StringRequest`, `Empty`, `Int64Request`)
- For complex data, define custom messages in the feature's `.proto` file
- Naming: Services `PascalCaseService`, RPCs `camelCase`, Messages `PascalCase`
- For streaming responses, use the `stream` keyword (see `subscribeToAuthCallback` in `account.proto`)
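The naming and streaming conventions above can be sketched in a hypothetical proto file. `ExampleService`, its RPCs, and `ExampleEvent` are illustrative stand-ins, not real definitions in this repo; only `StringRequest` and `Empty` are the shared types mentioned above:

```proto
syntax = "proto3";
package cline;

import "cline/common.proto";

// Custom message for complex response data (PascalCase).
message ExampleEvent {
  string detail = 1;
}

// Service name is PascalCase + "Service"; RPCs are camelCase.
service ExampleService {
  // Simple request/response using shared types from common.proto.
  rpc doSomething(StringRequest) returns (Empty);
  // Streaming response uses the stream keyword.
  rpc subscribeToExampleEvents(StringRequest) returns (stream ExampleEvent);
}
```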
Run `npm run protos` after any proto changes. It generates types in:
- `src/shared/proto/` - Shared type definitions
- `src/generated/grpc-js/` - Service implementations
- `src/generated/nice-grpc/` - Promise-based clients
- `src/generated/hosts/` - Generated handlers
Adding new enum values (like a new `ClineSay` type) requires updating conversion mappings in `src/shared/proto-conversions/cline-message.ts`
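A sketch of what that conversion mapping looks like, with illustrative stand-in types (the real mapping lives in `src/shared/proto-conversions/cline-message.ts` and uses the generated proto enum):

```typescript
// Stand-in for the generated proto enum; values are illustrative.
enum ProtoClineSay {
	TEXT = 0,
	GENERATE_EXPLANATION = 29,
}

// Stand-in for the app-side say type union.
type AppClineSay = "text" | "generate_explanation"

const sayToProto: Record<AppClineSay, ProtoClineSay> = {
	text: ProtoClineSay.TEXT,
	// Forgetting this entry for a new enum value silently breaks conversion.
	generate_explanation: ProtoClineSay.GENERATE_EXPLANATION,
}

function convertSayToProto(say: AppClineSay): ProtoClineSay {
	return sayToProto[say]
}
```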
Adding new RPC methods requires:
- A handler in `src/core/controller/<domain>/`
- A call from the webview via the generated client: `UiServiceClient.scrollToSettings(StringRequest.create({ value: "browser" }))`
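A minimal handler sketch under stated assumptions: the interfaces below are inline stand-ins for the generated types in `src/shared/proto/`, and the controller is simplified to the one method the example needs (the real Controller exposes much more):

```typescript
// Inline stand-ins for the generated proto types.
interface StringRequest {
	value: string
}
type Empty = Record<string, never>

// Simplified controller surface for illustration.
interface ControllerLike {
	postMessageToWebview(message: unknown): void
}

// Shape of a handler in src/core/controller/<domain>/: an async function
// taking the controller plus the decoded request message.
async function scrollToSettings(controller: ControllerLike, request: StringRequest): Promise<Empty> {
	controller.postMessageToWebview({ type: "scrollToSettings", section: request.value })
	return {}
}

// Usage sketch; the webview reaches this via the generated client.
const sent: unknown[] = []
void scrollToSettings({ postMessageToWebview: (m) => sent.push(m) }, { value: "browser" })
```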
Example: the explain-changes feature touched:
- `proto/cline/task.proto` - Added `ExplainChangesRequest` message and `explainChanges` RPC
- `proto/cline/ui.proto` - Added `GENERATE_EXPLANATION = 29` to `ClineSay` enum
- `src/shared/ExtensionMessage.ts` - Added `ClineSayGenerateExplanation` type
- `src/shared/proto-conversions/cline-message.ts` - Added mapping for the new say type
- `src/core/controller/task/explainChanges.ts` - Handler implementation
- `webview-ui/src/components/chat/ChatRow.tsx` - UI rendering
Adding a new tool is tricky: there are multiple prompt variants and configs. Always search for existing similar tools first and follow their pattern. Look at the full chain from prompt definition → variant configs → handler → UI before implementing.
1. Add to `ClineDefaultTool` enum in `src/shared/tools.ts`
2. Tool definition in `src/core/prompts/system-prompt/tools/` (create a file like `generate_explanation.ts`)
   - Define variants for each `ModelFamily` (generic, next-gen, xs, etc.)
   - Export a variants array (e.g., `export const my_tool_variants = [GENERIC, NATIVE_NEXT_GEN, XS]`)
   - Fallback behavior: if a variant isn't defined for a model family, `ClineToolSet.getToolByNameWithFallback()` automatically falls back to GENERIC. So you only need to export `[GENERIC]` unless the tool needs model-specific behavior.
3. Register in `src/core/prompts/system-prompt/tools/init.ts` - import and spread into `allToolVariants`
4. Add to variant configs - each model family has its own config in `src/core/prompts/system-prompt/variants/*/config.ts`. Add your tool's enum to the `.tools()` list: `generic/config.ts`, `next-gen/config.ts`, `gpt-5/config.ts`, `native-gpt-5/config.ts`, `native-gpt-5-1/config.ts`, `native-next-gen/config.ts`, `gemini-3/config.ts`, `glm/config.ts`, `hermes/config.ts`, `xs/config.ts`
   - Important: if you add the tool to a variant's config, make sure the tool spec exports a variant for that `ModelFamily` (or relies on the GENERIC fallback)
5. Create a handler in `src/core/task/tools/handlers/`
6. Wire up in `ToolExecutor.ts` if needed for the execution flow
7. Add to tool parsing in `src/core/assistant-message/index.ts` if needed
8. If the tool has UI feedback: add a `ClineSay` enum value in proto, update `src/shared/ExtensionMessage.ts`, `src/shared/proto-conversions/cline-message.ts`, and `webview-ui/src/components/chat/ChatRow.tsx`
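The GENERIC fallback in step 2 can be sketched as follows. This is a simplified stand-in for `ClineToolSet.getToolByNameWithFallback()`; the types and data are illustrative, not the real ones:

```typescript
type ModelFamily = "generic" | "next-gen" | "xs"

interface ToolVariant {
	family: ModelFamily
	prompt: string
}

// A tool that only exports [GENERIC]: other families fall back to it.
const my_tool_variants: ToolVariant[] = [{ family: "generic", prompt: "generic tool prompt" }]

// Prefer the exact family's variant; otherwise fall back to GENERIC.
function getVariantWithFallback(variants: ToolVariant[], family: ModelFamily): ToolVariant | undefined {
	return variants.find((v) => v.family === family) ?? variants.find((v) => v.family === "generic")
}

// xs has no dedicated variant here, so the generic one is resolved.
const resolved = getVariantWithFallback(my_tool_variants, "xs")
```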
Read these first: `src/core/prompts/system-prompt/README.md`, `tools/README.md`, `__tests__/README.md`
System prompt is modular: components (reusable sections) + variants (model-specific configs) + templates (with `{{PLACEHOLDER}}` resolution).
Key directories:
- `components/` - Shared sections: `rules.ts`, `capabilities.ts`, `editing_files.ts`, etc.
- `variants/` - Model-specific: `generic/`, `next-gen/`, `xs/`, `gpt-5/`, `gemini-3/`, `hermes/`, `glm/`, etc.
- `templates/` - Template engine and placeholder definitions
Variant tiers (ask user which to modify):
- Next-gen (Claude 4, GPT-5, Gemini 2.5): `next-gen/`, `native-next-gen/`, `native-gpt-5/`, `native-gpt-5-1/`, `gemini-3/`, `gpt-5/`
- Standard (default fallback): `generic/`
- Local/small models: `xs/`, `hermes/`, `glm/`
How overrides work: variants can override components via `componentOverrides` in their `config.ts`, or provide a custom template in `template.ts` (e.g., `next-gen/template.ts` exports `rules_template`). If there is no override, the shared component from `components/` is used.
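A sketch of that resolution order, assuming a simplified config shape (the real `config.ts` types are richer; the component IDs and strings here are stand-ins):

```typescript
type ComponentId = "RULES" | "CAPABILITIES"

interface VariantConfig {
	componentOverrides?: Partial<Record<ComponentId, string>>
}

// Stand-ins for the shared sections in components/.
const sharedComponents: Record<ComponentId, string> = {
	RULES: "shared rules section",
	CAPABILITIES: "shared capabilities section",
}

// Variant override wins; otherwise the shared component is used.
function resolveComponent(variant: VariantConfig, id: ComponentId): string {
	return variant.componentOverrides?.[id] ?? sharedComponents[id]
}

const nextGen: VariantConfig = { componentOverrides: { RULES: "next-gen rules section" } }
const rules = resolveComponent(nextGen, "RULES") // override applies
const capabilities = resolveComponent(nextGen, "CAPABILITIES") // falls back to shared
```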
Example: adding a rule to the RULES section
- Check if the variant overrides rules: look for `rules_template` in `variants/*/template.ts` or `componentOverrides.RULES` in `config.ts`
- If shared: modify `components/rules.ts`
- If overridden: modify that variant's template
- The XS variant is special: it has heavily condensed inline content in `template.ts`
After any changes, regenerate snapshots: `UPDATE_SNAPSHOTS=true npm run test:unit`. Snapshots live in `__tests__/__snapshots__/`. Tests validate across model families and context variations (browser, MCP, focus chain).
Adding or changing a slash command requires updates in three places:
- `src/core/slash-commands/index.ts` - Command definitions
- `src/core/prompts/commands.ts` - System prompt integration
- `webview-ui/src/utils/slash-commands.ts` - Webview autocomplete
When a ChatRow displays a loading/in-progress state (spinner), you must handle what happens when the task is cancelled. This is non-obvious because cancellation doesn't update the message content—you have to infer it from context.
The pattern:
- A message has a `status` field (e.g., `"generating"`, `"complete"`, `"error"`) stored in `message.text` as JSON
- When cancelled mid-operation, the status stays `"generating"` forever; nothing updates it
- To detect cancellation, check TWO conditions:
  - `!isLast` - if this message is no longer the last message, something else happened after it (interrupted)
  - `lastModifiedMessage?.ask === "resume_task"` or `lastModifiedMessage?.ask === "resume_completed_task"` - the task was just cancelled and is waiting to resume
Example from generate_explanation:

```typescript
const wasCancelled =
	explanationInfo.status === "generating" &&
	(!isLast ||
		lastModifiedMessage?.ask === "resume_task" ||
		lastModifiedMessage?.ask === "resume_completed_task")

const isGenerating = explanationInfo.status === "generating" && !wasCancelled
```

Why both checks?
- `!isLast` catches: cancelled → resumed → did other stuff → this old message is stale
- `lastModifiedMessage?.ask === "resume_task"` catches: just cancelled, hasn't resumed yet, this message is still technically "last"
See also: `BrowserSessionRow.tsx` uses a similar pattern with `isLastApiReqInterrupted` and `isLastMessageResume`.
Backend side: when streaming is cancelled, clean up properly (close tabs, clear comments, etc.) by checking `taskState.abort` after the streaming function returns.
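A sketch of that cleanup pattern, with a minimal stand-in for the task state (the real cleanup closes tabs, clears comments, etc.; `consumeStream` is a hypothetical name):

```typescript
interface TaskStateLike {
	abort: boolean
}

// Consume a stream, stopping when the task is aborted. After the loop
// returns, re-check abort and discard/clean up partial results instead of
// treating them as a finished operation.
async function consumeStream(taskState: TaskStateLike, stream: AsyncIterable<string>): Promise<string[]> {
	const chunks: string[] = []
	for await (const chunk of stream) {
		if (taskState.abort) break
		chunks.push(chunk)
	}
	if (taskState.abort) {
		// Stand-in for real cleanup (close tabs, clear comments, ...).
		chunks.length = 0
	}
	return chunks
}
```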