feat(opencode): rich handoff with tool calls, diffs, model, and token usage #37

dhruvkej9 wants to merge 112 commits into yigitkonur:main from
Conversation
Updated video source link and added badges.
Add full session parsing for Factory AI's Droid CLI (factory.ai). Reads JSONL sessions from ~/.factory/sessions/ with companion .settings.json for model/token metadata. Parses Create/Read/Edit/Execute/Bash/LS tool calls, MCP tools, thinking blocks, and todo_state events. Supports native resume via `droid -s <id>` and cross-tool handoff in both directions. All 7 integration points wired: types, parser, barrel export, session index, resume commands, markdown labels, and CLI UI.
Add createDroidFixture() with realistic JSONL + settings.json data. Add Droid-specific parser tests (message extraction, tool_use filtering, session_start/todo_state skipping). Update all test files to include droid in Record<SessionSource, ...> maps. Conversion paths: 5×4=20 → 6×5=30.
- Bump version 2.6.7 → 2.7.0
- Add postinstall message announcing Droid support
- Update README: add Droid to feature list, extraction table, storage table, quick-resume examples
- Create CHANGELOG.md with 2.7.0 release notes
- Update CLAUDE.md to reflect 6 supported tools
Add Factory Droid CLI support
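The session-reading flow described above can be sketched roughly like this. Everything here is an assumption reconstructed from the commit message — the file layout (`<id>.jsonl` plus a `.settings.json` sidecar), the event `type` names, and the function name are illustrative, not the actual parser:

```ts
// Hypothetical sketch of reading a Droid JSONL session + settings sidecar.
// Paths, event types, and field names are assumptions from the description above.
import * as fs from 'node:fs';
import * as path from 'node:path';

function readDroidSession(sessionsDir: string, id: string) {
  // Each line of the .jsonl file is one event.
  const lines = fs
    .readFileSync(path.join(sessionsDir, `${id}.jsonl`), 'utf8')
    .split('\n')
    .filter(Boolean)
    .map((l) => JSON.parse(l));

  // Skip structural events (session_start, todo_state); keep conversational entries.
  const messages = lines.filter((e) => e.type !== 'session_start' && e.type !== 'todo_state');

  // Companion settings file carries model/token metadata.
  const settingsPath = path.join(sessionsDir, `${id}.settings.json`);
  const settings = fs.existsSync(settingsPath)
    ? JSON.parse(fs.readFileSync(settingsPath, 'utf8'))
    : {};

  return { messages, settings };
}
```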
Add Cursor IDE as the 7th supported platform. Parses agent-transcript JSONL files from ~/.cursor/projects/*/agent-transcripts/, extracts conversation history, tool usage (Anthropic-style tool_use/tool_result blocks), and thinking/reasoning highlights.
- New parser: src/parsers/cursor.ts
- Strips <user_query> tags from Cursor's wrapped user messages
- Derives working directory from project slug path
- Full integration: types, index, resume, CLI quick-command, colors
- Test fixtures and 42 cross-tool conversion paths (7×6) all passing
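The tag-stripping step mentioned above amounts to unwrapping Cursor's wrapped user messages. A minimal sketch, assuming a simple regex (the actual handling in src/parsers/cursor.ts may differ):

```ts
// Remove Cursor's <user_query> wrapper from user messages before storing them.
// The regex is an assumption; the real parser may handle nesting or attributes.
function stripUserQueryTags(text: string): string {
  return text.replace(/<\/?user_query>/g, '').trim();
}
```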
…ror-message fix: show clear error when only one CLI tool is installed
feat: add Cursor AI support
Cursor replaces both `/` and `.` with `-` in project slugs, making path reconstruction ambiguous. The previous greedy left-to-right approach failed for names like `dzcm.test` (resolved as `dzcm/test`). The new approach uses recursive backtracking: at each dash position it tries treating the dash as `/`, `.`, or a literal `-`, and checks fs.existsSync only on the final complete path. This correctly resolves all three cases:
- dzcm-test → dzcm.test
- readybyte-test → readybyte.test
- laravel-contentai → laravel/contentai
feat: add Cursor AI support with smart slug-to-path resolution
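The backtracking idea can be sketched as follows. This is a hypothetical helper, not the actual resolver — the function name, the configurable `base`, and the separator ordering are assumptions made for illustration:

```ts
// Recursive backtracking slug-to-path resolution, per the description above:
// at each '-', try '/', '.', or a literal '-'; only the final path hits the disk.
import * as fs from 'node:fs';

function resolveSlug(slug: string, base = '/'): string | undefined {
  function recurse(rest: string, built: string): string | undefined {
    const dash = rest.indexOf('-');
    if (dash === -1) {
      const full = built + rest;
      return fs.existsSync(full) ? full : undefined;
    }
    const head = rest.slice(0, dash);
    const tail = rest.slice(dash + 1);
    for (const sep of ['/', '.', '-']) {
      const found = recurse(tail, built + head + sep);
      if (found) return found;
    }
    return undefined;
  }
  return recurse(slug, base);
}
```

Because existence is only checked on the complete candidate path, a wrong early choice (e.g. `dzcm/` for `dzcm.test`) simply fails and the next separator is tried.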
Replace 40+ manual wiring points across 8 files with a centralized ToolAdapter registry. Adding a new CLI tool now requires only 3 files instead of 8.

Key changes:
- New src/parsers/registry.ts with ToolAdapter interface and 7 registrations
- New src/utils/parser-helpers.ts with shared utilities (cleanSummary, extractRepoFromCwd, homeDir) extracted from duplicated parser code
- Rewired index.ts, resume.ts, cli.ts, markdown.ts to use registry lookups instead of explicit switch/import per tool
- Promise.allSettled replaces Promise.all in buildIndex (one broken parser no longer crashes the CLI)
- Fixed missing 'cursor' in two hardcoded allTools arrays
- SQLite close() moved to finally blocks in opencode.ts
- TOCTOU race fix in indexNeedsRebuild()
- Corrupted cache line protection in loadIndex()
- Removed dead SessionParser and ResumeOptions interfaces
- Removed unused 'tool' role from ConversationMessage
- Standardized message window to slice(-20) across parsers
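The registry pattern described above can be illustrated with a stripped-down sketch. The interface fields and function names here are assumptions, not the actual src/parsers/registry.ts API:

```ts
// Illustrative ToolAdapter registry; real adapters carry more fields (paths, resume commands, etc.).
interface ToolAdapter {
  name: string; // stable id, e.g. 'cursor'
  label: string; // display label for CLI/markdown
  parseSessions(): Promise<unknown[]>;
}

const registry = new Map<string, ToolAdapter>();

function registerAdapter(adapter: ToolAdapter): void {
  registry.set(adapter.name, adapter);
}

registerAdapter({
  name: 'cursor',
  label: 'Cursor',
  parseSessions: async () => [],
});

// Consumers iterate the registry instead of maintaining per-tool switch statements:
const labels = Array.from(registry.values()).map((a) => a.label);
```

With a registry like this, an index builder can run `Promise.allSettled(adapters.map((a) => a.parseSessions()))`, so one broken parser rejects its own promise instead of taking down the whole CLI.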
- New beta-publish.yml: runs tests then publishes to npm with --tag beta on every push to develop
- CI now also runs tests on develop branch pushes
- Version bumped to 2.8.0-beta.0
The workflow now checks the current beta version on npm and automatically bumps the prerelease number before publishing. No manual version edits needed — just push to develop.
The 't' and 'i' letters were narrower than the other 4-char-wide letters, causing a visible gap on the bottom row. Now all 9 letters use centered double-width stems (▀██▀ for t, ▄▄ dot for i) at exactly 4 characters wide with uniform spacing.
Replace flat white/cyan banner with per-letter hex gradient from soft indigo (#9b8ec9) through sky blue to bright cyan, with the 's' brand mark in bold mint (#00ffc8). Also dropped the half-block dot on 'i' in favor of a clean thick bar matching 't' width.
BREAKING CHANGES:
- SessionSource type is now derived from TOOL_NAMES const array
- SOURCE_LABELS proxy removed (use getSourceLabels() instead)
- package.json main changed from dist/cli.js to dist/index.js
New features:
- Public library API via `import { getAllSessions } from 'continues'`
- Zod runtime validation for all JSON.parse calls (zero as-any in src/)
- Typed SQLite interface for OpenCode parser
- Biome linting/formatting with zero errors
Architecture:
- CLI split into src/commands/ and src/display/ (was 776-line monolith)
- Shared parser infrastructure (fs-helpers, jsonl, content, tool-extraction)
- Structured error hierarchy (ContinuesError base + 6 subclasses)
- Logger with configurable levels (--verbose/--debug/CONTINUES_DEBUG)
- Registry completeness assertion at module load
- ContentBlock discriminated union shared across all parsers
Cleanup:
- Removed dead constants.ts (never imported)
- Removed superseded test files (conversions.test.ts, parsers.test.ts)
- Removed unused function parameter (buildReferencePrompt filePath)
- Added .gitignore entries for local config files
- Net -1003 lines (4749 added, 3038 removed, + new modules)
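The structured error hierarchy item above could look roughly like this. The `ContinuesError` base name comes from the notes; the subclass names and the `details` field are illustrative assumptions:

```ts
// Sketch of a ContinuesError-based hierarchy; subclass names are hypothetical.
class ContinuesError extends Error {
  constructor(message: string, readonly details?: unknown) {
    super(message);
    this.name = new.target.name; // so subclasses report their own name
  }
}

class ParserError extends ContinuesError {}
class SessionNotFoundError extends ContinuesError {}

// Callers can branch on the subclass while catching one base type:
function describe(err: unknown): string {
  if (err instanceof SessionNotFoundError) return `not found: ${err.message}`;
  if (err instanceof ContinuesError) return `continues: ${err.message}`;
  return String(err);
}
```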
The auto-increment script was always incrementing from the npm beta base version (2.8.0-beta.3 → 2.8.0-beta.4), ignoring that package.json had been bumped to 3.0.0-beta.0. Now compares the base versions and uses package.json as-is when they differ.
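The comparison logic described above can be sketched as a small shell function. The function and variable names are illustrative, not the actual workflow's; fetching the current npm beta (e.g. via `npm view <pkg> dist-tags.beta`) is left out:

```shell
# Hedged sketch: prefer package.json when its base version differs from npm's beta base,
# otherwise auto-increment the prerelease number from the npm side.
next_beta_version() {
  pkg="$1"        # version in package.json, e.g. 3.0.0-beta.0
  npm_beta="$2"   # current beta on npm, e.g. 2.8.0-beta.3
  pkg_base="${pkg%%-*}"
  npm_base="${npm_beta%%-*}"
  if [ "$pkg_base" != "$npm_base" ]; then
    # package.json moved to a new base version: publish it as-is.
    echo "$pkg"
  else
    # Same base: bump the prerelease counter.
    n="${npm_beta##*beta.}"
    echo "${npm_base}-beta.$((n + 1))"
  fi
}

next_beta_version "3.0.0-beta.0" "2.8.0-beta.3"   # prints 3.0.0-beta.0
```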
Accept v3 develop-side values (version 3.0.0-beta.0, library exports, Cursor in description) over the 2.7.5 release bump on main.
…warding (yigitkonur#9) feat!: land v3 refactor on main + registry-driven cross-tool flag forwarding
Redesign the handoff markdown pipeline so each tool category gets optimized extraction (capture what matters) and display (show what matters) instead of flat one-liner summaries.

Pipeline changes:
- Add StructuredToolSample discriminated union (11 categories) with per-type data shapes (ShellSampleData, WriteSampleData, etc.)
- Refactor SummaryCollector to options-object API with per-category sample limits and error tracking
- Rewrite extractAnthropicToolData to produce structured samples: shell commands get exitCode + stdoutTail, writes/edits get unified diffs, grep/glob get match/file counts, MCP gets truncated params
- Add minimal diff utility (formatNewFileDiff, formatEditDiff, extractStdoutTail) with no external dependencies

Rendering changes:
- Category-aware markdown renderer with per-type templates: shell → blockquote with exit code + stdout in console blocks; write/edit → fenced diff blocks with +/- line stats; read → bullet list with optional line ranges; grep/glob → bullet list with match/file counts
- Inline mode display caps (shell: 5, write/edit: 3, read: 15, etc.)
- Fixed category ordering: Shell→Write→Edit→Read→Grep→Glob→…

Parser updates:
- Codex: structured shell/patch/search/task data
- Gemini: capture fileDiff and diffStats from resultDisplay
- Copilot: extract tool data from toolRequests (was empty)
- Droid/Cursor/Claude: get rich data for free via shared extraction

Tests: 258 passing (20 new tests for classifyToolName, diff utils, structured data extraction, and category-aware rendering)
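The discriminated-union-plus-renderer idea can be illustrated with a stripped-down sketch. The categories and fields here are reduced for brevity — the real shapes like ShellSampleData carry more data, and these exact names are assumptions:

```ts
// Reduced sketch of per-category sample shapes and a category-aware renderer.
type StructuredToolSample =
  | { category: 'shell'; command: string; exitCode?: number; stdoutTail?: string }
  | { category: 'edit'; filePath: string; added: number; removed: number }
  | { category: 'read'; filePath: string };

function renderSample(s: StructuredToolSample): string {
  // The discriminant lets each category get its own template,
  // instead of one flat one-liner format for every tool.
  switch (s.category) {
    case 'shell':
      return `> ${s.command} (exit ${s.exitCode ?? '?'})`;
    case 'edit':
      return `${s.filePath} (+${s.added} -${s.removed})`;
    case 'read':
      return `- ${s.filePath}`;
  }
}
```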
Add full changelog entries covering:
- v3.1.0: smart context display with visual before/after examples showing exactly how the handoff markdown output changed
- v3.0.0: adapter registry, library exports, cursor support, typed schemas, CLI modularization, 7 parser rewrites
- v2.7.0 and v2.6.7 carried forward from previous changelog
…gitkonur#28) closes yigitkonur#18
- store env fingerprint as first line of session index cache
- invalidate cache when env vars like CLAUDE_CONFIG_DIR change
- deduplicate env vars (XDG_DATA_HOME, GEMINI_CLI_HOME appear in multiple adapters)
- hash fingerprint with sha256 for privacy
- regression tests for all fingerprint behaviors

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
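The fingerprint mechanism described above can be sketched in a few lines. The variable list and serialization format are assumptions; only the dedup-then-sha256 shape comes from the commit message:

```ts
// Minimal env-fingerprint sketch: dedupe var names, serialize name=value pairs
// in a stable order, and hash so raw paths never land in the cache file.
import { createHash } from 'node:crypto';

const FINGERPRINT_VARS = ['CLAUDE_CONFIG_DIR', 'XDG_DATA_HOME', 'GEMINI_CLI_HOME'];

function envFingerprint(env: NodeJS.ProcessEnv = process.env): string {
  const material = [...new Set(FINGERPRINT_VARS)]
    .sort()
    .map((name) => `${name}=${env[name] ?? ''}`)
    .join('\n');
  return createHash('sha256').update(material).digest('hex');
}
```

On load, the first line of the cache is compared against a freshly computed fingerprint; a mismatch triggers a rebuild.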
…fety (yigitkonur#22) (yigitkonur#29) fixes yigitkonur#22 — opencode handoff, gemini resume, and windows spawn safety
Commit c527292 migrated qwen-code types to schemas.ts but never added the types there, breaking the build with 20+ TS2304 errors. This restores the inline type definitions (QwenPart, QwenContent, QwenToolCallResult, QwenFileDiff, QwenTodoResult, QwenUsageMetadata, QwenSystemPayload, QwenChatRecord) and reverts the QwenChatRecordSchema.safeParse() calls back to JSON.parse() as QwenChatRecord since the schema never existed. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
…ensive timestamps, dedup, tree path (yigitkonur#32)
- replace unsafe `as QwenChatRecord` casts with `QwenChatRecordSchema.safeParse()`
- add zod schemas for all qwen code types in schemas.ts
- remove duplicate local interfaces (restored by 1f2d89e, now properly in schemas)
- defensive timestamp parsing with mtime fallback for invalid dates
- deduplicate tool_result vs functionCall entries via parentUuid tracking
- reconstruct main conversation path from uuid/parentUuid tree
- rename antigravity fixture session.json → session.jsonl for accuracy

all 694 tests pass.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
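The uuid/parentUuid main-path reconstruction can be sketched as follows. The record shape is assumed, and the sketch assumes an acyclic tree where a later sibling supersedes an earlier one (i.e. the walk follows the most recent branch):

```ts
// Walk from the root (no parentUuid) down the chain, letting later records
// override earlier siblings so the most recent branch wins.
interface ChatRecord {
  uuid: string;
  parentUuid?: string;
}

function mainPath(records: ChatRecord[]): ChatRecord[] {
  const byParent = new Map<string | undefined, ChatRecord>();
  for (const r of records) byParent.set(r.parentUuid, r);
  const chain: ChatRecord[] = [];
  let current = byParent.get(undefined); // root has no parent
  while (current) {
    chain.push(current);
    current = byParent.get(current.uuid);
  }
  return chain;
}
```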
…-flags-more-clis feat: expand handoff auto-approval defaults to more CLIs
…-presets-tab-cycle improve handoff parsing, chaining, and banner quality
Ensure Cursor session slugs with Windows drive prefixes resolve to valid drive-letter paths instead of /D/... fallbacks, preventing ENOENT chdir failures when resuming. Adds regression tests for drive-path resolution and fallback behavior. Made-with: Cursor
…ession The fallback path in cwdFromSlug() produced Windows-style `D:/...` paths on Unix for slugs starting with a single letter (e.g. `D-Workspace-...`). Gate the drive-letter fallback behind `IS_WINDOWS` so Unix correctly returns `/D/Workspace/...`. Also fix the test: make the Windows drive-letter fallback assertion Windows-only (itWindows) and add a Unix counterpart asserting the correct `/D/Workspace/project/alpha` result.
…-slug fix: resolve Windows Cursor cwd slug path conversion
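The platform gate described above can be sketched like this. The `IS_WINDOWS` name comes from the commit message; the fallback body is a simplified assumption of the real `cwdFromSlug()` logic:

```ts
// Only treat a leading single letter as a drive prefix on Windows;
// on Unix the same slug resolves to a plain /D/... path.
const IS_WINDOWS = process.platform === 'win32';

function cwdFromSlugFallback(slug: string): string {
  const parts = slug.split('-');
  if (IS_WINDOWS && parts.length > 1 && /^[A-Za-z]$/.test(parts[0])) {
    return `${parts[0]}:/${parts.slice(1).join('/')}`;
  }
  return `/${parts.join('/')}`;
}
```

This mirrors the test split in the commit: a Windows-only assertion for the `D:/...` form and a Unix counterpart asserting `/D/Workspace/project/alpha`.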
… usage
The OpenCode parser previously only extracted plain text messages,
producing handoffs that lacked context about what actually happened
in the session. This rewrite extracts all available data from
OpenCode's SQLite database and JSON storage.
What's new in the handoff:
* Model and provider info (e.g. gpt-5.3-codex, openai)
* Token usage breakdown (input, output, reasoning, cache read/write)
* Thinking tokens from OpenAI o-series models
* Active time tracking from first-to-last message timestamps
* Shell commands with exit codes and stdout output
* Full apply_patch diffs with file stats (+N -M)
* Glob results with match counts
* Read operations with file paths
* Generic tool calls (MCP, web_search, web_fetch)
* File modification tracking from patch parts and shell redirects
* Todo extraction from the todo table (pending tasks)
* Reasoning highlights from reasoning parts
Technical changes:
* Rewrite extractToolDataFromParts() with SQLite two-pass extraction
(collect parts, then classify and summarize each tool call)
* Add extractToolDataFromJsonParts() for legacy JSON file storage
* Add extractSessionNotesFromSqlite() for model/tokens/reasoning
* Add extractSessionNotesFromJson() for legacy fallback
* Add extractTodos() for pending task extraction
* Add readConversationMessages{Sqlite,Json}() for clean message reading
* Fix z.record() calls for Zod 4 (requires key + value schemas)
* Update session list to show model from assistant messages
* Use SummaryCollector pattern matching Claude/Codex parsers
Tested end-to-end:
- Ran 5 messages in OpenCode (glob, read, apply_patch, bash)
- Handoff to Codex produces full context with 4 tool categories,
2 file modifications, model info, and token usage
```ts
let totalCost = 0;

const msgRows = db
  .prepare('SELECT data FROM message WHERE session_id = ? ORDER BY time_created ASC')
  .all(sessionId) as { data: string }[];

for (const row of msgRows) {
  try {
    const msgData = SqliteMsgDataSchema.parse(JSON.parse(row.data));

    // Model info (take from first assistant message that has it)
    if (msgData.role === 'assistant' && msgData.modelID && !notes.model) {
      notes.model = msgData.modelID;
    }

    // Token usage (accumulate — take last value for display)
    if (msgData.tokens) {
      notes.tokenUsage = {
        input: (notes.tokenUsage?.input || 0) + (msgData.tokens.input || 0),
        output: (notes.tokenUsage?.output || 0) + (msgData.tokens.output || 0),
      };
      if (msgData.tokens.reasoning && msgData.tokens.reasoning > 0) {
        notes.thinkingTokens = (notes.thinkingTokens || 0) + msgData.tokens.reasoning;
      }
      if (msgData.tokens.cache) {
        notes.cacheTokens = {
          read: (notes.cacheTokens?.read || 0) + (msgData.tokens.cache.read || 0),
          creation: (notes.cacheTokens?.creation || 0) + (msgData.tokens.cache.write || 0),
        };
      }
    }

    // Cost tracking
    if (msgData.cost && msgData.cost > 0) {
      totalCost += msgData.cost;
    }
```
🔴 totalCost is accumulated but never stored in the returned SessionNotes object
In extractSessionNotesFromSqlite, totalCost is declared (line 813) and accumulated from each message's msgData.cost (line 847), but it is never assigned to the notes object that is returned. The cost data is silently discarded. The SessionNotes type at src/types/index.ts:230 also has no cost field, so this appears to be a partially-implemented feature where both the type update and the assignment were forgotten.
Prompt for agents
In src/parsers/opencode.ts, the extractSessionNotesFromSqlite function accumulates totalCost at line 813 and 847 but never assigns it to the notes object. Either:
1. Add a `cost?: number` field to the `SessionNotes` interface in src/types/index.ts (around line 242), and add `if (totalCost > 0) notes.cost = totalCost;` before the return statement at line 886 in src/parsers/opencode.ts.
2. Or remove the dead totalCost variable entirely (lines 813, 846-848) if cost tracking is not needed.
```ts
} catch {
  /* ignore */
}
```
🔴 Empty catch blocks with only comments violate the mandatory logger.debug convention
REVIEW.md and AGENTS.md both mandate: "Empty catch {} blocks fail the linter; use catch (err) { logger.debug(...) } instead." The PR introduces 9 catch blocks that contain only /* ignore */ comments and no logger.debug call (lines 359, 376, 501, 849, 867, 935, 939, 995, 1007). The existing code in the same file consistently uses logger.debug in catch blocks (e.g., src/parsers/opencode.ts:282, src/parsers/opencode.ts:300), so this is also inconsistent with the local code style.
Prompt for agents
In src/parsers/opencode.ts, replace all empty catch blocks that only have comments with catch blocks that call logger.debug. There are 9 instances at lines 359, 376, 501, 849, 867, 935, 939, 995, and 1007. Each should follow the pattern:
catch (err) { logger.debug('opencode: <description of what failed>', err); }
For example, line 849 should become: catch (err) { logger.debug('opencode: failed to parse message data', err); }
```ts
const exitCode = metadata.exit !== undefined ? Number(metadata.exit) : extractExitCode(output);
const errored = exitCode !== undefined && exitCode !== 0;
const stdoutTail = metadata.output ? extractStdoutTail(String(metadata.output), 5) : output ? extractStdoutTail(output, 5) : undefined;
const description = String(metadata.description || input.description || '');
```
🟡 Unused variable description assigned but never referenced
At line 530, const description = String(metadata.description || input.description || '') is assigned within the bash case of extractToolDataFromParts, but the variable is never used in the summary, data object, or anywhere else. This is likely a bug where the description was intended to be included in the shell summary or data payload but was forgotten.
Suggested change:
```diff
-const description = String(metadata.description || input.description || '');
```
```ts
function trackPatchFiles(patchText: string, collector: SummaryCollector): void {
  const fileMatches = patchText.match(/\*\*\* (?:Add|Update|Delete) File: (.+)/g) || [];
  for (const match of fileMatches) {
    const filePath = match.replace(/^\*\*\* (?:Add|Update|Delete) File: /, '');
    collector.trackFile(filePath);
  }
}
```
🟡 Function trackPatchFiles is defined but never called (dead code)
The function trackPatchFiles at line 464 extracts file paths from patch text and tracks them via collector.trackFile(). However, it is never called anywhere in the file. The same logic is implemented inline within both extractToolDataFromParts (lines 576-577) and extractToolDataFromJsonParts (lines 765-766). This dead function adds confusion about the code's intent.
Suggested change:
```diff
-function trackPatchFiles(patchText: string, collector: SummaryCollector): void {
-  const fileMatches = patchText.match(/\*\*\* (?:Add|Update|Delete) File: (.+)/g) || [];
-  for (const match of fileMatches) {
-    const filePath = match.replace(/^\*\*\* (?:Add|Update|Delete) File: /, '');
-    collector.trackFile(filePath);
-  }
-}
```
```ts
function extractSessionNotesFromJson(sessionId: string): SessionNotes {
  const notes: SessionNotes = {};
  const reasoning: string[] = [];

  const messageDir = path.join(OPENCODE_STORAGE_DIR, 'message', sessionId);
  if (!fs.existsSync(messageDir)) return notes;

  const msgFiles = fs
    .readdirSync(messageDir)
    .filter((f) => f.startsWith('msg_') && f.endsWith('.json'))
    .sort();

  for (const msgFile of msgFiles) {
    try {
      const msgPath = path.join(messageDir, msgFile);
      const msgContent = fs.readFileSync(msgPath, 'utf8');
      const msgResult = OpenCodeMessageSchema.safeParse(JSON.parse(msgContent));
      if (!msgResult.success) continue;
      const msg = msgResult.data;

      if (msg.role === 'assistant' && (msg as Record<string, unknown>).modelID && !notes.model) {
        notes.model = (msg as Record<string, unknown>).modelID as string;
      }

      // Read parts for reasoning
      const partDir = path.join(OPENCODE_STORAGE_DIR, 'part', msg.id);
      if (!fs.existsSync(partDir)) continue;

      const partFiles = fs
        .readdirSync(partDir)
        .filter((f) => f.startsWith('prt_') && f.endsWith('.json'))
        .sort();

      for (const partFile of partFiles) {
        if (reasoning.length >= 10) break;
        const partPath = path.join(partDir, partFile);
        const partContent = fs.readFileSync(partPath, 'utf8');
        try {
          const partData = JSON.parse(partContent);
          if (partData.type === 'reasoning' && partData.text && partData.text.length > 20) {
            const firstLine = partData.text.split(/[.\n]/)[0]?.trim();
            if (firstLine) reasoning.push(truncate(firstLine, 200));
          }
        } catch {
          /* ignore */
        }
      }
    } catch {
      /* ignore */
    }
  }

  if (reasoning.length > 0) notes.reasoning = reasoning;
  return notes;
```
🚩 JSON fallback extractSessionNotesFromJson does not extract token usage or cost
The SQLite path extractSessionNotesFromSqlite accumulates tokenUsage, thinkingTokens, cacheTokens, and cost from message data. However, the JSON fallback extractSessionNotesFromJson (line 892) only extracts model and reasoning — it does not attempt to parse token or cost data from JSON message files. This means sessions parsed via the JSON fallback will always have incomplete SessionNotes compared to SQLite sessions. This may be intentional (JSON format may not include token data), but it's an asymmetry worth noting.
```ts
default: {
  const params = JSON.stringify(input).slice(0, 100);
  const partTool = String(part.tool);
  collector.add(partTool, mcpSummary(partTool, params, output), {
    data: {
      category: 'mcp',
      toolName: partTool,
      params,
      ...(output ? { result: output.slice(0, 100) } : {}),
    },
  });
}
```
📝 Info: JSON fallback switch statement missing web_search and web_fetch cases
The SQLite tool extraction path (extractToolDataFromParts, lines 595-613) has explicit case 'web_search' and case 'web_fetch' handlers with proper category tagging. The JSON fallback extractToolDataFromJsonParts (line 725) does not have these cases — web search/fetch tools fall through to the default MCP handler, which tags them with category: 'mcp' instead of 'search'/'fetch'. This creates an inconsistency in how the same tools are categorized depending on the storage backend. If downstream rendering uses the category field for display, web tools from JSON sessions will look different.
```ts
const partDataResult = SqlitePartDataSchema.safeParse(JSON.parse(row.data));
if (!partDataResult.success) continue;
// Cast to flexible type for tool data access
const part = JSON.parse(row.data) as Record<string, unknown>;
```
📝 Info: Double JSON.parse on each part row in tool extraction
In both extractToolDataFromParts (lines 509-512) and extractToolDataFromJsonParts (lines 712-715), each part row's data is parsed twice: once for SqlitePartDataSchema.safeParse(JSON.parse(row.data)) validation, and then again with JSON.parse(row.data) as Record<string, unknown> for flexible access. The validated result from safeParse is discarded after the success check. This is a minor performance issue — the validated partDataResult.data could be used directly instead of re-parsing.
```ts
if (hasSqliteDb()) {
  const handle = openDb();
  if (handle) {
    const { db, close } = handle;
    try {
      // Extract rich tool data from parts
      const toolData = extractToolDataFromParts(session.id, db, resolvedConfig);
      toolSummaries = toolData.summaries;
      filesModified = toolData.filesModified;

      // Extract session notes (model, tokens, reasoning)
      sessionNotes = extractSessionNotesFromSqlite(session.id, db);

      // Extract pending todos
      pendingTasks = extractTodos(session.id, db);

      // Read conversation messages
      allMessages = readConversationMessagesFromSqlite(session.id, db);
    } finally {
      close();
    }
    break; // found the session file
  }
} catch {
  // Silently skip — tool summaries are optional
}
```
📝 Info: SQLite context extraction opens only one DB handle for all operations
The refactored extractOpenCodeContext (line 1085) opens a single database handle and passes it to extractToolDataFromParts, extractSessionNotesFromSqlite, extractTodos, and readConversationMessagesFromSqlite. This is a good improvement over the old readMessagesFromSqlite which opened its own DB handle separately. The single-handle approach avoids multiple open/close cycles and ensures consistent reads within the same connection.
```diff
-import type { ConversationMessage, SessionContext, ToolUsageSummary, UnifiedSession } from '../types/index.js';
+import type {
+  ConversationMessage,
+  ReasoningStep,
```
📝 Info: Unused imports ReasoningStep, withResult, and type SqlitePartRow
ReasoningStep (line 8), withResult (line 41), and SqlitePartRow (line 18) are imported but never used in the file. While noUnusedImports is set to "error" in biome.json:26 and will be caught by the linter, these indicate possible incomplete implementation — e.g., withResult may have been intended for tool summary formatting, and ReasoningStep for structured reasoning data.
```ts
const partDataResult = SqlitePartDataSchema.safeParse(JSON.parse(row.data));
if (!partDataResult.success) continue;
// Cast to flexible type for tool data access
const part = JSON.parse(row.data) as Record<string, unknown>;
```

Zod validation discarded — double JSON.parse

row.data is parsed and validated through SqlitePartDataSchema.safeParse, but then immediately re-parsed with a raw cast on line 512, completely discarding the validated result. This makes the Zod check a no-op and re-introduces the risk of malformed data crashing execution downstream.

Suggested change:
```ts
const partDataResult = SqlitePartDataSchema.safeParse(JSON.parse(row.data));
if (!partDataResult.success) continue;
const part = partDataResult.data as Record<string, unknown>;
```
Same pattern is repeated in extractToolDataFromJsonParts at line ~715.
```ts
const partResult = OpenCodePartSchema.safeParse(JSON.parse(partContent));
if (!partResult.success) continue;
// Cast to flexible type for tool data access
const part = JSON.parse(partContent) as Record<string, unknown>;
```
Same double JSON.parse in JSON path
Same bug as in the SQLite path at line 509–512. partContent is validated through OpenCodePartSchema.safeParse but then re-parsed with a raw cast. Use the validated result:
Suggested change:
```ts
const partResult = OpenCodePartSchema.safeParse(JSON.parse(partContent));
if (!partResult.success) continue;
const part = partResult.data as Record<string, unknown>;
```
```diff
-const trimmed = recentMessages.slice(-resolvedConfig.recentMessages);
+// Trim messages to configured limit
+const trimmed = allMessages.slice(-resolvedConfig.recentMessages);
```
trimMessages() not used — handoff may have zero user messages
This uses a raw .slice() instead of the required trimMessages() helper from parser-helpers.ts. If the last N messages are all assistant messages, the handoff document loses all user context. The trimMessages() helper guarantees at least one user message is included.
Suggested change:
```ts
const trimmed = trimMessages(allMessages, resolvedConfig.recentMessages);
```
Also requires adding the import: `import { extractRepoFromCwd, homeDir, trimMessages } from '../utils/parser-helpers.js';`
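What such a helper guarantees can be sketched as follows. This is a hypothetical implementation — the real trimMessages in parser-helpers.ts may differ — but it shows the at-least-one-user-message invariant the review is asking for:

```ts
// Keep the last `limit` messages, but never drop all user context:
// if the tail is all-assistant, pull the most recent user message back in.
interface Msg {
  role: 'user' | 'assistant';
  content: string;
}

function trimMessages(messages: Msg[], limit: number): Msg[] {
  const tail = messages.slice(-limit);
  if (tail.some((m) => m.role === 'user')) return tail;
  const lastUser = [...messages].reverse().find((m) => m.role === 'user');
  // Replace the oldest tail entry so the window size stays at `limit`.
  return lastUser ? [lastUser, ...tail.slice(1)] : tail;
}
```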
Problem
The OpenCode parser produced bare-bones handoffs — only plain text messages, no tool calls, no file modifications, no model info, no token usage. Starting in OpenCode and handing off to Codex (or any other tool) lost almost all context about what actually happened.
Before (old handoff):
After (this PR):
What changed
Rewrote `src/parsers/opencode.ts` to extract all available data from OpenCode's SQLite database and JSON storage.

New data extracted:
- Model and provider info (e.g. `gpt-5.3-codex`, `openai`)
- Full `apply_patch` diffs with file stats (+N -M)
- Todo extraction from the `todo` table

Technical approach:
- Same `SummaryCollector` pattern as the Claude and Codex parsers
- Structured sample types (`ShellSampleData`, `EditSampleData`, `ReadSampleData`, `GlobSampleData`) for category-aware rendering
- Fixed `z.record()` calls for Zod 4 (`z.record(key, value)` instead of `z.record(value)`)

Testing
Ran 5 messages in OpenCode (`opencode run`) with mixed tool calls:
- `glob` — find files
- `read` — read file contents
- `apply_patch` — create/edit files
- `bash` — run node to verify

Handoff to Codex via `continues inspect <session-id>` produces the full rich context shown above.

735 lines added, 119 removed in `src/parsers/opencode.ts`.

Review all of them with an eye for John Carmack-like simplicity and elegance, and apply each one only if required.
Greptile Summary
This PR rewrites `src/parsers/opencode.ts` to extract rich context from OpenCode sessions — tool calls (bash, glob, read, apply_patch, web_search), session metadata (model, token usage, cache, thinking tokens), todos, and reasoning highlights — producing the kind of handoff document that actually helps the receiving tool resume work intelligently. The core approach (two-pass SQLite extraction + `SummaryCollector`) fits the established pattern. There are three correctness issues that need fixes before merge:

1. `extractToolDataFromParts` and `extractToolDataFromJsonParts` validate raw data through `safeParse`, then immediately re-parse the same string with `as Record<string, unknown>`, discarding the validated result entirely. The Zod check becomes a gate that passes/fails but its output is thrown away.
2. `trimMessages()` (P1): `extractOpenCodeContext` calls `allMessages.slice(-N)` directly instead of `trimMessages()` from `parser-helpers.ts`. A session ending with a burst of assistant messages will produce a handoff with zero user context.
3. Dead code: `TOOL_NAME_MAP` (1-to-1 identity map, only used in the `default` branch), `trackPatchFiles` (defined but never called — its logic is duplicated inline in both `apply_patch` cases), an unused `description` variable in the bash handler, and an unused `withResult` import.

Confidence Score: 3/5
Important Files Changed
Flowchart
```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
  A[extractOpenCodeContext] --> B{hasSqliteDb?}
  B -- yes --> C[openDb]
  C --> D[extractToolDataFromParts]
  C --> E[extractSessionNotesFromSqlite]
  C --> F[extractTodos]
  C --> G[readConversationMessagesFromSqlite]
  D --> D1[patchParts loop - file tracking]
  D --> D2[partRows loop - tool switch]
  D2 --> D3{tool type}
  D3 -- bash --> D4[shellSummary + trackShellFileWrites]
  D3 -- glob --> D5[globSummary]
  D3 -- read --> D6[fileSummary]
  D3 -- apply_patch --> D7[countDiffStats + trackFile]
  D3 -- web_search --> D8[searchSummary]
  D3 -- web_fetch --> D9[fetchSummary]
  D3 -- default --> D10[mcpSummary]
  B -- no / empty --> H[readConversationMessagesFromJson]
  H --> I[extractToolDataFromJsonParts]
  H --> J[extractSessionNotesFromJson]
  G --> K{allMessages empty?}
  K -- yes --> H
  K -- no --> L[allMessages.slice - SHOULD USE trimMessages]
  L --> M[generateHandoffMarkdown]
  M --> N[SessionContext]
```