feat(ai): AI backbone + inline chat for code review #363

Merged
backnotprop merged 21 commits into main from feat/ai-backbone on Mar 23, 2026

Conversation


backnotprop (Owner) commented Mar 21, 2026

Summary

  • Introduces @plannotator/ai — a pluggable provider layer powering AI features across all Plannotator surfaces
  • Two providers: Claude Agent SDK (fork, resume, streaming, permission approval) and OpenAI Codex SDK (streaming, reasoning effort control)
  • Full code review integration: inline "Ask AI" on selected lines, general questions, file-scoped Q&A
  • Provider and model selection via Settings tab and inline config bar with per-provider persistence

What the user sees

  • AI tab in the code review sidebar with file-grouped Q&A, streaming responses, and markdown rendering
  • "Ask AI" button on line selections in the diff viewer — scoped questions about specific code
  • General input at the bottom of the AI tab for questions about the overall changes (auto-grows on multi-line)
  • Config bar above the input showing active provider + model with custom dropdown menus (provider icons, checkmarks)
  • Provider/model settings in the Settings dialog AI tab, persisted per-provider via cookies
  • Reasoning effort control for Codex (Low/Medium/High/Max)
  • Permission cards for Claude tool approvals (inline approve/deny UI)
  • Streaming indicators — cursor animation during responses, disabled input while streaming

Architecture

packages/ai/
├── types.ts                        # AIProvider, AISession, AIMessage, AIContext
├── context.ts                      # System prompt builders for plan/review/annotate modes
├── provider.ts                     # ProviderRegistry + factory pattern
├── session-manager.ts              # In-memory session tracking, eviction, ID aliasing
├── endpoints.ts                    # SSE-streaming HTTP handlers (provider-agnostic)
├── providers/
│   ├── claude-agent-sdk.ts         # Claude provider (fork, resume, permission mode)
│   └── codex-sdk.ts                # Codex provider (thread-based, reasoning effort)
└── ai.test.ts                      # 54 unit tests

packages/review-editor/
├── hooks/useAIChat.ts              # React hook: session lifecycle, streaming, state
├── components/
│   ├── AITab.tsx                    # Sidebar tab with Q&A pairs, file grouping
│   ├── AIConfigBar.tsx             # Provider/model/effort selector (custom dropdowns)
│   ├── AskAIInput.tsx              # Inline input on diff line selections
│   ├── InlineAIMarker.tsx          # Visual marker on diff lines with AI questions
│   ├── PermissionCard.tsx          # Approve/deny UI for tool permission requests
│   └── SparklesIcon.tsx            # AI tab icon
└── utils/renderMarkdown.tsx        # Safe markdown rendering for AI responses

packages/server/review.ts           # Mounts AI endpoints, worktree-aware cwd resolution
packages/ui/components/
├── ProviderIcons.tsx               # Claude SVG + Codex PNG icons
└── AISettingsTab.tsx               # Settings dialog AI tab with model dropdowns
packages/ui/utils/aiProvider.ts     # Cookie-based provider/model preference storage
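As a rough illustration of the shapes this layout implies, here is a minimal sketch of the provider layer (names are illustrative; the real types.ts and provider.ts differ in detail):

```typescript
// Illustrative sketch of the provider layer's core shapes; the actual
// contents of packages/ai/types.ts and provider.ts may differ.
interface AIMessage {
  type: "text" | "tool_use" | "result";
  text?: string;
}

interface AISession {
  id: string;
  query(prompt: string): AsyncIterable<AIMessage>;
}

interface AIProvider {
  id: string;
  createSession(opts: { cwd?: string; model?: string }): AISession;
}

// Class-based registry: each server instance constructs its own, so no
// mutable provider state is shared between servers.
class ProviderRegistry {
  private providers = new Map<string, AIProvider>();

  register(p: AIProvider): void {
    this.providers.set(p.id, p);
  }

  get(id: string): AIProvider | undefined {
    return this.providers.get(id);
  }

  // Returns both the instance ID and the provider, so callers can report
  // the correct key in multi-instance setups.
  getDefault(): { id: string; provider: AIProvider } | undefined {
    for (const [id, provider] of this.providers) return { id, provider };
    return undefined;
  }
}
```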

Key design decisions

  • ProviderRegistry is a class — no shared mutable state between servers
  • Session ID aliasing — SDK resolves real IDs after first query; alias map keeps client-facing ID stable
  • Worktree-aware cwd — AI sessions resolve working directory from current diff type (worktree path, gitContext.cwd, or process.cwd)
  • Per-query AbortController — abort doesn't poison the session for future queries
  • Read-only by default — inline chat limited to Read, Glob, Grep, WebSearch tools
  • Graceful provider degradation — SDK imports wrapped in try/catch; providers only advertised if SDK is available
  • Cookie-based settings — per-provider model preferences stored as cookies (not localStorage) since each invocation runs on a random port
  • Input blocking while streaming — prevents concurrent queries instead of complex server-side abort
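The session-ID aliasing decision can be illustrated with a small sketch (hypothetical names; the real session-manager.ts is more involved):

```typescript
// Hypothetical sketch of session-ID aliasing. The client gets a stable
// ID immediately; the SDK's real ID only becomes known after the first
// query, and is recorded behind the scenes.
class SessionAliasMap {
  private toSdkId = new Map<string, string>();

  // On session creation: alias the client ID to itself until resolved.
  register(clientId: string): void {
    this.toSdkId.set(clientId, clientId);
  }

  // Once the SDK reports its real session ID, update the mapping.
  resolve(clientId: string, sdkId: string): void {
    this.toSdkId.set(clientId, sdkId);
  }

  // Every later query translates the stable client ID to the real one.
  lookup(clientId: string): string | undefined {
    return this.toSdkId.get(clientId);
  }
}
```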

Dependencies added

  • @anthropic-ai/claude-agent-sdk ^0.2.81
  • @openai/codex-sdk ^0.116.0

Test plan

  • 54 unit tests passing (bun test packages/ai/)
  • Full build chain: apps/review build → build:hook → compiled binary
  • Manual: select lines → Ask AI → streaming response appears in sidebar
  • Manual: switch provider/model in config bar → "New chat session" flash → next question uses new config
  • Manual: Codex reasoning effort selector appears only when Codex is selected
  • Integration test with live Claude Agent SDK

Commits

feat(ai): add provider-agnostic AI backbone for inline chat

Introduces @plannotator/ai — a pluggable provider layer that powers
AI features (inline chat, plan Q&A, code review assistance) across
all Plannotator surfaces.

- ProviderRegistry: class-based, multi-instance provider management
- SessionManager: in-memory session tracking with eviction, ID aliasing
- Context builders: system prompts and fork preambles for all three modes
- SSE streaming endpoints: session create/fork, query, abort, list
- Claude Agent SDK provider: fork from parent sessions, read-only tools,
  per-query abort, cached SDK import, concurrent query guard
- 34 passing tests covering all layers
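As a rough illustration of the SSE framing such streaming endpoints typically emit (not necessarily the exact wire format used here):

```typescript
// Minimal SSE frame encoder: each JSON payload becomes one `data: ...`
// event, terminated by a blank line per the SSE event-stream format.
function sseFrame(payload: unknown): string {
  return `data: ${JSON.stringify(payload)}\n\n`;
}
```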

For provenance purposes, this commit was AI assisted.

fix(ai): address code review findings from PR #363

- Concurrent query race: reject with session_busy error instead of
  racing; add query generation counter so stale finally blocks don't
  clobber newer query state
- Text dropped before tool_use: mapSDKMessage now returns AIMessage[]
  and flushes accumulated text before each tool_use block
- defaultProvider returns instance ID: ProviderRegistry.getDefault()
  returns { id, provider } so capabilities endpoint sends the correct
  key for multi-instance setups
- Fork preamble re-sent on error: use _firstQuerySent boolean instead
  of _resolvedId to guard preamble injection
- Dead abortController params: remove unused abortController from
  CreateSessionOptions and AIProvider.resumeSession
- Unused context in memory: remove context from SessionConfig and
  baseConfig — already consumed to build systemPrompt/forkPreamble
- Fix tool_result mapping: SDK wraps tool results in SDKUserMessage
  (type "user"), not a "tool_result" type

For provenance purposes, this commit was AI assisted.

feat(ai): inline AI chat on code review diffs

Add "Ask AI" to the code review toolbar — select lines, ask questions
about agent-written code, get streaming responses. Conversation persists
across file switches with inline markers on the diff showing where
questions were asked.

- useAIChat hook: SSE streaming, lazy session creation, multi-turn
- AskAIInput: question input with per-selection AI history
- InlineAIMarker: compact sparkle markers on diff lines via @pierre
- AITab: sidebar conversation grouped by file with collapsible sections
- SparklesIcon: shared 3-star sparkle from automations branch
- ReviewPanel: AI tab with message count badge, file group flash highlight
- Server: mount AI endpoints in review server with graceful fallback
- Support three question scopes: line, file, and general
- Toolbar ergonomics: typed text submits as AI question, no double popover

For provenance purposes, this commit was AI assisted.

fix(ai): toolbar popover ergonomics and visual consistency

- X button closes entire toolbar in both modes (no ghost popover)
- Comment/Ask AI toggle is inline in the action row, same position
  in both modes (left side, mirrored layout)
- AI history appears below the input, not above
- File group flash highlight uses --muted to match file tree hover

For provenance purposes, this commit was AI assisted.

Resolves import conflict in packages/server/review.ts — main renamed
getGhUser to getUser and added GitLab MR helpers (prRefFromMetadata,
getDisplayRepo, getMRLabel, getMRNumberLabel). Kept both main's updated
PR imports and our AI endpoint import.

For provenance purposes, this commit was AI assisted.

fix(ai): visual consistency, shared primitives, markdown rendering

Visual consistency:
- AI tab file group headers match FileTree pattern (sizing, hover, badges)
- Q&A bubbles match annotation card padding and border-radius
- Empty state uses circle-bg icon (same as annotations)
- General input matches search input styling
- Removed redundant CSS classes, moved to Tailwind

Shared primitives:
- Extract formatTimestamp to shared utility (was duplicated)
- CountBadge component for consistent badge styling
- CopyButton component with hover-reveal + "Copied" flash

Affordances:
- Copy button on AI responses (hover-reveal)
- Send button on general input with keyboard shortcut hint
- Timestamps on Q&A pairs
- Keyboard shortcut title on Ask button

Markdown rendering:
- Add marked dependency for full block-level markdown in AI responses
- renderMarkdown utility using marked + DOMPurify
- Scoped .ai-markdown CSS for headings, lists, code blocks, blockquotes

For provenance purposes, this commit was AI assisted.

feat(ai): permission inheritance + inline approval UI

The inline AI agent now inherits the user's existing Claude Code
permission rules via settingSources: ['user', 'project']. Tools
already approved in ~/.claude/settings.json auto-approve silently.

For tools not covered by existing rules, the SDK emits control_request
messages. These are mapped to a new AIPermissionRequestMessage type and
surfaced as inline approval cards in the AI tab — showing the tool name,
command/input, and Allow/Deny buttons.

Backend:
- Map control_request to AIPermissionRequestMessage in provider
- Store Query object on session for sending control_response back
- respondToPermission() method on AISession sends allow/deny
- POST /api/ai/permission endpoint for frontend → backend decisions
- Add settingSources to ClaudeSDKQueryOptions

Frontend:
- PermissionCard component with Allow/Deny buttons + resolved state
- useAIChat tracks permissionRequests array
- respondToPermission() POSTs decision to server
- AITab renders pending permission cards inline in conversation

For provenance purposes, this commit was AI assisted.

fix(ai): streaming reactivity, maxTurns, popover→sidebar linking

Streaming fixes:
- State updates now create new object references via prev.map()
  instead of mutating objects inside the array — fixes React not
  re-rendering the sidebar during streaming
- Complete assistant text messages only used as fallback when no
  streaming deltas received (prevents duplicate text)
- Result text used as fallback when response is empty (agent spent
  all turns on tools)
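The re-render fix boils down to replacing objects instead of mutating them, roughly:

```typescript
interface ChatMessage {
  id: string;
  text: string;
}

// Append a streaming delta immutably: prev.map() returns a new array,
// and spreading the matched message returns a new object, so React's
// reference equality check sees the update and re-renders.
function applyDelta(prev: ChatMessage[], id: string, delta: string): ChatMessage[] {
  return prev.map(m => (m.id === id ? { ...m, text: m.text + delta } : m));
}
```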

Config:
- maxTurns raised to 99 (was 3 — agent ran out of turns reading
  files and never got to write its answer)

Popover → sidebar linking:
- Clicking AI history items in comment popover now passes questionId
  through to handleViewAIResponse, triggering the file group flash
  highlight in the sidebar (same behavior as clicking inline markers)

Debug logging (temporary):
- Server-side: [AI STREAM] logs every message the provider yields
- Client-side: [AI SSE] logs every message received in the browser

For provenance purposes, this commit was AI assisted.

refactor(ai): simplification pass — types, perf, cleanup

From the three-agent code review analysis:

Quality:
- Rename AIMessage → AIChatEntry in review-editor to avoid collision
  with the wire-protocol AIMessage from packages/ai
- Type permissionMode as union (not string) in SessionConfig + QueryOptions
- Type _activeQuery with narrow interface instead of any
- Make toolUseId optional on AIToolResultMessage (SDK doesn't populate it)
- Remove redundant sessionIdRef double-write in useAIChat
- Fix handleAskGeneral useCallback dep: [aiChat] → [aiChat.ask]

Performance:
- Wrap QAPair in React.memo to prevent re-renders on sibling updates
- Memoize renderMarkdown output via useMemo (avoid re-parsing on every token)
- Fix auto-scroll useEffect: depend on messages.length not messages ref
  (prevents querySelectorAll on every streaming token)

Cleanup:
- Remove debug console.log from SSE endpoint and browser parser
- Extract isMac/submitHint to shared utils/platform.ts
- Extract generateId to shared utils/generateId.ts

Net: -2 lines. No behavior changes.

For provenance purposes, this commit was AI assisted.

chore: biome lint fixes — import types, button a11y, unused import

- Use `import type` where only types are consumed (endpoints, CountBadge,
  CopyButton, renderMarkdown)
- Remove unused AIContext import from session-manager
- Template literal instead of string concat in context.ts truncate()
- Add type="button" + aria-hidden on CopyButton SVGs

For provenance purposes, this commit was AI assisted.

feat(ai): Codex SDK provider + multi-provider backend

- Add CodexSDKProvider implementing AIProvider interface
  - Maps Codex ThreadEvents to AIMessage types
  - Delta tracking for streaming text from cumulative item.updated events
  - Supports resume, abort, sandbox modes
  - Factory self-registration pattern matching Claude provider
- Enhance capabilities endpoint with provider models + metadata
- Register both providers in review server (graceful degradation)
- Add CodexSDKConfig type, models list (GPT-5.4 family)
- Add reasoningEffort to CreateSessionOptions (Codex only)
- Extract DiffFile interface to shared types.ts
- Update CLAUDE.md with build order documentation
- 54 tests passing (20+ new for Codex event mapping)
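The delta tracking for cumulative item.updated events can be sketched as follows (hypothetical helper, not the provider's exact code):

```typescript
// Codex item.updated events carry the full text accumulated so far; to
// stream, remember how much of each item has already been emitted and
// forward only the new suffix.
function takeDelta(seen: Map<string, number>, itemId: string, fullText: string): string {
  const emitted = seen.get(itemId) ?? 0;
  seen.set(itemId, fullText.length);
  return fullText.slice(emitted);
}
```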

For provenance purposes, this commit was AI assisted.

feat(ai): provider settings UI + inline config bar

- Settings → AI tab: provider cards with per-provider model dropdowns
- Inline config bar in AI chat sidebar (provider, model, reasoning effort)
- Provider icons: Claude SVG + Codex PNG from marketing assets
- Cookie-based persistence for provider + per-provider model preferences
- useAIChat accepts providerId, model, reasoningEffort from props
- Session reset on provider/model switch (next question starts fresh)
- Reasoning effort selector (Codex only: Low/Medium/High/Max)
- Update code review docs with AI provider documentation

For provenance purposes, this commit was AI assisted.

fix(ai): review fixes — reasoning props, input blocking, cwd resolution, codex-sdk dep

- Add missing selectedReasoningEffort + onReasoningEffortChange props to
  populated-state AIConfigBar (was only on empty-state, causing crash)
- Disable GeneralInput while streaming to prevent concurrent queries
- Thread cwd through CreateSessionOptions so AI sessions use the correct
  working directory (worktree-aware via getCwd resolver)
- Add @openai/codex-sdk as a real dependency (was only an optional peer)
- Eagerly verify codex-sdk importability at startup to avoid advertising
  a broken provider
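The worktree-aware cwd resolution amounts to a small fallback chain, roughly (illustrative names):

```typescript
// Resolve the AI session's working directory from the current diff
// source: prefer the worktree path, then the git context's cwd, then
// the server process's own working directory.
interface DiffSource {
  worktreePath?: string;
  gitCwd?: string;
}

function resolveCwd(src: DiffSource): string {
  return src.worktreePath ?? src.gitCwd ?? process.cwd();
}
```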

For provenance purposes, this commit was AI assisted.

fix(ai): polish config bar, input, and sidebar header

- Replace native <select> dropdowns with custom popover menus that show
  provider icons, model names, and checkmarks on active selection
- Remove disabled gear placeholder button
- Auto-grow textarea upward on multi-line input (capped at 6 lines)
- Push tab icons to far right in sidebar header (w-full on flex container)
- Add centered divider between file-scoped and general questions

For provenance purposes, this commit was AI assisted.

backnotprop changed the title from "feat(ai): provider-agnostic AI backbone for inline chat" to "feat(ai): AI backbone + inline chat for code review" on Mar 23, 2026.

fix(ai): honest fork capability + permissionMode doc accuracy

- Endpoint now checks capabilities.fork before calling forkSession(),
  falling back to createSession() for providers that can't truly fork
- Codex forkSession() now throws as the interface contract requires
  (was silently faking a fork with no history — misleading UX)
- Fixed permissionMode JSDoc: said "plan" but code defaults to "default"
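The capability check is effectively a small guard, sketched here with hypothetical names:

```typescript
interface ForkCapable {
  capabilities: { fork: boolean };
}

// Honest fork: only call forkSession() when the provider advertises
// real fork support; otherwise fall back to a fresh session.
function startSession<T>(
  provider: ForkCapable,
  forkSession: () => T,
  createSession: () => T,
): T {
  return provider.capabilities.fork ? forkSession() : createSession();
}
```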

For provenance purposes, this commit was AI assisted.

fix(review): move search result count inside input field

Count now appears inline to the left of the X button instead of
squeezing the input from outside. Dynamic right padding accommodates
the count + clear button when a query is active.

For provenance purposes, this commit was AI assisted.

…d/ listing

For provenance purposes, this commit was AI assisted.

fix(ai): use claude_code preset, pass selected code to queries

- Claude provider now uses { preset: "claude_code", append } instead of
  a raw system prompt, so inline chat inherits Claude Code's full
  built-in capabilities
- Stripped custom role/guidelines preamble from context builders — just
  emit the review context (diff, plan, document)
- handleAskAI now extracts selected code via extractLinesFromPatch and
  sends it with the query, fixing hallucinations on large PRs where the
  file's diff was truncated out of the system prompt

For provenance purposes, this commit was AI assisted.

fix(ai): bundle both SDKs, remove peer dependencies

Both @anthropic-ai/claude-agent-sdk and @openai/codex-sdk are now
direct imports (no string indirection) so bun build --compile bundles
them into the binary. Users don't need to install SDKs separately.

Removed peer dependency declarations from packages/ai/package.json —
meaningless for a compiled binary.

For provenance purposes, this commit was AI assisted.

refactor: canonical platform.ts, file renames, file-level comments

- Created packages/ui/utils/platform.ts as the single source for
  isMac, modKey, altKey, submitHint, isWindows
- Consolidated inline navigator.platform checks from 6 files
- Renamed renderMarkdown → renderChatMarkdown (distinct from plan editor)
- Renamed formatTimestamp → formatRelativeTime (clearer purpose)
- Added file-level comments to InlineAIMarker, PermissionCard, generateId

For provenance purposes, this commit was AI assisted.

docs(ai): add AI features guide + link from settings

New docs page at guides/ai-features covering providers, models,
configuration, session behavior, permissions, and available settings.

AISettingsTab now links to the docs in both empty and populated states.

For provenance purposes, this commit was AI assisted.

backnotprop merged commit 4b09719 into main on Mar 23, 2026
5 checks passed
stk-code pushed a commit to stk-code/plannotator that referenced this pull request Mar 31, 2026