
Commit ca33432

jwaldrip and claude committed
feat(plugin): extract elaborate phases into fork subagent skills
Extract three autonomous phases from the elaborate skill into isolated context:fork subagent skills that communicate via brief files on disk:

- elaborate-discover: Phase 2.5 domain discovery & technical exploration
- elaborate-wireframes: Phase 6.25 frontend wireframe generation
- elaborate-ticket-sync: Phase 6.5+6.75 ticket sync & validation

Each subagent reads a YAML+markdown brief, does its work autonomously (no AskUserQuestion), and writes structured results to disk. The main elaborate skill serializes state into briefs, invokes via Skill(), and reads results back.

Additional changes to the main elaborate skill:

- Add per-unit review loop in Phase 6 (full markdown display + wireframe auto-open in browser + AskUserQuestion approval gate per unit)
- Simplify Phase 5.75 to high-level alignment check (detailed review moved to per-unit loop)
- Remove mockup_format setting — discovery mockups always ASCII, unit wireframes always HTML
- Add Skill to allowed-tools frontmatter

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
1 parent f190b56 commit ca33432

File tree

4 files changed: +1079 −532 lines changed
Lines changed: 314 additions & 0 deletions
@@ -0,0 +1,314 @@
---
description: (Internal) Autonomous domain discovery and technical exploration for AI-DLC elaboration
context: fork
agent: general-purpose
user-invocable: false
allowed-tools:
  - Read
  - Write
  - Glob
  - Grep
  - Bash
  - Agent
  - WebSearch
  - WebFetch
  - ToolSearch
  - ListMcpResourcesTool
  - ReadMcpResourceTool
  # MCP read-only tool patterns
  - "mcp__*__read*"
  - "mcp__*__get*"
  - "mcp__*__list*"
  - "mcp__*__search*"
  - "mcp__*__query*"
  - "mcp__*__ask*"
  - "mcp__*__resolve*"
  - "mcp__*__fetch*"
  - "mcp__*__lookup*"
  - "mcp__*__analyze*"
  - "mcp__*__describe*"
  - "mcp__*__explain*"
  - "mcp__*__memory"
---

# Elaborate: Domain Discovery

Autonomous domain discovery and technical exploration for AI-DLC elaboration. This skill runs as a forked subagent — it reads a brief file from disk, performs deep exploration, and writes results to disk.

**You have NO access to `AskUserQuestion`.** All work is fully autonomous. Persist findings to disk — the main elaboration skill will present results to the user.

---

## Step 1: Read Brief

Read the brief file passed as the first argument. The brief is at the path provided (e.g., `.ai-dlc/{intent-slug}/.briefs/elaborate-discover.md`).

Parse YAML frontmatter for structured inputs:

```yaml
intent_slug: my-feature
worktree_path: /path/to/.ai-dlc/worktrees/my-feature
project_maturity: established # greenfield | early | established
provider_config:
  design:
    type: figma
  spec:
    type: notion
  ticketing:
    type: jira
  comms:
    type: slack
```

The markdown body contains:
- Intent description (what the user wants to build)
- Clarification answers (Q&A from requirements gathering)
- Discovery file path (path to the initialized `discovery.md`)

**Change directory to the worktree** before any file operations:

```bash
cd "{worktree_path}"
```

---
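
A brief in the format above can be split mechanically. The following is a minimal stdlib-only sketch: a real implementation would hand the frontmatter to a proper YAML parser, and all names here are illustrative.

```python
def split_brief(text: str) -> tuple[str, str]:
    """Split a '---'-delimited YAML frontmatter block from the markdown body."""
    if text.startswith("---"):
        _, frontmatter, body = text.split("---", 2)
        return frontmatter.strip(), body.strip()
    return "", text.strip()

def top_level_fields(frontmatter: str) -> dict:
    """Read unindented 'key: value' lines; nested keys are left to a YAML parser."""
    fields = {}
    for line in frontmatter.splitlines():
        if line and not line[0].isspace() and ":" in line:
            key, _, value = line.partition(":")
            # Drop trailing '# ...' comments (good enough for a sketch)
            fields[key.strip()] = value.split("#")[0].strip()
    return fields

brief = """---
intent_slug: my-feature
project_maturity: established # greenfield | early | established
provider_config:
  design:
    type: figma
---
# Intent
Build the thing."""

frontmatter, body = split_brief(brief)
fields = top_level_fields(frontmatter)
```

The nested `provider_config` block is deliberately skipped here; parsing it properly is exactly why a YAML library is the right tool.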

## Step 2: Domain Discovery & Technical Exploration

**This phase is mandatory.** Deeply understand the technical landscape. Shallow understanding here causes builders to build the wrong thing.

### Greenfield Adaptation

Gate exploration based on `project_maturity` from the brief:

- **Greenfield** (`greenfield`):
  - **Skip** items 2 (Existing Codebases) and 5 (Existing Implementations) — there is no codebase to explore. Do NOT spawn Explore subagents for codebase research.
  - **Keep** items 1 (APIs/Schemas), 3 (Data Sources), 4 (Domain Model — from user input + external research), 6 (External Docs/Libraries), 7 (Providers).
  - Focus domain discovery on external research, API introspection, and user input rather than codebase analysis.
- **Early** (`early`):
  - Use `Glob` and `Read` directly instead of Explore subagents — the codebase is small enough to read directly without subagent overhead.
  - All items apply, but codebase exploration should be lightweight.
- **Established** (`established`):
  - Full exploration as described below. Use Explore subagents for deep codebase research.

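The gate above can be sketched as a small lookup. Item numbers refer to the "What to Explore" list, and the structure is illustrative, not part of the skill's actual interface.

```python
# Item numbers from "What to Explore"; names are for readability only.
ITEMS = {
    1: "APIs and Schemas", 2: "Existing Codebases", 3: "Data Sources",
    4: "Domain Model", 5: "Existing Implementations",
    6: "External Docs/Libraries", 7: "Configured Providers",
}

def plan_exploration(maturity: str) -> dict:
    if maturity == "greenfield":
        # No codebase: skip items 2 and 5, never spawn Explore subagents
        return {"items": [i for i in ITEMS if i not in (2, 5)],
                "explore_subagents": False}
    if maturity == "early":
        # Small codebase: read directly with Glob/Read instead of subagents
        return {"items": list(ITEMS), "explore_subagents": False}
    # Established: full exploration with Explore subagents
    return {"items": list(ITEMS), "explore_subagents": True}
```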
### What to Explore

Based on the intent description and clarification answers in the brief, identify every relevant technical surface and explore it thoroughly. Use ALL available research tools — codebase exploration, API introspection, web searches, and documentation fetching:

1. **APIs and Schemas**: If the intent involves an API, query it. Run introspection queries. Read the actual schema. Map every type, field, query, mutation, and subscription. Don't guess what data is available — verify it.

2. **Existing Codebases** *(skip for greenfield)*: If the intent builds on or integrates with existing code, explore it via Explore subagents (or `Glob`/`Read` for early-maturity projects). Have them find relevant files, read source code, and report back on existing patterns, conventions, and architecture.

3. **Data Sources**: If the intent involves data, understand where it lives. Query for real sample data. Understand what fields are populated, what's empty, what's missing. Identify gaps between what's available and what's needed.

4. **Domain Model**: From your exploration, build a domain model — the key entities, their relationships, and their lifecycle. This is not a database schema; it's a conceptual map of the problem space.

5. **Existing Implementations** *(skip for greenfield)*: If there are related features, similar tools, or reference implementations, read them. Understand what already exists so you don't build duplicates or miss integration points.

6. **External Documentation and Libraries**: Use `WebSearch` and `WebFetch` to research relevant libraries, frameworks, APIs, standards, or prior art. If the intent involves a third-party system, find its documentation and understand its capabilities. If the intent involves a design pattern or technique, research best practices and common pitfalls.

7. **Configured Providers**: If providers are configured in `provider_config`:
   - **Spec providers** (Notion, Confluence, Google Docs): Search for requirements docs, PRDs, or technical specs related to the intent
   - **Ticketing providers** (Jira, Linear): Search for existing tickets, epics, or stories that relate to or duplicate this work
   - **Design providers** (Figma, Sketch, Adobe XD): Delegate to design analysis subagents (see item 4 in "How to Explore" below) to avoid flooding your context with design data. **Important:** Designers often annotate mockups with callouts, arrows, measurement labels, sticky notes, and descriptive text that convey UX behavior or implementation details. These annotations are **guidance, not part of the design itself** — extract the guidance (interaction notes, spacing rules, state descriptions, edge cases) and incorporate it into findings, but do not treat annotation visuals as UI elements to build.
   - **Comms providers** (Slack, Teams): Search for relevant discussions or decisions in channels

   Use `ToolSearch` to discover available MCP tools matching provider types, then use read-only MCP tools for research.

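For item 1, a schema introspection round-trip might be summarized like this. The query string is a standard GraphQL introspection fragment; the endpoint and transport are deliberately omitted, and a canned response stands in for the real HTTP call.

```python
import json

# Standard GraphQL introspection fragment; how it reaches the API is up to
# the transport in use (HTTP, MCP tool, etc.).
INTROSPECTION_QUERY = "{ __schema { queryType { name } types { name kind } } }"

def schema_summary(response_json: str) -> dict:
    """Map type name -> kind, skipping GraphQL's built-in '__' types."""
    schema = json.loads(response_json)["data"]["__schema"]
    return {t["name"]: t["kind"]
            for t in schema["types"] if not t["name"].startswith("__")}

# Canned response standing in for a real round-trip:
canned = json.dumps({"data": {"__schema": {
    "queryType": {"name": "Query"},
    "types": [{"name": "Query", "kind": "OBJECT"},
              {"name": "User", "kind": "OBJECT"},
              {"name": "__Schema", "kind": "OBJECT"}]}}})
summary = schema_summary(canned)
```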

### How to Explore

Use every research tool available. Spawn multiple explorations in parallel for independent concerns:

1. **Subagents for deep codebase/API exploration** *(established projects only)*: Use `Agent` with `subagent_type: "Explore"` for multi-step research that requires reading many files, querying APIs, and synthesizing findings. **If greenfield: do NOT spawn Explore subagents for codebase research — there is no codebase to explore.** If early: use `Glob`/`Read` directly instead of Explore subagents.

   ```
   Agent({
     description: "Explore {specific system}",
     subagent_type: "Explore",
     prompt: "I need to deeply understand {system}. Read source code, query APIs, map the data model. Report back with: every entity and its fields, every query/endpoint available, sample data showing what's actually populated, and any gaps or limitations discovered."
   })
   ```

2. **MCP tools for domain knowledge**: Use `ToolSearch` to discover available MCP tools, then use read-only MCP tools for domain research. Examples:
   - Repository documentation (DeepWiki): `mcp__*__read_wiki*`, `mcp__*__ask_question`
   - Library docs (Context7): `mcp__*__resolve*`, `mcp__*__query*`
   - Project memory (han): `mcp__*__memory`
   - Any other MCP servers available in the environment
   - Provider MCP tools: If providers are configured, use their MCP tools for research (e.g., `mcp__*jira*__search*` for Jira tickets, `mcp__*notion*__search*` for Notion pages)

3. **Web research for external context**: Use `WebSearch` for library docs, design patterns, API references, prior art. Use `WebFetch` to read specific documentation pages.

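The wildcard patterns used for tool names here and in `allowed-tools` (e.g., `mcp__*__read*`) suggest glob-style matching against concrete MCP tool names. A sketch under that assumption:

```python
from fnmatch import fnmatch

# Assumption: patterns match glob-style, as with fnmatch. A subset of the
# read-only patterns from the frontmatter:
PATTERNS = ["mcp__*__read*", "mcp__*__search*", "mcp__*__memory"]

def is_read_only(tool_name: str) -> bool:
    """True if a concrete MCP tool name matches any read-only pattern."""
    return any(fnmatch(tool_name, p) for p in PATTERNS)
```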
4. **Design analysis subagents**: If a design provider is configured (`provider_config.design.type` is set), spawn a `general-purpose` subagent (NOT `Explore` — it needs MCP tool access via `ToolSearch`) for each design file:

   ```
   Agent({
     description: "Analyze design: {file name}",
     subagent_type: "general-purpose",
     prompt: "Analyze a design file for AI-DLC elaboration.

   ## Instructions
   1. Use ToolSearch to discover design MCP tools (e.g., 'figma', 'sketch', 'design')
   2. Use discovered tools to fetch design metadata, screenshots, and component trees
   3. Extract and return ONLY a structured summary:
      - Component hierarchy (parent/child tree of design elements)
      - Design tokens: colors (hex values), spacing values, typography (font families, sizes, weights)
      - Interactions and states (hover, active, disabled, error states)
      - Annotations and designer notes (text callouts, sticky notes, measurement labels)

   ## CRITICAL
   - Return structured text ONLY — no raw screenshots or binary data in your response
   - Focus on information builders need to implement the design accurately
   - Note any ambiguities or missing states that builders should ask about

   ## Design File
   {design file URL or identifier}"
   })
   ```

   Spawn one subagent per design file, in parallel with codebase Explore agents. When results return:
   - Append to `discovery.md` under `## Design Analysis: {file name}`
   - If no design MCP tools are discoverable, the subagent reports unavailability — log a warning and continue without design analysis

172+
173+
- **Designs exist** (item 4 returned design analysis): Translate the design analysis into mockups that demonstrate understanding of the designs. This is *verification*.
174+
- **No designs exist**: Generate mockups as *pre-build visual design*. This is where layout, information hierarchy, and interaction flow get decided.
175+
176+
Discovery mockups are always ASCII — they exist to facilitate discussion, not to serve as spec artifacts. Unit wireframes (Phase 6.25) handle the structured HTML spec.
177+
178+
#### Per-View Mockup Process
179+
180+
For each distinct screen or view identified in the domain model:
181+
- Create a mockup showing layout structure, key UI elements, and data placement
182+
- Annotate with interaction notes (what happens on click, hover, submit, error states)
183+
- Show which domain entities map to which UI regions
184+
- If working from designs: note where your interpretation might diverge from the source
185+
186+
Append each mockup to `discovery.md`:
187+
```
188+
## UI Mockup: {View Name}
189+
190+
**Source:** {design provider analysis | collaborative}
191+
192+
### Layout
193+
```
194+
{ASCII mockup}
195+
```
196+
197+
### Interactions
198+
- {element}: {behavior on click/hover/submit}
199+
- {element}: {error states, loading states}
200+
201+
### Data Mapping
202+
- {UI region} ← {domain entity}.{field}
203+
```
204+
205+
**Skip mockups only if:** the intent has no user-facing interface (pure backend, API, data pipeline, infrastructure, etc.).
206+
207+
**Spawn multiple research paths in parallel.** Don't serialize explorations that are independent — launch all of them at once and synthesize when results return.
208+
209+
If a VCS MCP is available (e.g., GitHub MCP), use it for code browsing alongside or instead of local file tools.
210+
211+
### Persist Findings to Discovery Log

After each significant finding (API schema mapped, codebase pattern identified, design analyzed, external research completed), **append a section to `discovery.md`**. This offloads detailed findings from context to disk, keeping your context window lean while preserving full details for builders.

**Use standardized section headers** so builders can quickly scan the file:

- `## API Schema: {name}` — For API introspection results (types, fields, queries, mutations)
- `## Codebase Pattern: {area}` — For architecture patterns discovered in existing code
- `## Design Analysis: {file}` — For design file findings (components, tokens, interactions)
- `## External Research: {topic}` — For web research, library docs, prior art
- `## Data Source: {name}` — For data source exploration (what's available, what's missing)
- `## Provider Context: {type}` — For ticketing, spec, or comms provider findings
- `## UI Mockup: {view}` — ASCII mockups of user-facing views with interaction notes and data mapping
- `## Architecture Decision: {topic}` — For greenfield/early projects: key architecture choices (frameworks, patterns, structure)
- `## Technology Choice: {name}` — For greenfield/early projects: technology selection rationale
- `## Reference Implementation: {name}` — For greenfield/early projects: external reference implementations or prior art informing the design

**After appending to `discovery.md`, keep only a brief summary in your context** — the full details are safely on disk and will be available to builders. This is the key benefit: your context stays lean for continued exploration while nothing is lost.

**CRITICAL**: Do not summarize or skip exploration. The exploration results directly determine whether the spec is accurate. If you explore a GraphQL API, report every type. If you read source code, report the actual architecture, not your guess about it.

---

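The append-only log pattern above can be sketched as follows; the helper names are illustrative, not part of the skill.

```python
from pathlib import Path

def format_section(kind: str, name: str, body: str) -> str:
    """Render a standardized '## {kind}: {name}' section for discovery.md."""
    return f"\n## {kind}: {name}\n\n{body.strip()}\n"

def append_finding(discovery: Path, kind: str, name: str, body: str) -> None:
    """Append a finding to the discovery log, keeping full detail on disk."""
    with discovery.open("a", encoding="utf-8") as f:
        f.write(format_section(kind, name, body))

section = format_section("API Schema", "Orders", "Types: Order, LineItem")
```

Appending (rather than rewriting) keeps each finding cheap to persist, which is what lets the subagent drop details from its context immediately.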
## Step 3: Build Domain Model

After all exploration is complete, synthesize your findings into a comprehensive domain model. This is the foundation that all units will build on.

Structure the domain model as:

### Entities
- **{Entity1}**: {description} — Fields: {field1}, {field2}, ...
- **{Entity2}**: {description} — Fields: ...

### Relationships
- {Entity1} has many {Entity2}
- {Entity2} belongs to {Entity3}

### Data Sources
- **{Source1}** ({type: GraphQL API / REST API / filesystem / etc.}):
  - Available: {what data can be queried}
  - Missing: {what data is NOT available from this source}
  - Real sample: {abbreviated real data showing what's populated}

### Data Gaps
- {description of any gap between what's needed and what's available}
- {proposed solution for each gap}

---

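For illustration, the template above maps onto in-memory shapes like these; the names mirror the template and are assumptions, not the skill's actual interface.

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    description: str
    fields: list[str] = field(default_factory=list)

@dataclass
class Relationship:
    subject: str
    kind: str      # e.g. "has many", "belongs to"
    object: str

@dataclass
class DomainModel:
    entities: list[Entity] = field(default_factory=list)
    relationships: list[Relationship] = field(default_factory=list)

model = DomainModel(
    entities=[Entity("Order", "A customer purchase", ["id", "total"])],
    relationships=[Relationship("Customer", "has many", "Order")],
)
```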

## Step 4: Write Results

Write the results file to `.ai-dlc/{intent-slug}/.briefs/elaborate-discover-results.md`:

```markdown
---
status: success
error_message: ""
---

# Discovery Results

## Domain Model Summary

### Entities
- **{Entity1}**: {description} — Fields: {field1}, {field2}, ...
- **{Entity2}**: {description} — Fields: ...

### Relationships
- {Entity1} has many {Entity2}
- {Entity2} belongs to {Entity3}

### Data Sources
- **{Source1}** ({type}):
  - Available: {what data can be queried}
  - Missing: {what data is NOT available}
  - Real sample: {abbreviated real data}

### Data Gaps
- {description of gap and proposed solution}

## Key Findings

- {Important finding 1}
- {Important finding 2}

## Open Questions

- {Question needing user validation 1}
- {Question needing user validation 2}

## Mockups Generated

- {path to mockup 1} — {description}
- {path to mockup 2} — {description}
```

---

## Error Handling

If any critical error occurs during exploration (e.g., worktree path doesn't exist, discovery.md not found):

1. Write the results file with `status: error` and `error_message` describing what went wrong
2. Include any partial findings that were gathered before the error
3. Exit — the main elaborate skill will read the error status and handle it
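
The success path (Step 4) and the error path share one results-file shape. A sketch, with illustrative function names:

```python
def render_results(body_md: str, error: str = "") -> str:
    """Render the results file: status frontmatter plus markdown body."""
    status = "error" if error else "success"
    return f'---\nstatus: {status}\nerror_message: "{error}"\n---\n\n{body_md}'

def write_results(path: str, body_md: str, error: str = "") -> None:
    with open(path, "w", encoding="utf-8") as f:
        f.write(render_results(body_md, error))

ok = render_results("# Discovery Results\n")
failed = render_results("# Partial findings\n", error="worktree path not found")
```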
