# Generating a Task List from a PRD

## Goal

To guide an AI assistant in creating a detailed, step-by-step task list in Markdown format based on an existing Product Requirements Document (PRD). The task list should guide a developer (or weaker AI model) through implementation.

## Output

- **Format:** Markdown (`.md`)
- **Location:** `/.cursor/dev-planning/tasks/`
- **Filename:** `tasks-[prd-file-name].md` (e.g., `tasks-prd-user-profile-editing.md`)
- **Template:** See [task-template.md](../dev-planning/templates/task-template.md) for the detailed format

## Process

1. **Receive PRD Reference:** The user points the AI to a specific PRD file.
2. **Analyze PRD:** The AI reads and analyzes the functional requirements, user stories, and other sections of the specified PRD.
3. **Phase 1: Generate Parent Tasks:** Based on the PRD analysis, create the file and generate the main, high-level tasks required to implement the feature. Use your judgement on how many high-level tasks to use; about five is typical. Present these tasks to the user in the specified format (without sub-tasks yet). Inform the user: "I have generated the high-level tasks based on the PRD. Ready to generate the sub-tasks? Respond with 'LGTM' to proceed."
4. **Wait for Confirmation:** Pause and wait for the user to respond with "LGTM".
5. **Phase 2: Generate Sub-Tasks:** Once the user confirms, break down each parent task into smaller, actionable sub-tasks. **Use the detailed format** (see below) to ensure weaker AI models can execute correctly. Give each parent task its own **acceptance criteria** (verifiable, and specific to that task).
6. **Identify Relevant Files:** Based on the tasks and PRD, identify potential files that will need to be created or modified. List these under the `Relevant Files` section, including corresponding test files if applicable.
7. **Make Dependencies Explicit:** For any task that is part of a larger flow or has dependencies (user journey, pipeline step, API consumer, script that reads another task's output), add **Trigger/entry point**, **Enables**, and **Depends on** (see "Dependencies and integration" below). Ensure acceptance criteria belong to the task that delivers them; no AC from another task.
8. **Generate Final Output:** Combine the parent tasks, sub-tasks, relevant files, dependency notes, and acceptance criteria into the final Markdown structure.
9. **Post-Generation Checklist:** Before saving, verify that: (a) tasks with dependencies have Trigger/Enables/Depends on where relevant; (b) each task has its own acceptance criteria and none describe another task's outcome; (c) integration points (where one task's output is another's input) are stated in sub-tasks or task notes.
10. **Save Task List:** Save the generated document in the `/.cursor/dev-planning/tasks/` directory with the filename `tasks-[prd-file-name].md`. For large PRDs (e.g., multi-sprint releases), organize the output in a versioned folder (e.g., `v2.1.0/`) with a roadmap file (`tasks-v2.1.0-roadmap.md`) and per-sprint files (`sprint-E1-*.md`, etc.); the PRD's Related Documents section should link to the roadmap.
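
For the versioned layout described in step 10, the resulting directory might look like this (the version number and sprint file names are illustrative):

```
.cursor/dev-planning/tasks/
└── v2.1.0/
    ├── tasks-v2.1.0-roadmap.md
    ├── sprint-E1-auth.md
    └── sprint-E2-dashboard.md
```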

## Output Format

The generated task list _must_ follow this structure:

```markdown
## Relevant Files

- `path/to/potential/file1.py` - Brief description of why this file is relevant.
- `tests/path/to/test_file1.py` - Unit tests for `file1.py`.

### Notes

- Unit tests should typically be placed in `tests/`, mirroring the `src/` structure.
- Use `pytest tests/[path] -v` to run tests.

## Tasks

- [ ] 1.0 Parent Task Title
  - [ ] 1.1 [Sub-task description 1.1]
  - [ ] 1.2 [Sub-task description 1.2]
- [ ] 2.0 Parent Task Title
  - [ ] 2.1 [Sub-task description 2.1]
```

### Dependencies and integration (when applicable)

For any task that is part of a larger flow or has dependencies (a user journey, a pipeline step, an API consumer, or a script that reads another task's output), make the following explicit at the start of that task (or parent task):

- **Trigger / entry point:** What invokes or reaches this work (e.g., a user action, cron job, webhook, call from another service, or previous pipeline step).
- **Enables:** What this task unblocks for other tasks, services, or features (e.g., a new API for a client, a new field in a schema, the next step in a workflow).
- **Depends on:** What must already exist before this task (other tasks, a schema, endpoints, a file format).

Use neutral wording so the same rules apply to backend, frontend, scripts, and infrastructure. When one task's output is another's input, describe the **integration** in the sub-tasks or task description (e.g., the API contract, payload shape, file format, URL, or artifact).

Example of an explicit dependency block at the start of a task:

```markdown
## Task 2.3: Verification request form

**Trigger:** User clicks "Apply for Verified" on a listed agent card (Task 2.2).
**Enables:** Admins to process verification requests; dashboard to show the Verified badge (Task 2.4) once the schema is updated.
**Depends on:** Task 2.2 (dashboard with listed agents); Task 2.5 (schema) for persisting verification in the registry.
```

### Acceptance criteria

- Each parent task must have **acceptance criteria** that are specific to that task and **verifiable** (a command, observable behaviour, or clear done condition).
- No acceptance criterion may describe an outcome that is the responsibility of a different task. Check that AC are assigned to the task that actually delivers them.
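
For instance, a parent task's acceptance criteria might look like the following (the task number, endpoint, and test paths are illustrative, not prescribed):

```markdown
**Acceptance criteria (Task 1.0):**

- [ ] `pytest tests/auth/ -v` passes with no failures
- [ ] A POST to `/auth/token` with valid client credentials returns 200 and an `access_token`
- [ ] Invalid credentials return 401 (covered by `tests/auth/test_oauth2.py`)
```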

## Detailed Sub-task Format (for weaker AI models)

When generating tasks that will be executed by less capable AI models, use this **detailed format** for each sub-task:

```markdown
- [ ] X.Y.Z [Action verb] [specific item]
  - **File**: `path/to/file.py` (create new | modify existing)
  - **What**: [Detailed description of what to create or modify]
  - **Why**: [Context - why this is needed, how it fits the bigger picture]
  - **Pattern**: [Reference to existing code to follow, e.g., "Follow src/asap/auth/oauth2.py"]
  - **Verify**: [How to confirm it works - test command or expected behavior]
```

When the result of this sub-task (or task) is consumed by another task, add an **Integration** line so the link is explicit:

- **Integration** (optional): [How this output is used elsewhere, e.g., "This endpoint is called by the dashboard (Task N) with query param `agent_id`"; "This script writes a file committed by the workflow in Task M"; "Schema consumed by TypeScript types in `apps/web`".]
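
A sub-task with an Integration line might look like this (the task numbers, file path, and endpoint are illustrative):

```markdown
- [ ] 3.2 Add GET endpoint for agent verification status
  - **File**: `src/asap/api/status.py` (create new)
  - **What**: Return the agent's verification status as JSON
  - **Verify**: `pytest tests/api/test_status.py -v` passes
  - **Integration**: This endpoint is polled by the dashboard status widget (Task 2.4)
```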

### Example: Good vs Bad Sub-task

❌ **Bad** (too vague):

```markdown
- [ ] 1.1 Add OAuth2 client
```

✅ **Good** (explicit and contextual):

```markdown
- [ ] 1.1 Create OAuth2 client credentials class
  - **File**: `src/asap/auth/oauth2.py` (create new)
  - **What**: Create `OAuth2ClientCredentials` class with `get_access_token()` and `refresh_token()` methods
  - **Why**: Enables agent-to-agent authentication using the client_credentials grant
  - **Pattern**: Use Authlib's AsyncOAuth2Client internally, expose ASAP-specific models (see ADR-12)
  - **Verify**: `pytest tests/auth/test_oauth2.py -k "test_get_token"` passes
```

## Interaction Model

The process explicitly requires a pause after generating parent tasks to get user confirmation ("LGTM") before proceeding to generate the detailed sub-tasks. This ensures the high-level plan aligns with user expectations before diving into details.

## Target Audience

Assume the primary reader of the task list is:

1. A **junior developer** who will implement the feature
2. A **weaker AI model** that needs explicit context and verification steps

Both require clear, unambiguous instructions with sufficient context to understand not just WHAT to do, but WHY.

## Related Templates

- **Task Template**: [task-template.md](../dev-planning/templates/task-template.md) - Full template with examples
- **PRD Template**: [create-prd.md](./create-prd.md) - How to create PRDs