Commit d28a59f

chore: add Cursor workspace config, rules, skills, and product specs

1 parent 64ea378

32 files changed: +4325 −28 lines

.cursor/commands/create-prd.md

Lines changed: 60 additions & 0 deletions
# Generating a Product Requirements Document (PRD)

## Goal

To guide an AI assistant in creating a detailed Product Requirements Document (PRD) in Markdown format, based on an initial user prompt. The PRD should be clear, actionable, and suitable for a junior developer to understand and implement the feature.

## Process

1. **Receive Initial Prompt:** The user provides a brief description or request for a new feature or functionality.
2. **Ask Clarifying Questions:** Before writing the PRD, the AI _must_ ask clarifying questions to gather sufficient detail. The goal is to understand the "what" and "why" of the feature, not necessarily the "how" (which the developer will figure out).
3. **Generate PRD:** Based on the initial prompt and the user's answers to the clarifying questions, generate a PRD using the structure outlined below.
4. **Save PRD:** Save the generated document as `prd-[feature-name].md` inside the `/.cursor/product-specs/prd` directory.

## Clarifying Questions (Examples)

The AI should adapt its questions based on the prompt, but here are some common areas to explore:

- **Problem/Goal:** "What problem does this feature solve for the user?" or "What is the main goal we want to achieve with this feature?"
- **Target User:** "Who is the primary user of this feature?"
- **Core Functionality:** "Can you describe the key actions a user should be able to perform with this feature?"
- **User Stories:** "Could you provide a few user stories? (e.g., As a [type of user], I want to [perform an action] so that [benefit].)"
- **Acceptance Criteria:** "How will we know when this feature is successfully implemented? What are the key success criteria?"
- **Scope/Boundaries:** "Are there any specific things this feature _should not_ do (non-goals)?"
- **Data Requirements:** "What kind of data does this feature need to display or manipulate?"
- **Design/UI:** "Are there any existing design mockups or UI guidelines to follow?" or "Can you describe the desired look and feel?"
- **Edge Cases:** "Are there any potential edge cases or error conditions we should consider?"
- **Tech Stack:** "Does this feature require specific Frontend (e.g., Shadcn component) or Backend (e.g., Worker task) implementation?"

## PRD Structure

The generated PRD should include the following sections:

1. **Introduction/Overview:** Briefly describe the feature and the problem it solves. State the goal.
2. **Goals:** List the specific, measurable objectives for this feature.
3. **User Stories:** Detail the user narratives describing feature usage and benefits.
4. **Functional Requirements:** List the specific functionalities the feature must have. Use clear, concise language (e.g., "The system must allow users to upload a profile picture."). Number these requirements.
5. **Non-Goals (Out of Scope):** Clearly state what this feature will _not_ include to manage scope.
6. **Design Considerations (Optional):** Link to mockups, describe UI/UX requirements, or mention relevant components/styles if applicable.
7. **Technical Considerations (Optional):**
   - Mention any known technical constraints, dependencies, or suggestions.
   - **Consult**: `.cursor/dev-planning/architecture/tech-stack-decisions.md` for approved stack choices (e.g., Next.js 15, Pydantic v2).
   - **Consult**: `.cursor/rules/frontend-best-practices.mdc` if UI is involved.
8. **Success Metrics:** How will the success of this feature be measured? (e.g., "Increase user engagement by 10%", "Reduce support tickets related to X").
9. **Open Questions:** List any remaining questions or areas needing further clarification.
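One possible skeleton following the structure above (all section text is placeholder, to be filled from the user's answers):

```markdown
# PRD: [Feature Name]

## Introduction/Overview
[What the feature is, the problem it solves, and the goal.]

## Goals
1. [Specific, measurable objective]

## User Stories
- As a [type of user], I want to [perform an action] so that [benefit].

## Functional Requirements
1. The system must [specific functionality].

## Non-Goals (Out of Scope)
- [Explicitly excluded behaviour]

## Design Considerations (Optional)
[Mockup links, relevant components/styles.]

## Technical Considerations (Optional)
[Constraints, dependencies, approved stack choices.]

## Success Metrics
- [How success will be measured]

## Open Questions
- [Remaining question]
```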
## Target Audience

Assume the primary reader of the PRD is a **junior developer**. Therefore, requirements should be explicit, unambiguous, and avoid jargon where possible. Provide enough detail for them to understand the feature's purpose and core logic.

## Output

- **Format:** Markdown (`.md`)
- **Location:** `/.cursor/product-specs/prd/`
- **Filename:** `prd-[feature-name].md`

## Final Instructions

1. Do NOT start implementing the PRD.
2. Make sure to ask the user clarifying questions.
3. Take the user's answers to the clarifying questions and improve the PRD.

.cursor/commands/generate-tasks.md

Lines changed: 128 additions & 0 deletions
# Generating a Task List from a PRD

## Goal

To guide an AI assistant in creating a detailed, step-by-step task list in Markdown format based on an existing Product Requirements Document (PRD). The task list should guide a developer (or weaker AI model) through implementation.

## Output

- **Format:** Markdown (`.md`)
- **Location:** `/.cursor/dev-planning/tasks/`
- **Filename:** `tasks-[prd-file-name].md` (e.g., `tasks-prd-user-profile-editing.md`)
- **Template:** See [task-template.md](../dev-planning/templates/task-template.md) for detailed format

## Process

1. **Receive PRD Reference:** The user points the AI to a specific PRD file.
2. **Analyze PRD:** The AI reads and analyzes the functional requirements, user stories, and other sections of the specified PRD.
3. **Phase 1: Generate Parent Tasks:** Based on the PRD analysis, create the file and generate the main, high-level tasks required to implement the feature. Use your judgement on how many high-level tasks to create; about five is typical. Present these tasks to the user in the specified format (without sub-tasks yet). Inform the user: "I have generated the high-level tasks based on the PRD. Ready to generate the sub-tasks? Respond with 'LGTM' to proceed."
4. **Wait for Confirmation:** Pause and wait for the user to respond with "LGTM".
5. **Phase 2: Generate Sub-Tasks:** Once the user confirms, break down each parent task into smaller, actionable sub-tasks. **Use the detailed format** (see below) to ensure weaker AI models can execute correctly. For tasks that are part of a flow or have dependencies, add **Trigger/entry point**, **Enables**, and **Depends on** at the task level. Give each parent task its own **acceptance criteria** (verifiable, and specific to that task).
6. **Identify Relevant Files:** Based on the tasks and PRD, identify potential files that will need to be created or modified. List these under the `Relevant Files` section, including corresponding test files if applicable.
7. **Make dependencies explicit:** For any task that is part of a larger flow or has dependencies (user journey, pipeline step, API consumer, script that reads another task's output), add **Trigger/entry point**, **Enables**, and **Depends on** (see "Dependencies and integration" below). Ensure acceptance criteria belong to the task that delivers them—no AC from another task.
8. **Generate Final Output:** Combine the parent tasks, sub-tasks, relevant files, dependency notes, and acceptance criteria into the final Markdown structure.
9. **Post-generation checklist:** Before saving, verify: (a) tasks with dependencies have Trigger/Enables/Depends on where relevant; (b) each task has its own acceptance criteria and none describe another task's outcome; (c) integration points (where one task's output is another's input) are stated in sub-tasks or task notes.
10. **Save Task List:** Save the generated document in the `/.cursor/dev-planning/tasks/` directory with the filename `tasks-[prd-file-name].md`. For large PRDs (e.g. multi-sprint releases), organize output in a versioned folder (e.g. `v2.1.0/`) with a roadmap file (`tasks-v2.1.0-roadmap.md`) and per-sprint files (`sprint-E1-*.md`, etc.); the PRD's Related Documents should link to the roadmap.
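For the large-PRD case in step 10, the versioned folder might be laid out as follows (version number and sprint file names are illustrative):

```
.cursor/dev-planning/tasks/
└── v2.1.0/
    ├── tasks-v2.1.0-roadmap.md
    ├── sprint-E1-auth.md
    └── sprint-E2-dashboard.md
```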
## Output Format

The generated task list _must_ follow this structure:

```markdown
## Relevant Files

- `path/to/potential/file1.py` - Brief description of why this file is relevant.
- `tests/path/to/test_file1.py` - Unit tests for `file1.py`.

### Notes

- Unit tests should typically be placed in `tests/` mirroring the `src/` structure.
- Use `pytest tests/[path] -v` to run tests.

## Tasks

- [ ] 1.0 Parent Task Title
  - [ ] 1.1 [Sub-task description 1.1]
  - [ ] 1.2 [Sub-task description 1.2]
- [ ] 2.0 Parent Task Title
  - [ ] 2.1 [Sub-task description 2.1]
```

### Dependencies and integration (when applicable)

For any task that is part of a larger flow or has dependencies—whether a user journey, a pipeline step, an API consumer, or a script that reads another task's output—make the following explicit at the start of that task (or parent task):

- **Trigger / entry point:** What invokes or reaches this work (e.g. user action, cron job, webhook, call from another service, previous pipeline step).
- **Enables:** What this task unblocks for other tasks, services, or features (e.g. new API for a client, new field in a schema, next step in a workflow).
- **Depends on:** What must already exist before this task (other tasks, schema, endpoints, file format).

Use neutral wording so the same rules apply to backend, frontend, scripts, and infrastructure. When one task's output is another's input, describe the **integration** in the sub-tasks or task description (e.g. API contract, payload shape, file format, URL, or artifact).

Example of an explicit dependency block at the start of a task:

```markdown
## Task 2.3: Verification request form

**Trigger:** User clicks "Apply for Verified" on a listed agent card (Task 2.2).
**Enables:** Admins to process verification issues; dashboard to show Verified badge (Task 2.4) once schema is updated.
**Depends on:** Task 2.2 (dashboard with listed agents); Task 2.5 (schema) for persisting verification in registry.
```

### Acceptance criteria

- Each parent task must have **acceptance criteria** that are specific to that task and **verifiable** (command, observable behaviour, or clear done condition).
- No acceptance criterion may describe an outcome that is the responsibility of a different task. Check that AC are assigned to the task that actually delivers them.
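Continuing the Task 2.3 example, acceptance criteria scoped to that task might read (commands and behaviours are illustrative):

```markdown
**Acceptance criteria (Task 2.3):**

- Submitting the form persists a pending verification request, visible in the admin list.
- `pytest tests/verification/test_request_form.py -v` passes.
- (The Verified badge itself is Task 2.4's outcome, so it does not appear here.)
```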
## Detailed Sub-task Format (for weaker AI models)

When generating tasks that will be executed by less capable AI models, use this **detailed format** for each sub-task:

```markdown
- [ ] X.Y.Z [Action verb] [specific item]
  - **File**: `path/to/file.py` (create new | modify existing)
  - **What**: [Detailed description of what to create or modify]
  - **Why**: [Context - why this is needed, how it fits the bigger picture]
  - **Pattern**: [Reference to existing code to follow, e.g., "Follow src/asap/auth/oauth2.py"]
  - **Verify**: [How to confirm it works - test command or expected behavior]
```

When the result of this sub-task (or task) is consumed by another task, add an **Integration** line so the link is explicit:

- **Integration** (optional): [How this output is used elsewhere—e.g. "This endpoint is called by the dashboard (Task N) with query param `agent_id`"; "This script writes a file committed by the workflow in Task M"; "Schema consumed by TypeScript types in `apps/web`".]

### Example: Good vs Bad Sub-task

**Bad** (too vague):

```markdown
- [ ] 1.1 Add OAuth2 client
```

**Good** (explicit and contextual):

```markdown
- [ ] 1.1 Create OAuth2 client credentials class
  - **File**: `src/asap/auth/oauth2.py` (create new)
  - **What**: Create `OAuth2ClientCredentials` class with `get_access_token()` and `refresh_token()` methods
  - **Why**: Enables agent-to-agent authentication using client_credentials grant
  - **Pattern**: Use Authlib's AsyncOAuth2Client internally, expose ASAP-specific models (see ADR-12)
  - **Verify**: `pytest tests/auth/test_oauth2.py -k "test_get_token"` passes
```

## Interaction Model

The process explicitly requires a pause after generating parent tasks to get user confirmation ("LGTM") before proceeding to generate the detailed sub-tasks. This ensures the high-level plan aligns with user expectations before diving into details.

## Target Audience

Assume the primary reader of the task list is:

1. A **junior developer** who will implement the feature
2. A **weaker AI model** that needs explicit context and verification steps

Both require clear, unambiguous instructions with sufficient context to understand not just WHAT to do, but WHY.

## Related Templates

- **Task Template**: [task-template.md](../dev-planning/templates/task-template.md) - Full template with examples
- **PRD Template**: [create-prd.md](./create-prd.md) - How to create PRDs

.cursor/commands/remove-ai-slop.md

Lines changed: 17 additions & 0 deletions
# Remove AI code slop

Check the diff against main to purge AI-generated verbosity and restore Pythonic density.

This includes:

- **Redundant Docstrings:** Delete docstrings that merely restate the function signature (e.g., `"""Initializes the class."""` for `__init__`).
- **Exception Swallowing:** Remove over-cautious `try/except Exception: pass` or generic blocks that mask errors on internal, validated paths.
- **Un-Pythonic Logic:** Replace manual loops, heavy if/else ladders, or unnecessary `isinstance` checks with idiomatic comprehensions or duck typing.
- **Type Escapes:** Replace `Any` hints with concrete types and remove redundant `Optional` checks on variables the local context already guarantees.
- **Generic Naming:** Rename sterile AI variables (e.g., `result_list`, `data_dict`, `temp_var`) to specific domain terms from the codebase.
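A minimal before/after sketch of the kind of cleanup this command targets (function and field names are invented for illustration):

```python
# Before: manual loop, swallowed exception, sterile names.
def get_result_list(data_list):
    result_list = []
    for item in data_list:
        try:
            if item["active"] == True:
                result_list.append(item["id"])
        except Exception:
            pass
    return result_list


# After: comprehension, domain naming, errors surface on this validated path.
def active_user_ids(users):
    return [user["id"] for user in users if user["active"]]


users = [{"id": 1, "active": True}, {"id": 2, "active": False}]
print(active_user_ids(users))  # → [1]
```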
Report at the end with only a 1-3 sentence summary of what you changed.
