Quick Reference for AI Agents
The PRD skill generates Product Requirements Documents through an interactive Q&A workflow. This guide covers critical rules, workflow patterns, and quality standards.
The PRD skill creates structured requirements documents by:
- Asking 3-5 clarifying questions with lettered options
- Generating user stories with acceptance criteria
- Saving to `.ralph/PRD-N/prd.md` (auto-incremented)
Purpose: Define WHAT to build (not HOW) in a format suitable for implementation agents.
When Claude Code (or another AI agent) executes `ralph prd`, it invokes a nested agent (Claude, Codex, or Droid). Without `--headless`, both agents try to interact with the same TTY, causing:
- Deadlocks - Both agents waiting for input
- TTY conflicts - Overlapping I/O streams
- Process hangs - Commands never complete
- Unpredictable output - Garbled or missing responses
✅ Correct:

```bash
ralph prd "Feature description" --headless
```

❌ Incorrect:

```bash
ralph prd "Feature description"  # Missing --headless flag
```

Use `--headless` whenever:
- ✅ Claude Code agent calling ralph
- ✅ UI server triggering PRD generation
- ✅ CI/CD pipelines and automation scripts
- ✅ Background jobs and daemons
- ✅ Any context where stdin is not an interactive terminal
Note: Human users running `ralph prd` in a terminal do NOT need `--headless` (interactive mode is default).
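The rule above can be sketched as a small helper that appends the flag whenever stdin is not an interactive terminal. `build_prd_command` is a hypothetical name for illustration, not part of the ralph CLI:

```python
import sys

def build_prd_command(description, interactive=None):
    """Build a `ralph prd` invocation, adding --headless when stdin is
    not an interactive terminal (agents, CI, pipes, background jobs)."""
    if interactive is None:
        interactive = sys.stdin.isatty()
    cmd = ["ralph", "prd", description]
    if not interactive:
        cmd.append("--headless")
    return cmd

# An agent or CI caller is never attached to a TTY, so the flag is added:
print(build_prd_command("Add task search", interactive=False))
# → ['ralph', 'prd', 'Add task search', '--headless']
```

A human in a terminal (`interactive=True`) gets the plain command, matching the default interactive mode.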
Ask 3-5 essential questions with lettered options (A, B, C, D). Focus on:
- Problem/Goal: What problem does this solve?
- Core Functionality: What are the key actions?
- Scope/Boundaries: What should it NOT do?
- Success Criteria: How do we know it's done?
Example Format:
1. What is the primary goal of this feature?
A. Improve user onboarding experience
B. Increase user retention
C. Reduce support burden
D. Other: [please specify]
2. Who is the target user?
A. New users only
B. Existing users only
C. All users
D. Admin users only
This lets users respond with "1A, 2C" for quick iteration.
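A sketch of how such compact replies could be parsed; the `parse_answers` helper is illustrative, not part of the skill:

```python
import re

def parse_answers(reply):
    """Parse compact Q&A replies like '1A, 2C' into {question: option}."""
    answers = {}
    for num, letter in re.findall(r"(\d+)\s*([A-Da-d])", reply):
        answers[int(num)] = letter.upper()
    return answers

print(parse_answers("1A, 2C"))  # → {1: 'A', 2: 'C'}
```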
- Introduction/Overview - Brief description and problem statement
- Goals - Specific, measurable objectives
- User Stories - Sized appropriately (see below)
- Functional Requirements - Optional (only for cross-cutting concerns)
- Non-Goals - What's explicitly out of scope
- Success Metrics - How success is measured
- Context - Q&A trail and assumptions made
- Format: Markdown (`.md`)
- Location: `.ralph/PRD-N/prd.md` (provided automatically)
- Isolation: Each PRD in its own folder (PRD-1, PRD-2, etc.)
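The auto-increment could be determined roughly as follows; a sketch assuming folder names `PRD-1`, `PRD-2`, …, with `next_prd_dir` as a hypothetical helper:

```python
import re
from pathlib import Path

def next_prd_dir(root=".ralph"):
    """Return the next auto-incremented PRD folder, e.g. .ralph/PRD-3."""
    root = Path(root)
    used = [int(m.group(1)) for p in root.glob("PRD-*")
            if (m := re.match(r"PRD-(\d+)$", p.name))]
    return root / f"PRD-{max(used, default=0) + 1}"
```

With `PRD-1` and `PRD-2` already present, this yields `PRD-3`; in an empty workspace it starts at `PRD-1`.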
Each user story should be implementable in one focused session.
Heuristics:
- 3-5 acceptance criteria max per story (split if more)
- Single concern - one file or tightly coupled set of files
- ~100-200 lines of code typical upper bound
- No more than 2 integration points (e.g., API + database, not API + database + cache + queue)
- Independently testable - can verify without completing other stories
Split stories when:
- More than 5 acceptance criteria
- Multiple layers involved (backend + frontend + database)
- Multiple integration points (API + cache + queue + auth)
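A minimal lint over these heuristics might look like this; `sizing_issues` is an illustrative helper, not part of Ralph:

```python
def sizing_issues(criteria, integration_points):
    """Flag a user story that should be split per the sizing heuristics."""
    issues = []
    if len(criteria) > 5:
        issues.append("more than 5 acceptance criteria")
    if integration_points > 2:
        issues.append("more than 2 integration points")
    return issues

# A story with 6 criteria touching API + database + cache fails both checks:
print(sizing_issues(["c1", "c2", "c3", "c4", "c5", "c6"], integration_points=3))
```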
Story Format:
```markdown
### US-001: [Title]

**Description:** As a [user], I want [feature] so that [benefit].

**Acceptance Criteria:**
- [ ] Specific verifiable criterion
- [ ] Another criterion with example: <input> -> <expected output>
- [ ] Negative case: <bad input> -> <expected error>
- [ ] Canonical form (if URLs/IDs produced): <exact format>
- [ ] Typecheck/lint passes
- [ ] **[UI stories only]** Verify in browser using agent-browser
```

✅ GOOD (Specific & Verifiable):
- "Button shows confirmation dialog before deleting"
- "API returns 404 for non-existent resource"
- "Search results update within 200ms"
❌ BAD (Vague):
- "Works correctly"
- "User can search"
- "Handles errors properly"
Ralph works across Python, JavaScript, Go, Rust, Java, etc. PRDs should be:
- ✅ Tech-agnostic - No prescriptive stacks ("Use TypeScript interfaces" → "Define data types per project language")
- ✅ Auto-detect context - Agent reads project files (package.json, Cargo.toml, etc.)
- ✅ Project-specific commands - `npm test` for JS, `pytest` for Python, not generic "run tests"
- ✅ PRD = WHAT, Plan = HOW - Keep implementation details out of PRDs
Always include in acceptance criteria:
- Explicit examples: `creating task without priority -> defaults to 'medium'`
- Negative cases: `invalid priority 'urgent' -> validation error`
- Canonical forms: `URL format: /tasks/{uuid}` (when producing IDs/links)
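The priority example above could map to a validator like this sketch; note the valid set `{'low', 'medium', 'high'}` is an assumption beyond the document's two examples:

```python
VALID_PRIORITIES = {"low", "medium", "high"}  # assumed valid set

def normalize_priority(priority=None):
    """Apply the default and reject invalid values, per the criteria above."""
    if priority is None:
        return "medium"  # creating task without priority -> defaults to 'medium'
    if priority not in VALID_PRIORITIES:
        raise ValueError(f"invalid priority {priority!r}")  # e.g. 'urgent'
    return priority
```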
For any story with UI changes, ALWAYS include:
- [ ] Verify in browser using `agent-browser`

This ensures visual verification of frontend work.
Problem: User keeps adding requirements during Q&A.
Solution:
- Capture everything mentioned
- Prioritize into "v1" (this PRD) and "v2" (future PRD)
- Add v2 items to Non-Goals with note: "Planned for future iteration"
Problem: PRD specifies HOW instead of WHAT.
Examples:
- ❌ "Use Redux for state management"
- ❌ "Create a React component with hooks"
- ✅ "Store user preferences persistently"
- ✅ "Display user settings in editable form"
Solution: Keep PRDs focused on requirements and outcomes. Let the plan specify implementation.
Problem: "Run tests" without project-specific context.
Solution:
- Detect tech stack from project files
- Use specific commands: `npm test -- --testPathPattern=auth` (JS), `pytest tests/test_auth.py` (Python)
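Stack detection could be sketched as a marker-file lookup; the marker-to-command table below is an illustrative subset, not an exhaustive mapping:

```python
from pathlib import Path

# Marker file -> default test command (illustrative subset).
STACK_MARKERS = {
    "package.json": "npm test",
    "pyproject.toml": "pytest",
    "Cargo.toml": "cargo test",
    "go.mod": "go test ./...",
    "pom.xml": "mvn test",
}

def detect_test_command(project_dir="."):
    """Return a project-specific test command, or None if no marker found."""
    for marker, command in STACK_MARKERS.items():
        if (Path(project_dir) / marker).exists():
            return command
    return None
```

A project containing `package.json` resolves to `npm test`; an unrecognized project returns `None`, signaling that the agent should ask rather than guess.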
Problem: Future readers don't know why decisions were made.
Solution: Always append Context section with Q&A trail and assumptions:
```markdown
## Context

### Clarifying Questions & Answers
1. **What is the primary goal?** → B. Increase user retention
2. **Who is the target user?** → C. All users

### Assumptions Made
- Assumed existing auth system will be reused
- Assumed PostgreSQL database (based on existing schema)
```

See Also:
- Root Guide: /AGENTS.md - Core Ralph agent rules
- Full Skill Reference: SKILL.md - Complete PRD skill documentation
- CLAUDE.md: PRD Command Modes section - Interactive vs headless mode
- Agent Guide (Web): http://localhost:3000/docs/agent-guide.html - Interactive decision trees
Key Takeaways:
- ALWAYS use `--headless` flag when Claude Code agent executes `ralph prd`
- Ask 3-5 clarifying questions with lettered options for quick responses
- Size stories appropriately - 3-5 criteria, single concern, ~100-200 LOC
- Be tech-agnostic - PRD defines WHAT, plan defines HOW
- Include examples - Explicit examples, negative cases, canonical forms
- UI stories need browser verification - Add agent-browser criterion
- Document context - Append Q&A trail and assumptions to PRD