Description
Checklist
- I searched existing issues and this hasn't been reported
Area
Fullstack
Operating System
Windows
Version
develop @ 91bd240 (2026-01-09)
What happened?
The AI Planner agent generates `implementation_plan.json` with a non-standard schema that differs from what the backend code expects. This causes tasks to get stuck, because `get_next_subtask()` cannot find any pending subtasks.
Root Cause Analysis:
The `planner.md` prompt clearly defines the expected schema:

```json
{
  "phases": [{
    "id": "phase-1-backend",
    "name": "Backend API",
    "subtasks": [{
      "id": "subtask-1-1",
      "description": "Create data models",
      "status": "pending"
    }]
  }]
}
```

But the AI generates a different schema:
```json
{
  "phases": [{
    "phase_id": "1",          // ❌ Should be "id"
    "title": "Research",      // ❌ Should be "name"
    "status": "not_started",  // ❌ Should not exist at phase level
    "subtasks": [{
      "subtask_id": "1.1",    // ❌ Should be "id"
      "title": "Research...", // ❌ Should be "description"
      "status": "not_started" // ❌ Should be "pending"
    }]
  }]
}
```

Why this breaks the system:
- `apps/backend/core/progress.py:444` checks `subtask.get("status") == "pending"`:

```python
# Line 444 in get_next_subtask()
for subtask in phase.get("subtasks", []):
    if subtask.get("status") == "pending":  # ❌ "not_started" won't match!
        return {...}
```

- `apps/backend/spec/validate_pkg/schemas.py:54` defines the valid status values:

```python
"subtask_schema": {
    "required_fields": ["id", "description", "status"],  # Not subtask_id, title
    "status_values": ["pending", "in_progress", "completed", "blocked", "failed"],
    # ❌ "not_started" is NOT a valid status!
}
```

- The `ImplementationPlanValidator` exists, but validation is not enforced after the AI writes the file.
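Put together, a plan in the shape the AI emits never yields a pending subtask. A minimal standalone repro (this `get_next_subtask` is a simplified stand-in for the real function in `progress.py`, not the actual implementation):

```python
# Simplified stand-in for apps/backend/core/progress.py:get_next_subtask()
def get_next_subtask(plan):
    for phase in plan.get("phases", []):
        for subtask in phase.get("subtasks", []):
            if subtask.get("status") == "pending":
                return subtask
    return None

# Plan shaped the way the AI actually emits it
plan = {"phases": [{"phase_id": "1", "subtasks": [
    {"subtask_id": "1.1", "title": "Research", "status": "not_started"},
]}]}

print(get_next_subtask(plan))  # None -> the task appears stuck forever
```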
Related Issues:
- #881: Task is not transitioning on its own from Plan to Code (likely the same root cause)
- #842: AI generates invalid JSON syntax causing infinite validation loop (same category of AI output validation issue)
Steps to reproduce
- Create a new task with any description
- Start the task; it enters Planning mode
- Wait for the AI Planner to complete and write `implementation_plan.json`
- Check the generated JSON file: it will have `status: "not_started"` instead of `"pending"`
- The task gets stuck because `get_next_subtask()` returns `None`
- Manual workaround: stop the task, manually edit the JSON to change `not_started` → `pending`, then Resume (this can be scripted; see the sketch below)
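A minimal sketch of that workaround as a one-off script (hypothetical helper, not part of the repo; pass the path to the generated plan as the first argument):

```python
#!/usr/bin/env python3
# Hypothetical one-off workaround script (not part of the codebase):
# rewrites every "not_started" status in implementation_plan.json to "pending".
import json
import sys

def fix_statuses(node):
    """Recursively walk the plan and patch invalid status values in place."""
    if isinstance(node, dict):
        if node.get("status") == "not_started":
            node["status"] = "pending"
        for value in node.values():
            fix_statuses(value)
    elif isinstance(node, list):
        for item in node:
            fix_statuses(item)

path = sys.argv[1]  # e.g. path/to/implementation_plan.json
with open(path, encoding="utf-8") as f:
    plan = json.load(f)
fix_statuses(plan)
with open(path, "w", encoding="utf-8") as f:
    json.dump(plan, f, indent=2)
```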
Expected behavior
- AI should strictly follow the schema defined in the `planner.md` prompt
- System should validate AI output against `IMPLEMENTATION_PLAN_SCHEMA` after the Write tool saves the file (a standalone sketch of such a check follows this list)
- If validation fails, system should either:
  - Auto-fix with `auto_fix_plan()` (normalize field names and status values)
  - Or reject and ask the AI to regenerate
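For illustration, a standalone check against the subtask schema quoted above (this `check_plan` is a hypothetical helper, not the project's `ImplementationPlanValidator` or `IMPLEMENTATION_PLAN_SCHEMA`):

```python
import json

VALID_STATUSES = {"pending", "in_progress", "completed", "blocked", "failed"}
REQUIRED_SUBTASK_FIELDS = ("id", "description", "status")

def check_plan(path):
    """Return a list of schema violations found in an implementation plan."""
    with open(path, encoding="utf-8") as f:
        plan = json.load(f)
    errors = []
    for i, phase in enumerate(plan.get("phases", [])):
        for key in ("id", "name"):
            if key not in phase:
                errors.append(f"phases[{i}] missing '{key}'")
        for j, subtask in enumerate(phase.get("subtasks", [])):
            for field in REQUIRED_SUBTASK_FIELDS:
                if field not in subtask:
                    errors.append(f"phases[{i}].subtasks[{j}] missing '{field}'")
            status = subtask.get("status")
            if status not in VALID_STATUSES:
                errors.append(f"phases[{i}].subtasks[{j}] has invalid status {status!r}")
    return errors
```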
Logs / Screenshots
Actual JSON generated by AI:
```json
{
  "spec_id": "002-add-upstream-connection-test",
  "phases": [
    {
      "phase_id": "1",
      "title": "Research & Design",
      "status": "not_started",
      "subtasks": [
        {
          "subtask_id": "1.1",
          "title": "Research provider-specific test endpoints",
          "description": "Research lightweight API endpoints...",
          "status": "not_started",
          "files_to_modify": [],
          "notes": ""
        }
      ]
    }
  ]
}
```

Expected JSON per `planner.md`:
```json
{
  "feature": "Add Upstream Connection Test",
  "phases": [
    {
      "id": "phase-1-research",
      "name": "Research & Design",
      "subtasks": [
        {
          "id": "subtask-1-1",
          "description": "Research provider-specific test endpoints",
          "status": "pending",
          "files_to_modify": []
        }
      ]
    }
  ]
}
```

Suggested Fix Locations:
- Immediate fix (fail-fast): Add validation in `apps/backend/agents/coder.py:226` before `get_next_subtask()`:

```python
# Before entering the coder loop
from spec.validate_pkg.validators import ImplementationPlanValidator

validator = ImplementationPlanValidator(spec_dir)
result = validator.validate()
if not result.valid:
    raise ValueError(f"Invalid implementation plan: {result.errors}")
```

- Auto-fix enhancement: Extend `apps/backend/spec/validate_pkg/auto_fix.py` to normalize aliases (a possible normalization pass is sketched after this list):

```python
STATUS_ALIASES = {"not_started": "pending"}
FIELD_ALIASES = {
    "phase_id": "id",
    "subtask_id": "id",
    "title": "description",  # for subtasks
}
```

- Runtime tolerance (optional): Update `progress.py:444` to accept both:

```python
if subtask.get("status") in ("pending", "not_started"):
```