Bug: AI Planner generates non-standard schema (not_started instead of pending), causing tasks to get stuck #884

@g1331

Description

Checklist

  • I searched existing issues and this hasn't been reported

Area

Fullstack

Operating System

Windows

Version

develop @ 91bd240 (2026-01-09)

What happened?

The AI Planner agent generates implementation_plan.json with a non-standard schema that differs from what the backend code expects. This causes tasks to get stuck because get_next_subtask() cannot find any pending subtasks.

Root Cause Analysis:

The planner.md prompt clearly defines the expected schema:

{
  "phases": [{
    "id": "phase-1-backend",
    "name": "Backend API",
    "subtasks": [{
      "id": "subtask-1-1",
      "description": "Create data models",
      "status": "pending"
    }]
  }]
}

But the AI generates a different schema:

{
  "phases": [{
    "phase_id": "1",           // ❌ Should be "id"
    "title": "Research",       // ❌ Should be "name"
    "status": "not_started",   // ❌ Should not exist at phase level
    "subtasks": [{
      "subtask_id": "1.1",     // ❌ Should be "id"
      "title": "Research...",  // ❌ Should be "description"
      "status": "not_started"  // ❌ Should be "pending"
    }]
  }]
}

Why this breaks the system:

  1. apps/backend/core/progress.py:444 checks subtask.get("status") == "pending":
# Line 444 in get_next_subtask()
for subtask in phase.get("subtasks", []):
    if subtask.get("status") == "pending":  # ❌ "not_started" won't match!
        return {...}
  2. apps/backend/spec/validate_pkg/schemas.py:54 defines valid status values:
"subtask_schema": {
    "required_fields": ["id", "description", "status"],  # Not subtask_id, title
    "status_values": ["pending", "in_progress", "completed", "blocked", "failed"],
    # ❌ "not_started" is NOT a valid status!
}
  3. The ImplementationPlanValidator exists, but validation is not enforced after the AI writes the file. The sketch below shows the resulting dead end.
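
A minimal, self-contained sketch of the failure (the function below mirrors the status check described above, not the full get_next_subtask(); the plan literal is abridged from the AI-generated JSON):

def get_next_subtask(plan: dict) -> dict | None:
    # Mirrors the check at progress.py:444
    for phase in plan.get("phases", []):
        for subtask in phase.get("subtasks", []):
            if subtask.get("status") == "pending":
                return subtask
    return None

ai_plan = {
    "phases": [{
        "phase_id": "1",
        "subtasks": [{"subtask_id": "1.1", "status": "not_started"}],
    }]
}

print(get_next_subtask(ai_plan))  # None -> the task is stuck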

Related Issues:

Steps to reproduce

  1. Create a new task with any description
  2. Start the task - it enters Planning mode
  3. Wait for the AI Planner to complete and write implementation_plan.json
  4. Check the generated JSON file - it will have status: "not_started" instead of "pending"
  5. The task gets stuck because get_next_subtask() returns None
  6. Manual workaround: Stop the task, manually edit the JSON to change not_started → pending, then Resume (a scripted version follows the list)
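
For convenience, the workaround can be scripted. A minimal sketch, assuming the plan file is in the current directory (adjust plan_path to the task's actual spec directory):

import json
from pathlib import Path

plan_path = Path("implementation_plan.json")  # hypothetical location; adjust per task
plan = json.loads(plan_path.read_text(encoding="utf-8"))

# Rewrite the non-standard status so get_next_subtask() can match again
for phase in plan.get("phases", []):
    for subtask in phase.get("subtasks", []):
        if subtask.get("status") == "not_started":
            subtask["status"] = "pending"

plan_path.write_text(json.dumps(plan, indent=2), encoding="utf-8")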

Expected behavior

  1. AI should strictly follow the schema defined in planner.md prompt
  2. System should validate AI output against IMPLEMENTATION_PLAN_SCHEMA after the Write tool saves the file
  3. If validation fails, system should either:
    • Auto-fix with auto_fix_plan() (normalize field names and status values)
    • Or reject and ask AI to regenerate

Logs / Screenshots

Actual JSON generated by AI:

{
  "spec_id": "002-add-upstream-connection-test",
  "phases": [
    {
      "phase_id": "1",
      "title": "Research & Design",
      "status": "not_started",
      "subtasks": [
        {
          "subtask_id": "1.1",
          "title": "Research provider-specific test endpoints",
          "description": "Research lightweight API endpoints...",
          "status": "not_started",
          "files_to_modify": [],
          "notes": ""
        }
      ]
    }
  ]
}

Expected JSON per planner.md:

{
  "feature": "Add Upstream Connection Test",
  "phases": [
    {
      "id": "phase-1-research",
      "name": "Research & Design",
      "subtasks": [
        {
          "id": "subtask-1-1",
          "description": "Research provider-specific test endpoints",
          "status": "pending",
          "files_to_modify": []
        }
      ]
    }
  ]
}

Suggested Fix Locations:

  1. Immediate fix (fail-fast): Add validation in apps/backend/agents/coder.py:226 before get_next_subtask():
# Before entering coder loop
from spec.validate_pkg.validators import ImplementationPlanValidator
validator = ImplementationPlanValidator(spec_dir)
result = validator.validate()
if not result.valid:
    raise ValueError(f"Invalid implementation plan: {result.errors}")
  2. Auto-fix enhancement: Extend apps/backend/spec/validate_pkg/auto_fix.py to normalize (a sketch of how the aliases might be applied follows the list):
STATUS_ALIASES = {"not_started": "pending"}
FIELD_ALIASES = {
    "phase_id": "id",
    "subtask_id": "id", 
    "title": "description"  # for subtasks
}
  3. Runtime tolerance (optional): Update progress.py:444 to accept both:
if subtask.get("status") in ("pending", "not_started"):
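
A sketch of how the alias tables from item 2 might be applied (hypothetical helper; the real structure of auto_fix.py may differ). Since "title" maps to "name" at phase level but to "description" at subtask level, the sketch splits the field aliases per level:

STATUS_ALIASES = {"not_started": "pending"}
PHASE_FIELD_ALIASES = {"phase_id": "id", "title": "name"}
SUBTASK_FIELD_ALIASES = {"subtask_id": "id", "title": "description"}

def normalize_plan(plan: dict) -> dict:
    for phase in plan.get("phases", []):
        # Rename non-standard phase fields without clobbering existing ones
        for old, new in PHASE_FIELD_ALIASES.items():
            if old in phase and new not in phase:
                phase[new] = phase.pop(old)
        phase.pop("status", None)  # the schema has no phase-level status
        for subtask in phase.get("subtasks", []):
            for old, new in SUBTASK_FIELD_ALIASES.items():
                if old in subtask and new not in subtask:
                    subtask[new] = subtask.pop(old)
            # Map "not_started" to "pending"; pass known statuses through unchanged
            status = subtask.get("status")
            subtask["status"] = STATUS_ALIASES.get(status, status)
    return plan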
