LastManStandingV2/SpecBeads

SpecBeads


Connecting specification-driven development with agent memory


Spec Kit     ⟷     Beads


πŸ§ͺ This is an Experiment

This is an experimental project exploring how two incredible open-source tools can work together. The concepts here could be applied to any AI agent that supports GitHub Spec Kit β€” not just Claude Code.


πŸ™ Standing on the Shoulders of Giants

🌱 Spec Kit by GitHub

The Brain — Planning & Principles

  • πŸ“‹ Spec-first β€” Focus on what and why before how
  • βš–οΈ Constitution management β€” Define project principles once
  • πŸ”„ Multi-step refinement β€” Not one-shot code generation
  • πŸ€– Agent agnostic β€” Works with 15+ AI agents
  • πŸš€ 90+ releases β€” Active development by GitHub

πŸ”— Beads by Steve Yegge

The Memory β€” Execution & Tracking

  • 🧠 Agent memory β€” No more forgotten context between sessions
  • πŸ”— 4 dependency types β€” blocks, related, parent-child, discovered-from
  • ⚑ Ready work detection β€” bd ready finds unblocked tasks
  • πŸ“¦ Git-distributed β€” No server needed, syncs via git
  • πŸ›‘οΈ Multi-agent safe β€” Hash-based IDs prevent merge conflicts

What is SpecBeads?

SpecBeads is a thin orchestration layer that connects these two tools:

  • Spec Kit provides the planning β€” specs, plans, tasks, and constitutional principles
  • Beads provides the execution β€” issue tracking, dependencies, and agent memory
  • SpecBeads provides bidirectional sync β€” keeping both in harmony
flowchart LR
    subgraph SPECKIT["🌱 SPEC KIT"]
        direction TB
        SPEC["spec.md"]
        PLAN["plan.md"]
        TASKS["tasks.md"]
    end

    subgraph BRIDGE["πŸŒ‰ SPECBEADS"]
        direction TB
        SYNC["Bidirectional Sync"]
        MAP[".beads-mapping.json"]
        CONST["constitution.md"]
    end

    subgraph BEADS["πŸ”— BEADS"]
        direction TB
        ISSUES["issues.jsonl"]
        READY["bd ready"]
        CLOSE["bd close"]
    end

    TASKS <-->|"/speckit.taskstobeads"| SYNC
    SYNC <--> MAP
    MAP <-->|"Task ↔ Issue IDs"| ISSUES
    CONST -->|"Enforces principles"| IMPL
    READY --> IMPL["/speckit.implementwithbeads"]
    IMPL --> CLOSE
    CLOSE -->|"Status sync"| SYNC

    style BRIDGE fill:#2d4a22,stroke:#4ade80,stroke-width:2px
    style SPECKIT fill:#1e3a5f,stroke:#60a5fa,stroke-width:2px
    style BEADS fill:#5c2d1a,stroke:#fb923c,stroke-width:2px

πŸ”„ Bidirectional Sync: Create issues from tasks, OR sync completed Beads issues back to update tasks.md. Run /speckit.taskstobeads anytime to reconcile both directions.


Why Build This?

| Gap | SpecBeads Solution |
| --- | --- |
| Spec Kit lacks execution tracking | taskstobeads converts tasks → Beads issues |
| Beads lacks specification context | implementwithbeads loads spec/plan/constitution |
| Neither enforces project principles | Constitutional compliance checks after each task |
| Disconnected planning and execution | taskstobeads provides bidirectional task ↔ issue sync |

🧠 Research-Backed AI Optimization

SpecBeads uses psychological prompting techniques that published research suggests can improve AI performance by 40-115% on complex tasks:

| Technique | Impact | Research |
| --- | --- | --- |
| Detailed Personas | 23% → 84% accuracy | ExpertPrompting (Xu et al., 2023) |
| Challenge Framing | +115% on hard tasks | EmotionPrompt (Li et al., 2023) |
| Stakes & Consequences | +10% avg performance | EmotionPrompt study |
| Step-by-Step Reasoning | 34% → 80% accuracy | OPRO (Google DeepMind, 2023) |
| Self-Evaluation | Forces internal validation | Confidence scoring (Tian et al., 2023) |

How it works:

  • Commands use senior engineer personas with 15+ years experience
  • Challenge prompts trigger competitive framing ("I bet you can't...")
  • Clear stakes with quantified consequences ("blocks ${N} downstream tasks")
  • Deep breath triggers for deliberate step-by-step reasoning
  • 0.9 confidence threshold with calibration examples prevents overconfident outputs

Example prompt enhancement:

You are a senior software architect with 15+ years in distributed systems.

**CRITICAL**: This blocks 5 downstream tasks if it fails.
**CHALLENGE**: Implement with 80%+ test coverage and <300 lines.

Take a deep breath and work through this step-by-step.

See PROMPTING_ENHANCEMENT_PLAN.md for implementation details.


✨ Key Features

πŸ”„ Bidirectional Synchronization

  • Convert tasks.md to Beads issues with full context
  • Sync completed Beads issues back to update tasks.md
  • Maintain .beads-mapping.json for task ↔ issue relationships
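
As a rough illustration, the mapping file could pair each task ID with its Beads issue, something like the sketch below. The exact schema belongs to SpecBeads and may differ; the field names here are hypothetical:

```json
{
  "feature": "user-authentication",
  "mappings": [
    { "task": "T001", "issue": "bd-a1f3", "status": "closed" },
    { "task": "T002", "issue": "bd-9c2e", "status": "open" }
  ]
}
```

With a pairing like this, a sync run can walk both directions: mark tasks.md entries done when their issues close, and create issues for tasks that have no mapping yet.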

🧠 AI Performance Optimization

  • Research-backed prompting techniques (+40-115% performance)
  • Detailed expert personas (senior engineers with 15+ years experience)
  • Challenge framing and quantified stakes
  • Self-evaluation with 0.9 confidence threshold
  • Calibrated confidence ratings prevent overconfident outputs

πŸ“‹ Rich Context in Issues

Beads issues include:

  • Relevant spec & plan sections (agents don't need to read full docs)
  • Extracted file paths from task descriptions
  • Acceptance criteria from task markdown
  • Dependency counts ("blocks N downstream tasks")
  • Persona labels (persona:test, persona:feature, etc.)
  • Challenge framing to trigger higher-quality implementation
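
For a sense of what that context bundle looks like, here is a hypothetical issues.jsonl entry (the field names are illustrative only, not the actual Beads schema):

```json
{"id": "bd-a1f3", "title": "T012: Add login endpoint", "labels": ["persona:feature"], "body": "Spec excerpt: users authenticate via POST /login. Acceptance: returns 401 on bad credentials. Blocks 3 downstream tasks.", "deps": [{"type": "blocks", "target": "bd-9c2e"}]}
```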

βš–οΈ Constitutional Compliance

  • Automatic validation after each task
  • TDD ordering enforcement (tests before implementation)
  • SOLID principles checking
  • Commit size validation (300-600 lines)
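
The commit-size rule can be sketched as a small shell check. This is only an assumption about what check-constitutional-compliance.sh does internally; check_commit_size is a hypothetical helper name, and the real script may work differently:

```shell
# Hypothetical sketch: flag changes whose total line count falls outside
# the 300-600 line window the constitution prescribes.
check_commit_size() {
  added=$1
  deleted=$2
  total=$((added + deleted))
  if [ "$total" -lt 300 ] || [ "$total" -gt 600 ]; then
    printf 'WARN: change touches %d lines (target: 300-600)\n' "$total"
    return 1
  fi
  printf 'OK: %d lines\n' "$total"
}

# In a real repo the counts could come from, e.g.:
#   git diff --numstat | awk '{a+=$1; d+=$2} END {print a, d}'
```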

πŸ”— Dependency Management

  • Phase-based dependencies (Foundational β†’ User Stories β†’ Polish)
  • Explicit task dependencies from tasks.md
  • Related links for context (tasks in same user story)
  • bd ready shows only unblocked work

Prerequisites

1. Claude Code (or your preferred AI agent)

npm install -g @anthropic-ai/claude-code

Or use any supported agent

2. GitHub Spec Kit

uv tool install specify-cli --from git+https://github.com/github/spec-kit.git
specify init . --ai claude

3. Beads

curl -fsSL https://raw.githubusercontent.com/steveyegge/beads/main/scripts/install.sh | bash
bd init

Installation

Copy all three components to your project:

1. Slash Commands

cp -r .claude/commands/* /your/project/.claude/commands/

2. Bash Scripts (Required!)

The commands depend on these automation scripts:

cp -r .specify/scripts/bash/* /your/project/.specify/scripts/bash/

Scripts included:

  • check-prerequisites.sh β€” Validates feature directory and tasks.md
  • startup-checks.sh β€” Verifies Beads is installed and initialized
  • config-loader.sh β€” Loads automation preferences
  • check-constitutional-compliance.sh β€” Post-implementation compliance checks
  • detect-already-implemented.sh β€” Smart detection of existing implementations
  • common.sh β€” Shared utilities

3. Constitution & Prompt Templates (Optional but Recommended)

# Constitution template
cp .specify/memory/constitution.md /your/project/.specify/memory/

# Psychological prompting templates (reference documentation)
cp -r .specify/prompts /your/project/.specify/

Templates included:

  • personas.md β€” Role definitions (architect, test engineer, DevOps, reviewer)
  • framing.md β€” Stakes and challenge templates
  • self-eval.md β€” Confidence rating scales with calibration examples

Note: Prompts are inlined in command files for performance. These templates serve as reference documentation and customization starting points.

Restart your AI agent to load the new commands.


Quick Start

# 1. Create specification (Spec Kit)
/speckit.specify "Add user authentication"

# 2. Generate plan (Spec Kit)
/speckit.plan

# 3. Break into tasks (Spec Kit)
/speckit.tasks

# 4. Convert to Beads issues (SpecBeads!)
/speckit.taskstobeads

# 5. Implement with compliance (SpecBeads!)
/speckit.implementwithbeads

Commands

| Command | Description |
| --- | --- |
| /speckit.taskstobeads | Convert tasks.md ↔ Beads issues (bidirectional) |
| /speckit.implementwithbeads | Implement next ready task with constitutional compliance |

Project Structure

your-project/
β”œβ”€β”€ .specify/
β”‚   β”œβ”€β”€ memory/
β”‚   β”‚   └── constitution.md          # Project principles
β”‚   β”œβ”€β”€ prompts/                       # Psychological prompting templates
β”‚   β”‚   β”œβ”€β”€ README.md                  # Template usage guide
β”‚   β”‚   β”œβ”€β”€ personas.md                # Role definitions
β”‚   β”‚   β”œβ”€β”€ framing.md                 # Stakes & challenge templates
β”‚   β”‚   └── self-eval.md               # Confidence calibration examples
β”‚   └── scripts/bash/                  # SpecBeads automation scripts
β”‚       β”œβ”€β”€ check-prerequisites.sh
β”‚       β”œβ”€β”€ startup-checks.sh
β”‚       β”œβ”€β”€ config-loader.sh
β”‚       β”œβ”€β”€ check-constitutional-compliance.sh
β”‚       β”œβ”€β”€ detect-already-implemented.sh
β”‚       └── common.sh
β”œβ”€β”€ .claude/commands/
β”‚   β”œβ”€β”€ speckit.taskstobeads.md       # Tasks ↔ Beads sync
β”‚   └── speckit.implementwithbeads.md # Implement with compliance
β”œβ”€β”€ .beads/
β”‚   └── issues.jsonl                  # Beads database
└── specs/[feature]/
    β”œβ”€β”€ spec.md
    β”œβ”€β”€ plan.md
    β”œβ”€β”€ tasks.md
    └── .beads-mapping.json           # Task ↔ Issue mapping

πŸ“š Research Foundation

SpecBeads' psychological prompting techniques are based on peer-reviewed research:

  • EmotionPrompt β€” Li et al. (2023). "Large Language Models Understand and Can be Enhanced by Emotional Stimuli." ICLR 2024. arXiv:2307.11760

  • OPRO ("Deep Breath") β€” Yang et al. (2023). "Large Language Models as Optimizers." Google DeepMind. arXiv:2309.03409

  • 26 Prompting Principles β€” Bsharat et al. (2023). "Principled Instructions Are All You Need for Questioning LLaMA-1/2, GPT-3.5/4." arXiv:2312.16171

  • ExpertPrompting β€” Xu et al. (2023). "ExpertPrompting: Instructing Large Language Models to be Distinguished Experts." arXiv:2305.14688

See PROMPTING_ENHANCEMENT_PLAN.md for full implementation details.


License

MIT License β€” see LICENSE


Specification-first. Memory-enabled. Constitution-governed. AI-optimized.

Built with appreciation for Spec Kit and Beads
