50 changes: 50 additions & 0 deletions .agents/rules/ai-feedback-learning-loop.md
@@ -0,0 +1,50 @@
# AI Feedback Learning Loop

> Documents the process for AI agents to propose rule changes and await human approval. Prevents silent rule drift between sessions.

## When to Propose a Change

Propose a rule change when any of the following happen:

- A human corrects your approach during development and the correction is worth preserving project-wide
- You discover a pattern, constraint, or convention that recurs across multiple tasks and is not yet captured in `.agents/rules/`
- A tool, command, or workflow behaves differently than what the rules describe
- An accepted convention turns out to be incorrect or outdated

Do not propose a change for one-off situations specific to a single task.

## Proposal Format

When you identify a change worth proposing, output the following block in your response:

```
## Rule Change Proposal

**Triggered by**: [What correction or learning surfaced this]
**Proposed rule**: [Exact text to add/change/remove in base.md]
**Section**: [Which section of base.md this belongs in]
**Rationale**: [Why this is worth preserving project-wide]

Awaiting human approval before applying.
```

Include this block in-line in your response — do not create a file or modify any `.agents/rules/` file until the human explicitly approves.

## One-at-a-Time Constraint

Only propose one rule change per session. If multiple corrections surface, log them in `specs/<feature>/lessons-learned.md` and propose them one at a time in future sessions after the current proposal is resolved.

## Human Approval Required

AI agents MUST NOT self-apply rule changes. The rule files in `.agents/rules/` are governance documents — changes require human review and explicit approval. This prevents agents from silently shifting project conventions based on in-session learning that may be wrong or context-specific.

Approval means the human explicitly says something like "approved", "apply it", or "go ahead and update base.md". Implicit agreement or lack of objection is not approval.

## How to Apply After Approval

Once a human explicitly approves a rule change:

1. Open `.agents/rules/base.md` (or the relevant rule file)
2. Apply the exact change described in the proposal — no scope creep
3. Open a PR with the change; include `Rule-Change-Approval: <ref>` in the PR description (reference the conversation, issue, or comment where approval was given)
4. CI will verify the approval reference is present before allowing the merge
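
The CI gate in step 4 could be sketched as follows. This is a minimal sketch, not the actual workflow: the function name and the exact pattern are assumptions, and the real check may be stricter (for example, validating that `<ref>` resolves to a real comment or issue).

```python
import re


def has_approval_ref(pr_body: str) -> bool:
    """Return True if the PR description contains a Rule-Change-Approval line.

    The pattern is an assumption: a line starting with the literal header,
    followed by a non-empty reference.
    """
    return re.search(r"^Rule-Change-Approval:\s*\S+", pr_body, re.MULTILINE) is not None
```

A CI step could feed `gh pr view --json body --jq .body` into this check and fail the job when it returns `False`.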
157 changes: 157 additions & 0 deletions .agents/rules/base.md
@@ -0,0 +1,157 @@
# Base Agent Rules

> Single source of truth for AI agent conventions in this project. Read this file at the start of every session before doing anything else.

## Project Overview

This is a full-stack web application with a Python backend and TypeScript/Node 22 frontend.

**Backend**: Python 3.10+, FastAPI, SQLModel, PostgreSQL, managed with `uv`
**Frontend**: TypeScript, Node 22, ESM-only, managed with `bun`
**CI/CD**: GitHub Actions — automated tests, linting, Docker compose validation, Claude AI review
**Infrastructure**: Docker Compose for local development and CI

Architecture: HTTP API (backend) consumed by a single-page frontend. Services run in Docker containers locally and in staging/production via GitHub Actions deployments.

```
full-stack-agentic/
backend/ Python FastAPI application
frontend/ TypeScript/Node 22 frontend
.github/ CI workflows, PR templates, CODEOWNERS
.agents/rules/ AI agent conventions (this directory)
.claude/ Claude Code settings and commands
specs/ Feature specifications (SpecKit artifacts)
```

## Project Structure

```
.
├── backend/
│ ├── app/ Application source (routers, models, services)
│ ├── tests/ pytest test suite
│ └── pyproject.toml
├── frontend/
│ ├── src/ TypeScript source
│ └── package.json
├── .github/
│ ├── workflows/ CI workflow files
│ ├── CODEOWNERS Auto-review assignments
│ └── pull_request_template.md
├── .agents/
│ └── rules/ Agent rule files (base.md, coding-standards.md, ai-feedback-learning-loop.md)
├── .claude/
│ ├── commands/ SpecKit skill files
│ └── settings.json Shared Claude tool permissions
└── specs/ Per-feature SpecKit artifacts
```

## Available Commands

### Backend Tests

```bash
uv run pytest
```

```bash
uv run python -m pytest
```

### Backend Lint

```bash
uv run ruff check .
```

```bash
uv run ruff format .
```

### Backend Type Check

```bash
uv run ty check
```

### Frontend Tests

```bash
bun run test
```

### Frontend Lint

```bash
bun run lint
```

### Docker (local dev)

```bash
docker compose up
```

```bash
docker compose down
```

### SpecKit Scripts

```bash
.specify/scripts/bash/<script-name>
```

## Testing Conventions

- Backend tests live in `backend/tests/`, mirroring the `app/` package structure
  - Run with `uv run pytest` from the repo root or the `backend/` directory
- Frontend tests are colocated (`*.test.ts`) alongside source files
  - Run with `bun run test` from `frontend/`
- Test behavior, not implementation — tests verify what code does, not how
- Mock only boundaries: network, filesystem, external services
- Every error path the code handles should have a test
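
As a sketch of the last bullet, a pytest case for a hypothetical `parse_port` helper that exercises its error path explicitly (the helper and test are illustrative, not part of this codebase):

```python
import pytest


def parse_port(raw: str) -> int:
    """Hypothetical helper: parse a TCP port, rejecting out-of-range values."""
    port = int(raw)
    if not 1 <= port <= 65535:
        raise ValueError(f"port {port} out of range 1-65535")
    return port


def test_parse_port_rejects_out_of_range() -> None:
    # The error path gets its own test, not just the happy path.
    with pytest.raises(ValueError, match="out of range"):
        parse_port("70000")
```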

## SpecKit Workflow

This project uses SpecKit for structured feature development. The phases are:
`/speckit.specify` → `/speckit.clarify` → `/speckit.plan` → `/speckit.tasks` → `/speckit.implement`

Artifacts live in `specs/<feature-id>-<feature-name>/`.

### Retro Gate (mandatory)

After every SpecKit phase command completes (`/speckit.specify`, `/speckit.plan`,
`/speckit.tasks`, `/speckit.implement`), run `/speckit.retro` before starting
the next phase. Do not proceed until the retro produces a "Ready" status.

**Micro-retro** (after each task in `/speckit.implement`):
1. Simplify the code just written
2. Log anything unexpected to `specs/<feature>/lessons-learned.md`
3. Check whether `tasks.md`, `plan.md`, or `spec.md` needs updating
4. Suggest `/clear` before the next task

**Phase retro** (after each phase command):
1. Summarize what was produced
2. Review `lessons-learned.md`
3. Check all earlier artifacts for drift
4. Propose constitution/rules updates (never self-apply — await human approval)
5. Confirm readiness gate (5 items)
6. Suggest `/clear` with specific files to re-read

## Rule Proposal Process

When you receive a correction or discover a pattern worth preserving project-wide, propose a rule change following the process in `.agents/rules/ai-feedback-learning-loop.md`.

Key constraint: **never self-apply rule changes**. Always propose and wait for explicit human approval before modifying any file in `.agents/rules/`.

## SDD Development Workflow

1. **Branch**: Create a feature branch from `master` — `git checkout -b <id>-<short-name>`
2. **Develop**: Implement the feature; run tests and linter locally before committing
3. **PR**: Open a pull request against `master` using the PR template in `.github/pull_request_template.md`
4. **CI**: GitHub Actions runs tests, lint, and automated Claude code review
5. **Review**: CODEOWNERS auto-requests designated reviewers for affected paths
6. **Merge**: Merge after CI passes and reviewers approve — no direct pushes to `master`

Changes to `.agents/rules/base.md` require a `Rule-Change-Approval: <ref>` line in the PR description. CI will block the PR if absent.
82 changes: 82 additions & 0 deletions .agents/rules/coding-standards.md
@@ -0,0 +1,82 @@
# Coding Standards

> Agreed-upon code quality standards for AI agents to apply consistently across this project.

## Python Standards

**Runtime**: Python 3.10+, managed with `uv`

**Linting and formatting**: `ruff` — run `uv run ruff check .` and `uv run ruff format .`
**Type checking**: `ty` — run `uv run ty check`

**Naming**:
- Modules, functions, variables: `snake_case`
- Classes: `PascalCase`
- Constants: `UPPER_SNAKE_CASE`

**Type annotations**: Required on all non-trivial public functions and methods. Use `from __future__ import annotations` for forward references.

**Import order** (enforced by ruff):
1. Standard library
2. Third-party packages
3. Local application imports

No relative imports (`..`) — use absolute imports only.

**Function size**: Max 100 lines, cyclomatic complexity ≤ 8, max 5 positional parameters.

**Docstrings**: Google-style on non-trivial public APIs. Code should be self-documenting — only add docstrings where the purpose isn't obvious from names and types.
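
A short illustration of these conventions together (the `fetch_user_name` function and its data are hypothetical):

```python
from __future__ import annotations

MAX_RETRIES = 3  # constant: UPPER_SNAKE_CASE


def fetch_user_name(user_id: int, *, default: str | None = None) -> str | None:
    """Return the display name for a user, or ``default`` if not found.

    Args:
        user_id: Primary key of the user row.
        default: Value returned when the user does not exist.
    """
    users = {1: "ada", 2: "grace"}  # stand-in for a real lookup
    return users.get(user_id, default)
```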

## TypeScript Standards

**Runtime**: Node 22, ESM-only (`"type": "module"` in package.json)
**Package manager**: `bun`

**Linting and formatting**: `oxlint` and `oxfmt`

**tsconfig.json** — enable all strictness flags:
- `strict: true`
- `noUncheckedIndexedAccess: true`
- `exactOptionalPropertyTypes: true`
- `noImplicitOverride: true`
- `noPropertyAccessFromIndexSignature: true`
- `verbatimModuleSyntax: true`
- `isolatedModules: true`
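
Assembled into a fragment, a `tsconfig.json` enabling these flags might look like this (the `module` settings are illustrative, not prescribed by this document):

```jsonc
{
  "compilerOptions": {
    "strict": true,
    "noUncheckedIndexedAccess": true,
    "exactOptionalPropertyTypes": true,
    "noImplicitOverride": true,
    "noPropertyAccessFromIndexSignature": true,
    "verbatimModuleSyntax": true,
    "isolatedModules": true,
    "module": "NodeNext",            // illustrative, pick to match the build setup
    "moduleResolution": "NodeNext"
  }
}
```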

**Naming**:
- Variables, functions: `camelCase`
- Classes, types, interfaces: `PascalCase`
- Constants: `UPPER_SNAKE_CASE`
- Files: `kebab-case.ts`

No relative imports (`../`) across package boundaries — use absolute imports configured via `tsconfig.json` paths.

**Test files**: Colocated as `*.test.ts` alongside source files.

## Testing Principles

**Test behavior, not implementation.** Tests verify what the code does, not how. A refactor that doesn't change behavior must not break tests.

**Test edges and errors, not just the happy path.** Empty inputs, boundary values, malformed data, missing files — bugs live in edges. Every error path the code handles should have a test that triggers it.

**Mock only boundaries.** Mock things that are slow (network, filesystem), non-deterministic (time, randomness), or external services you don't control. Never mock internal application logic.
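
The boundary rule can be sketched with a hypothetical weather client: the network-facing function is patched, while the internal formatting logic runs for real.

```python
from unittest.mock import patch


def fetch_temperature(city: str) -> float:
    """Hypothetical boundary: in the real app this would hit the network."""
    raise RuntimeError("network disabled in tests")


def describe_weather(city: str) -> str:
    """Internal logic under test: never mocked."""
    return f"{city}: {fetch_temperature(city):.1f}°C"


def test_describe_weather_formats_reading() -> None:
    # Mock the boundary (network); exercise the real formatting code.
    with patch(f"{__name__}.fetch_temperature", return_value=21.456):
        assert describe_weather("Lisbon") == "Lisbon: 21.5°C"
```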

**Verify tests catch failures.** Break the code, confirm the test fails, then fix. Tests that always pass regardless of implementation are useless.

## Error Handling

**Fail fast with clear, actionable messages.** When something goes wrong, raise immediately with context: what operation failed, what input caused it, and what the caller should do.

**Never swallow exceptions silently.** No bare `except: pass` or `.catch(() => {})`. If an error is genuinely ignorable, add a comment explaining why.

**Include context in error messages**: what operation was attempted, what the unexpected value was, and a suggested fix where possible.

## Comments

**Code should be self-documenting.** Choose names that make the purpose obvious. If a comment explains what the code does, the code probably needs refactoring.

**No commented-out code.** Delete it. Version control exists to recover deleted code.

**Comments explain why, not what.** The only valid use for a comment is to explain a non-obvious decision, a known limitation, or a gotcha that the code cannot express.

**No `<!-- TODO -->` comments in rule files.** Open items go to `tasks.md`.
85 changes: 85 additions & 0 deletions .claude/commands/checklist.md
@@ -0,0 +1,85 @@
---
description: "OPTIONAL — Validates the quality of the requirements before implementing. Can be run at any time."
---

## Purpose

Generates a checklist that validates that the spec, the plan, and the tasks are well written, complete, and free of ambiguity. It does not block the workflow — it is an optional quality tool.

Remember: the checklist validates the **requirements**, not the code. It is a "unit test for the spec, written in English."

---

## Execution

### 1. Verify branch and PR

```bash
git branch --show-current
gh pr view --json number,state,url,body
```

- If the branch is `main` or `master`: ERROR "You are not on a feature branch. Run /status."
- If there is no PR: ERROR "There is no open PR. Did you run /start?"

### 2. Verify there is something to review

Confirm that at least `spec.md` exists in the feature directory.

```bash
ls specs/<branch-directory>/
```

If there is no spec: ERROR "There is no spec to review. Run /start first."

### 3. Delegate to speckit.checklist

Invoke `/speckit.checklist` with the context of the current phase.

`speckit.checklist` takes care of:
- Detecting which artifacts are available (spec, plan, tasks)
- Asking clarifying questions about the focus of the checklist
- Generating the file at `specs/<directory>/checklists/<domain>.md`
- Validating completeness, clarity, consistency, measurability, coverage

**Wait until `speckit.checklist` has finished completely before continuing.**

### 4. Commit the checklist

```bash
git add specs/
git commit -m "docs: add requirements checklist"
git push origin HEAD
```

### 5. Final report

```
✅ Checklist generated

📋 <path-to-checklist>

Review the items marked [Gap], [Ambiguity],
or [Conflict] before continuing with /implement.

─────────────────────────────────────────
➡️ NEXT STEP
─────────────────────────────────────────
When you are ready to implement:
/implement
─────────────────────────────────────────
```

### Session close

Read the current session context (same as `/context`).

- **🟢 / 🟡**: Show nothing.
- **🟠**: Show at the end of the report:
  ```
  🟠 Context is high. Open a new session before the next command.
  ```
- **🔴**: Show before the final report and interrupt if the user tries to continue:
  ```
  🔴 Context is critical. Open a new session NOW before continuing.
  ```