Guidelines for AI coding agents working in this Rust codebase.
If I tell you to do something, even if it goes against what follows below, YOU MUST LISTEN TO ME. I AM IN CHARGE, NOT YOU.
YOU ARE NEVER ALLOWED TO DELETE A FILE WITHOUT EXPRESS PERMISSION. Even a new file that you yourself created, such as a test code file. You have a horrible track record of deleting critically important files or otherwise throwing away tons of expensive work. As a result, you have permanently lost any and all rights to determine that a file or folder should be deleted.
YOU MUST ALWAYS ASK AND RECEIVE CLEAR, WRITTEN PERMISSION BEFORE EVER DELETING A FILE OR FOLDER OF ANY KIND.
- Absolutely forbidden commands: `git reset --hard`, `git clean -fd`, `rm -rf`, or any command that can delete or overwrite code/data must never be run unless the user explicitly provides the exact command and states, in the same message, that they understand and want the irreversible consequences.
- No guessing: If there is any uncertainty about what a command might delete or overwrite, stop immediately and ask the user for specific approval. "I think it's safe" is never acceptable.
- Safer alternatives first: When cleanup or rollbacks are needed, request permission to use non-destructive options (`git status`, `git diff`, `git stash`, copying to backups) before ever considering a destructive command.
- Mandatory explicit plan: Even after explicit user authorization, restate the command verbatim, list exactly what will be affected, and wait for a confirmation that your understanding is correct. Only then may you execute it — if anything remains ambiguous, refuse and escalate.
- Document the confirmation: When running any approved destructive command, record (in the session notes / final response) the exact user text that authorized it, the command actually run, and the execution time. If that record is absent, the operation did not happen.
The default branch is main. The master branch exists only for legacy URL compatibility.
- All work happens on `main` — commits, PRs, feature branches all merge to `main`
- Never reference `master` in code or docs — if you see `master` anywhere, it's a bug that needs fixing
- The `master` branch must stay synchronized with `main` — after pushing to `main`, also push to `master`: `git push origin main:master`
If you see `master` referenced anywhere:
- Update it to `main`
- Ensure `master` is synchronized: `git push origin main:master`
We only use Cargo in this project, NEVER any other package manager.
- Edition: Rust 2024 (nightly required)
- Dependency versions: Explicit versions for stability
- Configuration: Cargo.toml workspace with `workspace = true` pattern
- Unsafe code: Forbidden (`#![forbid(unsafe_code)]`)
This project uses Asupersync (/dp/asupersync) as the primary async runtime with explicit Cx capability tokens for structured concurrency. A Tokio compat bridge (asupersync-tokio-compat) keeps tokio-locked crates (axum, russh, reqwest) working until native alternatives are ready. The binary entry point (src/main.rs) builds an Asupersync runtime, establishes a root Cx, and runs the CLI inside with_tokio_context().
| Crate | Purpose |
|---|---|
| `asupersync` | Primary async runtime (structured concurrency, Cx tokens) |
| `asupersync-tokio-compat` | Tokio compatibility bridge for downstream crates |
| `tokio` | Compat bridge runtime (axum, russh, reqwest still use tokio internally) |
| `duckdb` | Embedded analytical database (bundled build) |
| `serde` + `serde_json` + `toml` | Serialization (JSON, TOML config) |
| `clap` | CLI argument parsing (derive mode) |
| `ratatui` + `crossterm` | Terminal UI rendering |
| `axum` + `tower` + `tower-http` | Web server, middleware, CORS, static files, tracing |
| `chrono` | Date/time handling with serde support |
| `thiserror` + `anyhow` | Error type derivation and ad-hoc errors |
| `tracing` + `tracing-subscriber` | Structured logging with env-filter and JSON output |
| `reqwest` | HTTP client for webhook delivery and external APIs |
| `russh` + `russh-keys` | SSH client for remote machine data collection |
| `uuid` | Unique IDs (v4, serde-compatible) |
| `regex` | Pattern matching in alert conditions |
| `async-trait` | Async trait support for collector and alert channel interfaces |
| `dashmap` | Concurrent hash maps for alert cooldown tracking |
| `futures` | Async stream and utility combinators |
| `rand` | Random number generation for oracle predictions |
| `proptest` | Property-based testing (dev) |
| `mockall` | Trait mocking for unit tests (dev) |
| `vergen-gix` | Build metadata embedding (build.rs) |
The release build optimizes for performance:

```toml
# (workspace-level profile not yet defined — add when ready for distribution)
```

NEVER run a script that processes/changes code files in this repo. Brittle regex-based transformations create far more problems than they solve.
- Always make code changes manually, even when there are many instances
- For many simple changes: use parallel subagents
- For subtle/complex changes: do them methodically yourself
If you want to change something or add a feature, revise existing code files in place.
NEVER create variations like:
- `mainV2.rs`
- `main_improved.rs`
- `main_enhanced.rs`
New files are reserved for genuinely new functionality that makes zero sense to include in any existing file. The bar for creating new files is incredibly high.
We do not care about backwards compatibility—we're in early development with no users. We want to do things the RIGHT way with NO TECH DEBT.
- Never create "compatibility shims"
- Never create wrapper functions for deprecated APIs
- Just fix the code directly
After any substantive code changes, you MUST verify no errors were introduced:
```bash
# Check for compiler errors and warnings (workspace-wide)
cargo check --workspace --all-targets

# Check for clippy lints (pedantic is enabled)
cargo clippy --workspace --all-targets -- -D warnings

# Verify formatting
cargo fmt --check
```

If you see errors, carefully understand and resolve each issue. Read sufficient context to fix them the RIGHT way.
Every component crate includes inline #[cfg(test)] unit tests alongside the implementation. Tests must cover:
- Happy path
- Edge cases (empty input, max values, boundary conditions)
- Error conditions
Cross-component integration tests live in the workspace tests/ directory.
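As a sketch of that expected shape, an inline test module for a small helper might look like this — `parse_percent` is a hypothetical illustrative function, not part of the actual codebase:

```rust
/// Hypothetical helper: parse a percentage string like "85%" into 0..=100.
fn parse_percent(s: &str) -> Result<u8, String> {
    let trimmed = s.trim().strip_suffix('%').ok_or("missing '%' suffix")?;
    let value: u8 = trimmed.parse().map_err(|_| "not a number".to_string())?;
    if value > 100 {
        return Err("value over 100".to_string());
    }
    Ok(value)
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn happy_path() {
        assert_eq!(parse_percent("85%"), Ok(85));
    }

    #[test]
    fn edge_cases() {
        assert_eq!(parse_percent("0%"), Ok(0)); // boundary: minimum
        assert_eq!(parse_percent("100%"), Ok(100)); // boundary: maximum
    }

    #[test]
    fn error_conditions() {
        assert!(parse_percent("").is_err()); // empty input
        assert!(parse_percent("101%").is_err()); // out of range
        assert!(parse_percent("abc%").is_err()); // non-numeric
    }
}
```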
```bash
# Run all tests across the workspace
cargo test --workspace

# Run with output
cargo test --workspace -- --nocapture

# Run tests for a specific crate
cargo test -p vc_config
cargo test -p vc_collect
cargo test -p vc_store
cargo test -p vc_query
cargo test -p vc_oracle
cargo test -p vc_guardian
cargo test -p vc_knowledge
cargo test -p vc_alert
cargo test -p vc_tui
cargo test -p vc_web
cargo test -p vc_cli
cargo test -p vc_mcp

# Run tests with all features enabled
cargo test --workspace --all-features

# Run workspace-level integration tests
cargo test --test config_smoke
cargo test --test collector_parsing
```

| Location | Focus Areas |
|---|---|
| `vc_config` | TOML parsing, environment overrides, path expansion, config linting, wizard |
| `vc_collect` | Collector trait contracts, output parsing, cursor management, SSH command execution |
| `vc_store` | DuckDB schema migrations, data ingestion, query utilities, audit events |
| `vc_query` | Health score calculation, guardrails validation, cost estimation, anomaly detection |
| `vc_oracle` | Rate limit forecasting, agent DNA fingerprinting, evolutionary optimization, experiments |
| `vc_guardian` | Playbook execution, approval workflows, autopilot mode, rate limiting |
| `vc_knowledge` | Knowledge entry CRUD, feedback scoring, keyword search |
| `vc_alert` | Rule evaluation, cooldown tracking, delivery channels |
| `vc_tui` | Screen rendering, keyboard navigation, theme application |
| `vc_web` | Axum route handling, WebSocket updates, API response formats |
| `vc_cli` | Subcommand dispatch, robot mode JSON envelopes, TOON output |
| `vc_mcp` | MCP tool/resource registration, request/response handling |
| `tests/` | Cross-component integration, E2E scenarios, collector parsing |
If you aren't 100% sure how to use a third-party library, SEARCH ONLINE to find the latest documentation and current best practices.
This is the project you're working on. vibe_cockpit is an agent fleet monitoring and orchestration system. It provides a unified TUI dashboard, web dashboard, and robot-mode CLI for monitoring dozens of AI coding agent instances across multiple Linux machines. It collects data from 15+ existing flywheel tools (ntm, caut, cass, ru, br, bv, dcg, rano, process_triage, etc.), stores everything in DuckDB for analytical queries, and provides alerting, prediction, and self-healing capabilities.
Monitors a fleet of 30+ AI coding agent accounts (Claude, GPT, Gemini) across multiple machines. Collects metrics on sessions, tokens, commits, messages, rate limits, system resources, blocked commands, beads progress, and network activity. Surfaces anomalies, forecasts rate limit exhaustion, and can autonomously remediate problems via playbooks.
```
┌─────────────────────────────────┐
│         User Interfaces         │
│  TUI (ratatui)  │  Web (axum)   │
│  CLI (robot)    │  MCP Server   │
└────────┬────────┬───────────────┘
         │        │
┌────────▼────────▼───────────────┐
│           Query Layer           │
│ Health scores, rollups, anomaly │
│   detection, cost estimation    │
└────────┬────────────────────────┘
         │
┌────────▼────────────────────────┐
│          DuckDB Store           │
│  Analytical queries, schema     │
│  migrations, audit log          │
└──────┬──────────────────┬───────┘
       │                  │
┌──────▼─────────┐  ┌─────▼──────────┐
│   Collectors   │  │     Oracle     │
│  SSH → remote  │  │  Rate limit    │
│  Local → tools │  │  forecasting,  │
│  Incremental   │  │  agent DNA,    │
│  cursors       │  │  experiments   │
└────────────────┘  └─────┬──────────┘
                          │
┌─────────────────────────▼───────────┐
│       Guardian (Self-Healing)       │
│   Playbooks, autopilot, approval    │
│   workflows, account switching      │
└──────────────┬──────────────────────┘
               │
┌──────────────▼──────────────┐
│        Alert System         │
│ Rules, thresholds, patterns │
│ cooldowns, delivery channels│
└─────────────────────────────┘
```
```
vibe_cockpit/
├── Cargo.toml               # Workspace root + vc binary crate
├── build.rs                 # Build metadata (vergen-gix)
├── src/
│   └── main.rs              # Binary entry point (Asupersync runtime + Tokio compat)
├── crates/
│   ├── vc_config/           # TOML config parsing, env overrides, machine inventory, linting
│   ├── vc_collect/          # Data collectors (SSH, local), cursor management, tool probing
│   ├── vc_store/            # DuckDB storage layer, schema migrations, audit events
│   ├── vc_query/            # Health scores, guardrails, cost estimation, aggregation
│   ├── vc_oracle/           # Prediction engine: rate limits, agent DNA, evolution, experiments
│   ├── vc_guardian/         # Self-healing: playbooks, autopilot, approval workflows
│   ├── vc_knowledge/        # Knowledge base: solutions, patterns, prompts, debug logs
│   ├── vc_alert/            # Alert rules, condition evaluation, delivery channels
│   ├── vc_tui/              # Terminal UI (ratatui): 12 screens, themes, widgets
│   ├── vc_web/              # Web server (axum): JSON API, WebSocket, static files
│   ├── vc_cli/              # CLI commands (clap): subcommands, robot mode, TOON output
│   └── vc_mcp/              # MCP server: tools and resources for agent consumers
├── tests/                   # Workspace-level integration tests
│   ├── config_smoke.rs      # Config loading smoke test
│   ├── collector_parsing.rs # Collector output parsing tests
│   ├── common/              # Shared test utilities
│   └── e2e/                 # End-to-end test scenarios
└── docs/
    └── schemas/             # JSON schema definitions
```
| Crate | Key Files | Purpose |
|---|---|---|
| `vc_config` | `src/lib.rs` | VcConfig, MachineConfig, WebConfig, config linting, wizard |
| `vc_collect` | `src/lib.rs`, `src/collectors/`, `src/ssh.rs`, `src/remote.rs` | Collector trait, SSH runner, MultiMachineCollector, tool probing |
| `vc_store` | `src/lib.rs`, `src/migrations.rs`, `src/schema.rs` | VcStore, DuckDB connection, migrations, audit events |
| `vc_query` | `src/lib.rs`, `src/guardrails.rs`, `src/cost.rs` | QueryBuilder, HealthScore, FleetOverview, cost estimation, guardrails |
| `vc_oracle` | `src/lib.rs`, `src/rate_limit.rs`, `src/dna.rs`, `src/evolution.rs`, `src/experiment.rs` | RateLimitForecaster, AgentDna, EvolutionManager, ExperimentManager |
| `vc_guardian` | `src/lib.rs`, `src/autopilot.rs` | Playbook, PlaybookTrigger, PlaybookStep, autopilot loop |
| `vc_knowledge` | `src/lib.rs` | KnowledgeBase, EntryType (solution, pattern, prompt, debug_log), feedback |
| `vc_alert` | `src/lib.rs` | AlertRule, AlertCondition, Severity, cooldown, delivery channels |
| `vc_tui` | `src/lib.rs`, `src/screens/`, `src/theme.rs`, `src/widgets.rs` | 12 Screen variants, render_* functions, Theme |
| `vc_web` | `src/lib.rs` | Axum Router, JSON API, WebSocket, CORS, static file serving |
| `vc_cli` | `src/lib.rs`, `src/robot.rs`, `src/schema_registry.rs` | Cli (clap), RobotEnvelope, SchemaRegistry, OutputFormat |
| `vc_mcp` | `src/lib.rs` | McpServer, McpTool, McpResource |
vibe_cockpit polls these tools on local and remote machines:
| Tool | Data Collected |
|---|---|
| `ntm` | Agent session orchestration, task assignments |
| `caut` (coding_agent_usage_tracker) | Account usage, remaining credit, reset dates |
| `cass` (coding_agent_session_search) | Session metadata, token counts, durations, compactions |
| `ru` (repo_updater) | Git commits, dirty repos, uncommitted code (LoC/tokens), issues, PRs |
| `br` (beads_rust) | Task status, priority, dependencies, completion rates |
| `bv` (beads_viewer) | Triage metrics, PageRank, critical path, velocity |
| `dcg` (destructive_command_guard) | Blocked commands by type, machine, repo |
| `rano` | Network activity, bandwidth, request counts per agent |
| `process_triage` | Zombie/runaway processes, system performance |
| `rch` (remote_compilation_helper) | Remote build counts, worker utilization |
| `mcp_agent_mail` | Agent messages, thread counts, reservation conflicts |
| `cloud_benchmarker` | VPS instance benchmarks, monitoring data |
| Screen | Key Binding | Shows |
|---|---|---|
| Overview | `1` | Fleet health summary, top alerts, active agents |
| Machines | `2` | Per-machine status (CPU, memory, disk, load) |
| Repos | `3` | Repository activity, commits, dirty state |
| Accounts | `4` | Provider accounts, usage percentages, reset dates |
| Sessions | `5` | Active agent sessions, tokens, durations |
|  | `6` | Agent-to-agent messages, thread activity |
| Alerts | `7` | Active/resolved alerts, severity breakdown |
| Guardian | `8` | Playbook runs, autopilot status, actions taken |
| Oracle | `9` | Predictions, rate limit forecasts, agent DNA |
| Events | `0` | Audit log, recent collector runs |
| Beads | `b` | Task progress, velocity, completion rates |
| Settings | `s` | Configuration display |
The CLI provides agent-optimized output via `vc robot <subcommand>`:
- Output formats: JSON (default), TOON (token-optimized), Text
- All output wrapped in `RobotEnvelope` with metadata (timestamp, version, format)
- Subcommands mirror TUI screens: `status`, `health`, `triage`, `accounts`, etc.
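As a dependency-free sketch of that envelope shape — the field names come from the metadata list above, while the `data` field and the hand-rolled JSON rendering are illustrative assumptions (the real struct lives in `vc_cli` and would derive serde traits):

```rust
// Hypothetical sketch of the RobotEnvelope shape; not the real vc_cli type.
struct RobotEnvelope {
    timestamp: String, // collection time (e.g. RFC 3339)
    version: String,   // vc binary version
    format: String,    // "json" | "toon" | "text"
    data: String,      // subcommand payload, pre-serialized here for simplicity
}

impl RobotEnvelope {
    /// Hand-rolled JSON so the sketch stays std-only; the real
    /// implementation would use serde_json instead.
    fn to_json(&self) -> String {
        format!(
            "{{\"timestamp\":\"{}\",\"version\":\"{}\",\"format\":\"{}\",\"data\":{}}}",
            self.timestamp, self.version, self.format, self.data
        )
    }
}
```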
| Type | Purpose |
|---|---|
| `VcConfig` | Root configuration (machines, collectors, alerts, web server settings) |
| `VcStore` | DuckDB connection wrapper with migrations and query helpers |
| `Collector` | Async trait for data source implementations |
| `Cursor` | Incremental collection state (timestamp, offset, hash) |
| `HealthScore` | Per-machine health with weighted factors |
| `FleetOverview` | Aggregated fleet metrics for dashboard |
| `QueryBuilder` | Safe query construction with guardrails |
| `RateLimitForecaster` | Predicts account rate limit exhaustion |
| `AgentDna` | Behavioral fingerprint for anomaly detection |
| `Playbook` | Automated remediation workflow definition |
| `AlertRule` | Condition-based alert with cooldowns and channels |
| `McpServer` | MCP protocol server for agent tool/resource access |
| `RobotEnvelope` | Standardized JSON wrapper for robot-mode output |
| `Screen` | TUI screen enum (12 variants) |
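`HealthScore` aggregates weighted factors per machine. A minimal std-only sketch of how such a weighted score could be computed — the factor representation and weights here are illustrative assumptions, not the real `vc_query` implementation:

```rust
/// Illustrative weighted health score: each factor is (value, weight),
/// where value is 0.0..=1.0 (1.0 = fully healthy) and weight expresses
/// relative importance. Not the real vc_query implementation.
fn health_score(factors: &[(f64, f64)]) -> f64 {
    let total_weight: f64 = factors.iter().map(|(_, w)| w).sum();
    if total_weight == 0.0 {
        return 0.0; // no factors: report unhealthy rather than divide by zero
    }
    let weighted: f64 = factors.iter().map(|(v, w)| v * w).sum();
    (weighted / total_weight).clamp(0.0, 1.0)
}
```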
- DuckDB over SQLite — analytical workloads (aggregations, time-series rollups) are the primary query pattern; DuckDB excels here
- Asupersync async runtime — structured concurrency with Cx capability tokens; Tokio compat bridge for axum, russh, reqwest
- SSH-based remote collection — no agent installation required on remote machines; collectors run commands over SSH
- Incremental collection with cursors — avoids rescanning entire histories every poll cycle
- Fail-soft collectors — a broken collector shows "stale" data, never crashes the system
- Timeout-bounded collection — no collector can hang the system
- Idempotent inserts — same source payload never creates duplicates
- Versioned collector output — every collector has `schema_version` for forward evolution
- Robot mode for agents — the same data accessible to humans via TUI is available to agents via JSON CLI
- Structured tracing — all operations emit tracing spans for diagnostics
- Audit log — every collector run, autopilot action, and user command is recorded
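The cursor and idempotent-insert decisions above can be sketched together in a std-only toy — the `Cursor` shape, `ingest` method, and hash function are illustrative assumptions, not the real `vc_collect`/`vc_store` types:

```rust
use std::collections::HashSet;

/// Illustrative incremental-collection cursor: remembers how far into the
/// source we have read, and which payload hashes were already ingested
/// (so re-delivered payloads never create duplicates).
struct Cursor {
    offset: usize,      // resume point into the source
    seen: HashSet<u64>, // fingerprints of payloads already stored
}

impl Cursor {
    fn new() -> Self {
        Self { offset: 0, seen: HashSet::new() }
    }

    /// Ingest only records past the cursor whose hash is unseen.
    /// Returns the records actually inserted.
    fn ingest<'a>(&mut self, records: &[&'a str]) -> Vec<&'a str> {
        let mut inserted = Vec::new();
        for rec in &records[self.offset.min(records.len())..] {
            if self.seen.insert(fingerprint(rec)) {
                inserted.push(*rec);
            }
        }
        self.offset = records.len();
        inserted
    }
}

/// Toy FNV-1a hash, a stand-in for a real payload fingerprint.
fn fingerprint(s: &str) -> u64 {
    s.bytes()
        .fold(0xcbf29ce484222325u64, |h, b| (h ^ b as u64).wrapping_mul(0x100000001b3))
}
```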
A mail-like layer that lets coding agents coordinate asynchronously via MCP tools and resources. Provides identities, inbox/outbox, searchable threads, and advisory file reservations with human-auditable artifacts in Git.
- Prevents conflicts: Explicit file reservations (leases) for files/globs
- Token-efficient: Messages stored in per-project archive, not in context
- Quick reads: `resource://inbox/...`, `resource://thread/...`
- Register identity: `ensure_project(project_key=<abs-path>)`, `register_agent(project_key, program, model)`
- Reserve files before editing: `file_reservation_paths(project_key, agent_name, ["src/**"], ttl_seconds=3600, exclusive=true)`
- Communicate with threads: `send_message(..., thread_id="FEAT-123")`, `fetch_inbox(project_key, agent_name)`, `acknowledge_message(project_key, agent_name, message_id)`
- Quick reads: `resource://inbox/{Agent}?project=<abs-path>&limit=20`, `resource://thread/{id}?project=<abs-path>&include_bodies=true`
- Prefer macros for speed: `macro_start_session`, `macro_prepare_thread`, `macro_file_reservation_cycle`, `macro_contact_handshake`
- Use granular tools for control: `register_agent`, `file_reservation_paths`, `send_message`, `fetch_inbox`, `acknowledge_message`
- "from_agent not registered": Always `register_agent` in the correct `project_key` first
- "FILE_RESERVATION_CONFLICT": Adjust patterns, wait for expiry, or use non-exclusive reservation
- Auth errors: If JWT+JWKS enabled, include bearer token with matching `kid`
Beads provides a lightweight, dependency-aware issue database and CLI (br - beads_rust) for selecting "ready work," setting priorities, and tracking status. It complements MCP Agent Mail's messaging and file reservations.
Important: br is non-invasive—it NEVER runs git commands automatically. You must manually commit changes after br sync --flush-only.
- Single source of truth: Beads for task status/priority/dependencies; Agent Mail for conversation and audit
- Shared identifiers: Use Beads issue ID (e.g., `br-123`) as Mail `thread_id` and prefix subjects with `[br-123]`
- Reservations: When starting a task, call `file_reservation_paths()` with the issue ID in `reason`
- Pick ready work (Beads): `br ready --json` (choose highest priority, no blockers)
- Reserve edit surface (Mail): `file_reservation_paths(project_key, agent_name, ["src/**"], ttl_seconds=3600, exclusive=true, reason="br-123")`
- Announce start (Mail): `send_message(..., thread_id="br-123", subject="[br-123] Start: <title>", ack_required=true)`
- Work and update: Reply in-thread with progress
- Complete and release: `br close 123 --reason "Completed"`, then `br sync --flush-only` (export to JSONL, no git operations), then `release_file_reservations(project_key, agent_name, paths=["src/**"])`. Final Mail reply: `[br-123] Completed` with summary
| Concept | Value |
|---|---|
| Mail `thread_id` | `br-###` |
| Mail subject | `[br-###] ...` |
| File reservation `reason` | `br-###` |
| Commit messages | Include `br-###` for traceability |
bv is a graph-aware triage engine for Beads projects (.beads/beads.jsonl). It computes PageRank, betweenness, critical path, cycles, HITS, eigenvector, and k-core metrics deterministically.
Scope boundary: bv handles what to work on (triage, priority, planning). For agent-to-agent coordination (messaging, work claiming, file reservations), use MCP Agent Mail.
CRITICAL: Use ONLY --robot-* flags. Bare bv launches an interactive TUI that blocks your session.
`bv --robot-triage` is your single entry point. It returns:
- `quick_ref`: at-a-glance counts + top 3 picks
- `recommendations`: ranked actionable items with scores, reasons, unblock info
- `quick_wins`: low-effort high-impact items
- `blockers_to_clear`: items that unblock the most downstream work
- `project_health`: status/type/priority distributions, graph metrics
- `commands`: copy-paste shell commands for next steps
```bash
bv --robot-triage   # THE MEGA-COMMAND: start here
bv --robot-next     # Minimal: just the single top pick + claim command
```

Planning:
| Command | Returns |
|---|---|
| `--robot-plan` | Parallel execution tracks with unblocks lists |
| `--robot-priority` | Priority misalignment detection with confidence |
Graph Analysis:
| Command | Returns |
|---|---|
| `--robot-insights` | Full metrics: PageRank, betweenness, HITS, eigenvector, critical path, cycles, k-core, articulation points, slack |
| `--robot-label-health` | Per-label health: health_level, velocity_score, staleness, blocked_count |
| `--robot-label-flow` | Cross-label dependency: flow_matrix, dependencies, bottleneck_labels |
| `--robot-label-attention [--attention-limit=N]` | Attention-ranked labels |
History & Change Tracking:
| Command | Returns |
|---|---|
| `--robot-history` | Bead-to-commit correlations |
| `--robot-diff --diff-since <ref>` | Changes since ref: new/closed/modified issues, cycles |
Other:
| Command | Returns |
|---|---|
| `--robot-burndown <sprint>` | Sprint burndown, scope changes, at-risk items |
| `--robot-forecast <id\|all>` | ETA predictions with dependency-aware scheduling |
| `--robot-alerts` | Stale issues, blocking cascades, priority mismatches |
| `--robot-suggest` | Hygiene: duplicates, missing deps, label suggestions |
| `--robot-graph [--graph-format=json\|dot\|mermaid]` | Dependency graph export |
| `--export-graph <file.html>` | Interactive HTML visualization |
```bash
bv --robot-plan --label backend            # Scope to label's subgraph
bv --robot-insights --as-of HEAD~30        # Historical point-in-time
bv --recipe actionable --robot-plan        # Pre-filter: ready to work
bv --recipe high-impact --robot-triage     # Pre-filter: top PageRank
bv --robot-triage --robot-triage-by-track  # Group by parallel work streams
bv --robot-triage --robot-triage-by-label  # Group by domain
```

All robot JSON includes:
- `data_hash` — fingerprint of source beads.jsonl
- `status` — per-metric state: `computed|approx|timeout|skipped` + elapsed ms
- `as_of` / `as_of_commit` — present when using `--as-of`
Two-phase analysis:
- Phase 1 (instant): degree, topo sort, density
- Phase 2 (async, 500ms timeout): PageRank, betweenness, HITS, eigenvector, cycles
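That timeout-bounded pattern (run an expensive metric in the background, then report `computed` or `timeout` at the deadline) can be sketched with std threads and channels — an illustrative toy, not bv's actual implementation:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

/// Run `work` with a deadline; report ("computed", Some(value)) if it
/// finishes in time, or ("timeout", None) if the deadline passes first.
fn bounded<T: Send + 'static>(
    deadline: Duration,
    work: impl FnOnce() -> T + Send + 'static,
) -> (&'static str, Option<T>) {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // If the receiver already gave up, this send fails harmlessly.
        let _ = tx.send(work());
    });
    match rx.recv_timeout(deadline) {
        Ok(value) => ("computed", Some(value)),
        Err(_) => ("timeout", None),
    }
}
```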
```bash
bv --robot-triage | jq '.quick_ref'                 # At-a-glance summary
bv --robot-triage | jq '.recommendations[0]'        # Top recommendation
bv --robot-plan | jq '.plan.summary.highest_impact' # Best unblock target
bv --robot-insights | jq '.status'                  # Check metric readiness
bv --robot-insights | jq '.Cycles'                  # Circular deps (must fix!)
```

Golden Rule: `ubs <changed-files>` before every commit. Exit 0 = safe. Exit >0 = fix & re-run.
```bash
ubs file.rs file2.rs                  # Specific files (< 1s) — USE THIS
ubs $(git diff --name-only --cached)  # Staged files — before commit
ubs --only=rust,toml src/             # Language filter (3-5x faster)
ubs --ci --fail-on-warning .          # CI mode — before PR
ubs .                                 # Whole project (ignores target/, Cargo.lock)
```

Output format:

```
Category (N errors)
  file.rs:42:5 - Issue description
  Suggested fix
Exit code: 1
```

Parse: `file:line:col` -> location | `Suggested fix` -> how to fix | Exit 0/1 -> pass/fail
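A minimal std-only sketch of parsing a finding line of that shape — the exact ubs output may carry more fields, so treat this as illustrative:

```rust
/// Parse a finding like "file.rs:42:5 - Issue description"
/// into (path, line, col, message). Returns None on malformed input.
fn parse_finding(s: &str) -> Option<(&str, u32, u32, &str)> {
    let (loc, msg) = s.trim().split_once(" - ")?;
    // Split from the right so paths containing ':' (rare) still work.
    let mut parts = loc.rsplitn(3, ':');
    let col = parts.next()?.parse().ok()?;
    let line = parts.next()?.parse().ok()?;
    let path = parts.next()?;
    Some((path, line, col, msg))
}
```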
- Read finding -> category + fix suggestion
- Navigate `file:line:col` -> view context
- Verify real issue (not false positive)
- Fix root cause (not symptom)
- Re-run `ubs <file>` -> exit 0
- Commit
- Critical (always fix): Memory safety, use-after-free, data races, SQL injection
- Important (production): Unwrap panics, resource leaks, overflow checks
- Contextual (judgment): TODO/FIXME, println! debugging
RCH offloads cargo build, cargo test, cargo clippy, and other compilation commands to a fleet of 8 remote Contabo VPS workers instead of building locally. This prevents compilation storms from overwhelming csd when many agents run simultaneously.
RCH is installed at ~/.local/bin/rch and is hooked into Claude Code's PreToolUse automatically. Most of the time you don't need to do anything if you are Claude Code — builds are intercepted and offloaded transparently.
To manually offload a build:

```bash
rch exec -- cargo build --release
rch exec -- cargo test
rch exec -- cargo clippy
```

Quick commands:

```bash
rch doctor               # Health check
rch workers probe --all  # Test connectivity to all 8 workers
rch status               # Overview of current state
rch queue                # See active/waiting builds
```

If rch or its workers are unavailable, it fails open — builds run locally as normal.
Note for Codex/GPT-5.2: Codex does not have the automatic PreToolUse hook, but you can (and should) still manually offload compute-intensive compilation commands using rch exec -- <command>. This avoids local resource contention when multiple agents are building simultaneously.
Use ast-grep when structure matters. It parses code and matches AST nodes, ignoring comments/strings, and can safely rewrite code.
- Refactors/codemods: rename APIs, change import forms
- Policy checks: enforce patterns across a repo
- Editor/automation: LSP mode, `--json` output
Use ripgrep when text is enough. Fastest way to grep literals/regex.
- Recon: find strings, TODOs, log lines, config values
- Pre-filter: narrow candidate files before ast-grep
- Need correctness or applying changes -> `ast-grep`
- Need raw speed or hunting text -> `rg`
- Often combine: `rg` to shortlist files, then `ast-grep` to match/modify
```bash
# Find structured code (ignores comments)
ast-grep run -l Rust -p 'fn $NAME($$$ARGS) -> $RET { $$$BODY }'

# Find all unwrap() calls
ast-grep run -l Rust -p '$EXPR.unwrap()'

# Quick textual hunt
rg -n 'println!' -t rust

# Combine speed + precision
rg -l -t rust 'unwrap\(' | xargs ast-grep run -l Rust -p '$X.unwrap()' --json
```

Use `mcp__morph-mcp__warp_grep` for exploratory "how does X work?" questions. An AI agent expands your query, greps the codebase, reads relevant files, and returns precise line ranges with full context.
Use ripgrep for targeted searches. When you know exactly what you're looking for.
Use ast-grep for structural patterns. When you need AST precision for matching/rewriting.
| Scenario | Tool | Why |
|---|---|---|
| "How does the collector system work?" | `warp_grep` | Exploratory; don't know where to start |
| "Where is the rate limit forecaster?" | `warp_grep` | Need to understand architecture |
| "Find all uses of `VcStore::new`" | `ripgrep` | Targeted literal search |
| "Find files with `println!`" | `ripgrep` | Simple pattern |
| "Replace all `unwrap()` with `expect()`" | `ast-grep` | Structural refactor |
```
mcp__morph-mcp__warp_grep(
  repoPath: "/dp/vibe_cockpit",
  query: "How does the SSH-based remote collection work?"
)
```

Returns structured results with file paths, line ranges, and extracted code snippets.
- Don't use `warp_grep` to find a specific function name -> use `ripgrep`
- Don't use `ripgrep` to understand "how does X work" -> wastes time with manual reads
- Don't use `ripgrep` for codemods -> risks collateral edits
This project uses beads_rust (br) for issue tracking. Issues are stored in .beads/ and tracked in git.
Important: br is non-invasive—it NEVER executes git commands. After br sync --flush-only, you must manually run git add .beads/ && git commit.
```bash
# View issues (launches TUI - avoid in automated sessions)
bv

# CLI commands for agents (use these instead)
br ready                # Show issues ready to work (no blockers)
br list --status=open   # All open issues
br show <id>            # Full issue details with dependencies
br create --title="..." --type=task --priority=2
br update <id> --status=in_progress
br close <id> --reason "Completed"
br close <id1> <id2>    # Close multiple issues at once
br sync --flush-only    # Export to JSONL (NO git operations)
```

- Start: Run `br ready` to find actionable work
- Claim: Use `br update <id> --status=in_progress`
- Work: Implement the task
- Complete: Use `br close <id>`
- Sync: Run `br sync --flush-only` then manually commit
- Dependencies: Issues can block other issues. `br ready` shows only unblocked work.
- Priority: P0=critical, P1=high, P2=medium, P3=low, P4=backlog (use numbers, not words)
- Types: task, bug, feature, epic, question, docs
- Blocking: `br dep add <issue> <depends-on>` to add dependencies
Before ending any session, run this checklist:
```bash
git status            # Check what changed
git add <files>       # Stage code changes
br sync --flush-only  # Export beads to JSONL
git add .beads/       # Stage beads changes
git commit -m "..."   # Commit everything together
git push              # Push to remote
```

- Check `br ready` at session start to find available work
- Update status as you work (in_progress -> closed)
- Create new issues with `br create` when you discover tasks
- Use descriptive titles and set appropriate priority/type
- Always `br sync --flush-only && git add .beads/` before ending session
When ending a work session, you MUST complete ALL steps below.
MANDATORY WORKFLOW:
- File issues for remaining work - Create issues for anything that needs follow-up
- Run quality gates (if code changed) - Tests, linters, builds
- Update issue status - Close finished work, update in-progress items
- Sync beads - `br sync --flush-only` to export to JSONL
- Hand off - Provide context for next session
cass indexes prior agent conversations (Claude Code, Codex, Cursor, Gemini, ChatGPT, etc.) so we can reuse solved problems.
Rules: Never run bare cass (TUI). Always use --robot or --json.
```bash
cass health
cass search "async runtime" --robot --limit 5
cass view /path/to/session.jsonl -n 42 --json
cass expand /path/to/session.jsonl -n 42 -C 3 --json
cass capabilities --json
cass robot-docs guide
```

- Use `--fields minimal` for lean output
- Filter by agent with `--agent`
- Use `--days N` to limit to recent history
stdout is data-only, stderr is diagnostics; exit code 0 means success.
Treat cass as a way to avoid re-solving problems other agents already handled.
Note for Codex/GPT-5.2:
You constantly bother me and stop working with concerned questions that look similar to this:
Unexpected changes (need guidance)
- Working tree still shows edits I did not make in Cargo.toml, Cargo.lock, src/cli/commands/upgrade.rs, src/storage/sqlite.rs, tests/conformance.rs, tests/storage_deps.rs. Please advise whether to keep/commit/revert these before any further work. I did not touch them.
Next steps (pick one)
1. Decide how to handle the unrelated modified files above so we can resume cleanly.
2. Triage beads_rust-orko (clippy/cargo warnings) and beads_rust-ydqr (rustfmt failures).
3. If you want a full suite run later, fix conformance/clippy blockers and re-run cargo test --all.
NEVER EVER DO THAT AGAIN. The answer is literally ALWAYS the same: those are changes created by the dozen or more other agents working on the project at the same time. This is not only a common occurrence, it happens multiple times PER MINUTE. The way to deal with it is simple: you NEVER, under ANY CIRCUMSTANCE, stash, revert, overwrite, or otherwise disturb in ANY way the work of other agents. Just treat those changes identically to changes that you yourself made. Just fool yourself into thinking YOU made the changes and simply don't recall it for some reason.
Also, if I ask you to explicitly use your built-in TODO functionality, don't complain about this and say you need to use beads. You can use built-in TODOs if I tell you specifically to do so. Always comply with such orders.