
CrewAI: Guardrails and Fix Patterns

🧭 Quick Return to Map

You are in a sub-page of Agents & Orchestration. To reorient, return to that hub page.

Think of this page as a desk within a ward. If you need the full triage and all prescriptions, return to the Emergency Room lobby.

Use this page when your orchestration uses CrewAI (agents, tasks, tools, crews, planning) and you see tool loops, wrong snippets, role mixing, or answers that flip between runs. The table maps symptoms to exact WFGY fix pages and gives a minimal recipe you can paste.

Acceptance targets

  • ΔS(question, retrieved) ≤ 0.45
  • Coverage ≥ 0.70 to the intended section or record
  • λ stays convergent across 3 paraphrases and 2 seeds
  • E_resonance stays flat on long windows
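
The ΔS target can be probed with a small sketch. Assume here, purely for illustration, that ΔS is 1 minus the cosine similarity between embedding vectors; the WFGY engine defines its own metric, but the gate logic is the same:

```python
import math

# Hypothetical ΔS proxy: 1 minus the cosine similarity of two embedding
# vectors. Substitute the real WFGY metric in production.
def delta_s(vec_a, vec_b):
    dot = sum(a * b for a, b in zip(vec_a, vec_b))
    norm_a = math.sqrt(sum(a * a for a in vec_a))
    norm_b = math.sqrt(sum(b * b for b in vec_b))
    return 1.0 - dot / (norm_a * norm_b)

# Acceptance gate from the targets above: pass when ΔS(question, retrieved) ≤ 0.45.
def passes_delta_s(question_vec, retrieved_vec, threshold=0.45):
    return delta_s(question_vec, retrieved_vec) <= threshold
```

Identical vectors give ΔS = 0 and pass; orthogonal vectors give ΔS = 1 and fail.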

Open these first


Typical breakpoints and the right fix


Fix in 60 seconds

  1. Measure ΔS
    Compute ΔS(question, retrieved) and ΔS(retrieved, expected anchor).
    Stable < 0.40, transitional 0.40 to 0.60, risk ≥ 0.60.

  2. Probe λ_observe
    Do a k sweep in retrieval and reorder prompt headers. If λ flips, lock the schema and clamp with BBAM.

  3. Apply the module

  4. Verify
    Coverage ≥ 0.70. ΔS ≤ 0.45. Three paraphrases and two seeds with λ convergent.
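
The paraphrase-and-seed verification above can be sketched as follows. `run_pipeline` is a hypothetical callable wrapping your crew, and treating "convergent λ" as "all normalized answers agree" is an assumption for illustration, not the WFGY definition:

```python
def lambda_convergent(run_pipeline, paraphrases, seeds):
    # Run every paraphrase under every seed and collect normalized answers.
    answers = {
        run_pipeline(question, seed=seed).strip().lower()
        for question in paraphrases
        for seed in seeds
    }
    # Convergent when all 3 x 2 runs agree on a single answer.
    return len(answers) == 1
```

A pipeline that flips its answer across seeds fails this probe even if each individual run looks plausible.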

Minimal CrewAI pattern with WFGY checks

```python
# Pseudocode: highlights the control points only.
# `retriever`, `prompt`, and `metrics_and_trace` are placeholders for your
# own retrieval client, prompt template, and WFGY metric helper.
from crewai import Agent, Task, Crew

def retrieve_snippets(q):
    # unified analyzer and metric across dense and sparse retrieval
    return retriever.search(q, k=10)

def assemble_prompt(context, q):
    # schema-locked prompt with cite-first ordering
    return prompt.format(context=context, question=q)

def wfgy_checks(q, context, answer):
    # compute ΔS(question, context) and enforce thresholds;
    # record snippet_id, section_id, source_url, offsets, tokens
    metrics = metrics_and_trace(q, context, answer)
    if metrics["risk"]:
        raise RuntimeError("WFGY gate: high ΔS or divergent λ")
    return metrics

researcher = Agent(
    role="retrieval",
    goal="fetch auditable snippets with fields locked",
    backstory="RAG specialist who always cites first",
)

writer = Agent(
    role="reasoning",
    goal="answer with cite-then-explain using the snippet schema",
    backstory="keeps λ convergent and avoids cross-section reuse",
)

task_retrieve = Task(
    description="Retrieve k=10 with unified analyzer, return snippet schema",
    agent=researcher,
    expected_output="list of snippets with {snippet_id, section_id, source_url, offsets, tokens}",
)

task_answer = Task(
    description="Assemble cite-first prompt and answer with strict JSON",
    agent=writer,
    expected_output="{citations:[...], answer:'...'}",
)

crew = Crew(agents=[researcher, writer], tasks=[task_retrieve, task_answer])

def run(question):
    context = retrieve_snippets(question)
    msg = assemble_prompt(context, question)
    answer = crew.kickoff(inputs={"msg": msg})
    metrics = wfgy_checks(question, context, answer)
    return {"answer": answer, "metrics": metrics}
```
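
For completeness, here is one possible shape for the `metrics_and_trace` helper the gate calls. The ΔS hook and risk rule here are assumptions for illustration; wire in the real WFGY metric in practice:

```python
def metrics_and_trace(question, context, answer, delta_s_fn=None, threshold=0.45):
    # Preserve the snippet-to-citation audit trail required by the schema.
    fields = ("snippet_id", "section_id", "source_url", "offsets", "tokens")
    trace = [{key: snippet.get(key) for key in fields} for snippet in context]
    # delta_s_fn is a placeholder hook; 0.0 here only keeps the sketch runnable.
    ds = delta_s_fn(question, context) if delta_s_fn else 0.0
    return {"delta_s": ds, "risk": ds > threshold, "trace": trace}
```

Any ΔS above the 0.45 threshold flips `risk` to true, which makes `wfgy_checks` halt the run.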

What this enforces

  • Retrieval is observable and parameterized. Analyzer and metric are unified.
  • Prompt is schema-locked with cite-first ordering and strict JSON for tool outputs.
  • A post-generation WFGY gate can halt the run when ΔS is high or λ flips.
  • Traces record snippet-to-citation mapping for audits.
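
The strict-JSON contract on the writer's output can be enforced with a small validator. The field names follow the `{citations, answer}` schema used in `task_answer`; the rejection messages are illustrative:

```python
import json

def validate_writer_output(raw):
    # Reject anything that is not parseable JSON carrying cite-first fields.
    payload = json.loads(raw)
    citations = payload.get("citations")
    if not isinstance(citations, list) or not citations:
        raise ValueError("cite-first violated: citations missing or empty")
    answer = payload.get("answer")
    if not isinstance(answer, str) or not answer:
        raise ValueError("answer missing or empty")
    return payload
```

Running this before the WFGY gate catches schema drift early, before ΔS is even computed.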

Specs and recipes: RAG Architecture & Recovery · Retrieval Playbook · Retrieval Traceability · Data Contracts


CrewAI specific gotchas

  • Mixed embedding functions across write and read. Rebuild with an explicit metric and normalization. See Embedding ≠ Semantic.

  • Planner emits tool prompts that bypass the schema. Always validate tool arguments and echo the schema at every step. See Prompt Injection.

  • Memory overwrite between agents. Stamp mem_rev and mem_hash, and split namespaces by agent role. See role drift · memory desync.

  • Event storms when multiple tasks write to the same index or KV. Add idempotency keys on {source_id, mem_rev, index_hash}. See Retrieval Traceability.

  • Long runs degrade style and flip answers. Split the plan, then re-join with a BBCR bridge and clamp with BBAM. See Context Drift.
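
The idempotency key from the event-storm bullet can be derived deterministically. The hash choice and field order here are assumptions; any stable digest over the three fields works:

```python
import hashlib

def idempotency_key(source_id, mem_rev, index_hash):
    # The same {source_id, mem_rev, index_hash} always yields the same key,
    # so duplicate writes can be detected and dropped at the index or KV.
    payload = f"{source_id}:{mem_rev}:{index_hash}"
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()
```

A consumer that records seen keys can then safely ignore replays without comparing full payloads.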


When to escalate

  • ΔS remains ≥ 0.60. Rebuild the index using the checklists and verify with a small gold set. See Retrieval Playbook.

  • Identical input yields different answers across runs. Check version skew and session state. See Pre-Deploy Collapse.


🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
| --- | --- | --- |
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |

Explore More

| Layer | Page | What it’s for |
| --- | --- | --- |
| ⭐ Proof | WFGY Recognition Map | External citations, integrations, and ecosystem proof |
| ⚙️ Engine | WFGY 1.0 | Original PDF tension engine and early logic sketch (legacy reference) |
| ⚙️ Engine | WFGY 2.0 | Production tension kernel for RAG and agent systems |
| ⚙️ Engine | WFGY 3.0 | TXT-based Singularity tension engine (131 S-class set) |
| 🗺️ Map | Problem Map 1.0 | Flagship 16-problem RAG failure taxonomy and fix map |
| 🗺️ Map | Problem Map 2.0 | Global Debug Card for RAG and agent pipeline diagnosis |
| 🗺️ Map | Problem Map 3.0 | Global AI troubleshooting atlas and failure-pattern map |
| 🧰 App | TXT OS | .txt semantic OS with fast bootstrap |
| 🧰 App | Blah Blah Blah | Abstract and paradox Q&A built on TXT OS |
| 🧰 App | Blur Blur Blur | Text-to-image generation with semantic control |
| 🏡 Onboarding | Starter Village | Guided entry point for new users |

If this repository helped, starring it improves discovery so more builders can find the docs and tools.