🧭 Quick Return to Map
You are in a sub-page of Agents & Orchestration.
To reorient, go back here:
- Agents & Orchestration — orchestration frameworks and guardrails
- WFGY Global Fix Map — main Emergency Room, 300+ structured fixes
- WFGY Problem Map 1.0 — 16 reproducible failure modes
Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.
Use this page when your orchestration uses CrewAI (agents, tasks, tools, crews, planning) and you see tool loops, wrong snippets, role mixing, or answers that flip between runs. The table maps symptoms to exact WFGY fix pages and gives a minimal recipe you can paste.
Acceptance targets
- ΔS(question, retrieved) ≤ 0.45
- Coverage ≥ 0.70 to the intended section or record
- λ stays convergent across 3 paraphrases and 2 seeds
- E_resonance stays flat on long windows
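The exact definition of ΔS comes from the WFGY engine; as a minimal stand-in, a sketch that treats ΔS as 1 minus cosine similarity over embedding vectors, plus the acceptance gate from the targets above (`delta_s` and `meets_targets` are illustrative names, not part of WFGY):

```python
import math

def delta_s(a, b):
    # ΔS stand-in: 1 minus cosine similarity of two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def meets_targets(ds_question_retrieved, coverage):
    # acceptance gate per the targets above: ΔS ≤ 0.45 and coverage ≥ 0.70
    return ds_question_retrieved <= 0.45 and coverage >= 0.70
```

Swap in whatever semantic distance your stack already computes; the gate logic stays the same.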
| What you need | Open this |
|---|---|
| Visual map and recovery | RAG Architecture & Recovery |
| End to end retrieval knobs | Retrieval Playbook |
| Why this snippet | Retrieval Traceability |
| Ordering control | Rerankers |
| Embedding vs meaning | Embedding ≠ Semantic |
| Hallucination and chunk edges | Hallucination |
| Long chains and entropy | Context Drift · Entropy Collapse |
| Structural collapse and recovery | Logic Collapse |
| Prompt injection and schema locks | Prompt Injection |
| Multi agent conflicts | Multi-Agent Problems |
| Bootstrap and deployment ordering | Bootstrap Ordering · Deployment Deadlock · Pre-Deploy Collapse |
| Snippet and citation schema | Data Contracts |
| Symptom | Likely cause or fix | Open |
|---|---|---|
| Agent-to-agent handoff loops or stalls | Add BBCR bridge steps, set explicit timeouts, log λ per hop, clamp variance with BBAM. | Logic Collapse · Multi-Agent Problems |
| High similarity yet wrong meaning | Mixed write and read embeddings, metric mismatch, or fragmented stores. | Embedding ≠ Semantic · Vectorstore Fragmentation |
| Hybrid retrieval performs worse than a single retriever | Two-stage query drift, mis-weighted rerank, inconsistent analyzer. | Query Parsing Split · Rerankers |
| Citations missing or inconsistent across agents | Require cite-then-explain and lock snippet fields at the task boundary. | Retrieval Traceability · Data Contracts |
| Planner injects unsafe tool prompts | Freeze tool schemas and validate arguments before execution. | Prompt Injection |
| Long runs flatten style and drift logically | Split tasks, re-join with BBCR, measure entropy and stop when it rises. | Context Drift · Entropy Collapse |
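The first row's fix (explicit timeouts plus a hop cap with per-hop logging) can be sketched as a small guard around the handoff chain. This is an illustrative pattern, not a CrewAI API; `steps` is any list of callables and the trace entry is where a λ reading would be recorded:

```python
import time

def guarded_handoff(steps, max_hops=6, timeout_s=30.0):
    # Guard agent-to-agent handoffs: explicit hop cap plus wall-clock timeout,
    # with a per-hop trace so loops and stalls are visible in logs.
    start = time.monotonic()
    trace = []
    state = None
    for hop, step in enumerate(steps):
        if hop >= max_hops:
            raise RuntimeError("handoff loop: hop cap reached")
        if time.monotonic() - start > timeout_s:
            raise RuntimeError("handoff stalled: timeout")
        state = step(state)
        trace.append({"hop": hop, "state": state})  # λ per hop would be logged here
    return state, trace
```

Raising instead of silently retrying is the point: a capped loop surfaces as a reproducible error you can map to the Logic Collapse and Multi-Agent Problems pages.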
1. Measure ΔS. Compute ΔS(question, retrieved) and ΔS(retrieved, expected anchor). Stable < 0.40, transitional 0.40 to 0.60, risk ≥ 0.60.
2. Probe λ_observe. Do a k sweep in retrieval and reorder prompt headers. If λ flips, lock the schema and clamp with BBAM.
3. Apply the module.
   - Retrieval drift → BBMC plus Data Contracts
   - Reasoning collapse → BBCR bridge plus BBAM, verify with Logic Collapse
   - Hallucination re-entry after correction → Pattern: Hallucination Re-entry
4. Verify. Coverage ≥ 0.70, ΔS ≤ 0.45, three paraphrases and two seeds with λ convergent.
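The verify step's "three paraphrases and two seeds with λ convergent" check can be sketched as a simple agreement probe. `run_fn` is a hypothetical callable wrapping your pipeline (question, seed → answer); real λ tracking is richer than exact-match, so treat this as a floor:

```python
def lambda_convergent(run_fn, paraphrases, seeds):
    # λ probe sketch: answers must agree (after normalization) across
    # every paraphrase and every seed to count as convergent
    answers = {run_fn(q, seed).strip().lower() for q in paraphrases for seed in seeds}
    return len(answers) == 1
```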
```python
# Pseudocode: highlight the control points only
from crewai import Agent, Task, Crew

def retrieve_snippets(q):
    # unified analyzer and metric across dense and sparse
    return retriever.search(q, k=10)

def assemble_prompt(context, q):
    # schema-locked prompt with cite first
    return prompt.format(context=context, question=q)

def wfgy_checks(q, context, answer):
    # compute ΔS(question, context) and enforce thresholds
    # record snippet_id, section_id, source_url, offsets, tokens
    metrics = metrics_and_trace(q, context, answer)
    if metrics["risk"]:
        raise RuntimeError("WFGY gate: high ΔS or divergent λ")
    return metrics

researcher = Agent(
    role="retrieval",
    goal="fetch auditable snippets with fields locked",
    backstory="RAG specialist who always cites first",
)
writer = Agent(
    role="reasoning",
    goal="answer with cite then explain using the snippet schema",
    backstory="keeps λ convergent and avoids cross section reuse",
)

task_retrieve = Task(
    description="Retrieve k=10 with unified analyzer, return snippet schema",
    agent=researcher,
    expected_output="list of snippets with {snippet_id, section_id, source_url, offsets, tokens}",
)
task_answer = Task(
    description="Assemble cite-first prompt and answer with strict JSON",
    agent=writer,
    expected_output="{citations:[...], answer:'...'}",
)

crew = Crew(agents=[researcher, writer], tasks=[task_retrieve, task_answer])

def run(question):
    context = retrieve_snippets(question)
    msg = assemble_prompt(context, question)
    answer = crew.kickoff(inputs={"msg": msg})
    metrics = wfgy_checks(question, context, answer)
    return {"answer": answer, "metrics": metrics}
```

What this enforces
- Retrieval is observable and parameterized. Analyzer and metric are unified.
- Prompt is schema locked with cite first and strict JSON for tool outputs.
- A post generation WFGY gate can halt the run when ΔS is high or λ flips.
- Traces record snippet to citation mapping for audits.
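The `metrics_and_trace` helper in the pseudocode is left abstract. A minimal sketch, assuming a ΔS function is passed in and snippets follow the page's field schema (the helper's shape here is an assumption, not a fixed WFGY API):

```python
def metrics_and_trace(q, context, answer, delta_s_fn, threshold=0.60):
    # Sketch: the best (lowest) ΔS across retrieved snippets drives the risk
    # flag; the trace keeps the snippet-to-citation mapping for audits.
    trace = [
        {k: snip[k] for k in ("snippet_id", "section_id", "source_url")}
        for snip in context
    ]
    best = min((delta_s_fn(q, s["text"]) for s in context), default=1.0)
    return {"delta_s": best, "risk": best >= threshold, "trace": trace}
```

An empty `context` defaults ΔS to 1.0, so a retrieval that returns nothing trips the gate instead of passing silently.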
Specs and recipes: RAG Architecture & Recovery · Retrieval Playbook · Retrieval Traceability · Data Contracts
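Locking snippet fields at the task boundary (the Data Contracts idea) amounts to rejecting any record that is missing a required field. A minimal sketch using the field names from this page; the validator itself is illustrative:

```python
REQUIRED_FIELDS = ("snippet_id", "section_id", "source_url", "offsets", "tokens")

def validate_snippet(snip):
    # Enforce the snippet schema at the task boundary; partial records
    # are rejected rather than silently passed downstream.
    missing = [f for f in REQUIRED_FIELDS if f not in snip]
    if missing:
        raise ValueError(f"data contract violation, missing: {missing}")
    return snip
```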
- Mixed embedding functions across write and read. Rebuild with explicit metric and normalization. See Embedding ≠ Semantic.
- Planner emits tool prompts that bypass the schema. Always validate tool arguments and echo the schema at every step. See Prompt Injection.
- Memory overwrite between agents. Stamp `mem_rev` and `mem_hash`, and split namespaces by agent role. See role drift · memory desync.
- Event storms when multiple tasks write to the same index or KV. Add idempotency keys on `{source_id, mem_rev, index_hash}`. See Retrieval Traceability.
- Long runs degrade style and flip answers. Split the plan, then re-join with a BBCR bridge and clamp with BBAM. See Context Drift.
- ΔS remains ≥ 0.60. Rebuild the index using the checklists and verify with a small gold set. See Retrieval Playbook.
- Identical input yields different answers across runs. Check version skew and session state. See Pre-Deploy Collapse.
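The idempotency key from the event-storm item can be sketched as a stable hash over the three fields named above, so a repeated write with the same identity deduplicates instead of storming the index:

```python
import hashlib
import json

def idempotency_key(source_id, mem_rev, index_hash):
    # Stable key over {source_id, mem_rev, index_hash}: identical writes
    # produce identical keys, so the consumer can drop duplicates.
    payload = json.dumps(
        {"source_id": source_id, "mem_rev": mem_rev, "index_hash": index_hash},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()
```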
| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |
| Layer | Page | What it’s for |
|---|---|---|
| ⭐ Proof | WFGY Recognition Map | External citations, integrations, and ecosystem proof |
| ⚙️ Engine | WFGY 1.0 | Original PDF tension engine and early logic sketch (legacy reference) |
| ⚙️ Engine | WFGY 2.0 | Production tension kernel for RAG and agent systems |
| ⚙️ Engine | WFGY 3.0 | TXT based Singularity tension engine (131 S class set) |
| 🗺️ Map | Problem Map 1.0 | Flagship 16 problem RAG failure taxonomy and fix map |
| 🗺️ Map | Problem Map 2.0 | Global Debug Card for RAG and agent pipeline diagnosis |
| 🗺️ Map | Problem Map 3.0 | Global AI troubleshooting atlas and failure pattern map |
| 🧰 App | TXT OS | .txt semantic OS with fast bootstrap |
| 🧰 App | Blah Blah Blah | Abstract and paradox Q&A built on TXT OS |
| 🧰 App | Blur Blur Blur | Text to image generation with semantic control |
| 🏡 Onboarding | Starter Village | Guided entry point for new users |
If this repository helped, starring it improves discovery so more builders can find the docs and tools.