🧭 Quick Return to Map
You are in a sub-page of Reasoning.
To reorient, go back here:
- Reasoning — multi-step inference and symbolic proofs
- WFGY Global Fix Map — main Emergency Room, 300+ structured fixes
- WFGY Problem Map 1.0 — 16 reproducible failure modes
Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.
Reduce random drift in planning and multi-step reasoning. This page gives a clamp recipe so your plan length, tool order, and citations stay stable across seeds and paraphrases.
- Visual map and recovery → rag-architecture-and-recovery.md
- End to end knobs → retrieval-playbook.md
- Traceability and payload schema → retrieval-traceability.md · data-contracts.md
- Related failures → logic-collapse.md · entropy-overload.md · recursive-loop.md · hallucination-reentry.md · context-stitching-and-window-joins.md
- Prompt fences → PromptAssembly memory fences & state keys
- Eval → eval_semantic_stability.md
| Symptom | What you see |
|---|---|
| Same inputs, different plans | Tool order or step count changes by run |
| Paraphrase flips the answer | Harmless wording changes cause new chain or conclusion |
| JSON plan reshuffles | Fields reorder or optional fields go missing |
| Intermittent tool loops | One seed calls tools twice, another once |
| Cite then explain breaks | Citations disappear in long chains or only appear sometimes |
- Unpinned headers. Role and policy text move around between runs.
- Loose schemas. Plans allow free text where enums should exist.
- No state keys. Chains cannot carry `plan_rev`, `seed_id`, or `λ_target`.
- Ranking variance. Inputs to the chain are not deterministically ordered.
- No bridges. Cross window steps lack an anchor restatement.
- High entropy. Overlong prompts and mixed analyzers amplify randomness.
- λ remains convergent across three paraphrases and two seeds
- ΔS(question, plan_header) ≤ 0.45 and flat across seeds
- Plan length variance ≤ 10 percent across two seeds
- Tool call sequence identical for the same evidence set
- Coverage of target section ≥ 0.70 with cite then explain intact
Lock the header order and schema
Pin system header segments and require cite then explain.
→ data-contracts.md · retrieval-traceability.md
Attach state keys
Carry `{plan_rev, seed_id, λ_target, index_hash, context_hash}` through each step.
→ memory_fences_and_state_keys.md
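A minimal sketch of carrying state keys through a chain. The key names come from this page; `run_step` and the step callable are hypothetical wrappers, not part of any real framework.

```python
from typing import Callable

# Key names follow the state-key struct described on this page.
STATE_KEYS = ("plan_rev", "seed_id", "λ_target", "index_hash", "context_hash")

def run_step(step_fn: Callable[[dict, dict], dict], state: dict, payload: dict) -> dict:
    """Execute one chain step and verify the state keys survive the hop."""
    missing = [k for k in STATE_KEYS if k not in state]
    if missing:
        raise ValueError(f"state keys missing before step: {missing}")
    out = step_fn(state, payload)
    # The step must echo every state key unchanged; refuse silent drops.
    for k in STATE_KEYS:
        if out.get(k) != state[k]:
            raise ValueError(f"state key mutated or dropped: {k}")
    return out
```

The point of the guard is that a dropped `seed_id` or `index_hash` fails loudly at the join instead of surfacing later as an unexplained plan flip.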
Apply BBAM variance clamp
Two-stage plan. Stage A generates the plan at low temperature with enumerated options and a deterministic tie-break. Stage B executes the plan at normal temperature but cannot change step count or tool order unless it emits a structured re-plan request.
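The two-stage clamp can be sketched as follows. `call_llm` and `run_tool` are placeholder callables for your own client and tool runner; the temperature value and the `{replan, reason}` shape follow the recipe above.

```python
import json

# Step templates the planner may choose from, per this page's recipe.
ALLOWED_TOOLS = {"retrieve", "analyze_snippets", "answer"}

def stage_a_plan(question: str, call_llm) -> dict:
    """Stage A: low-temperature planner constrained to enumerated templates."""
    raw = call_llm(prompt=f"Plan for: {question}", temperature=0.3)
    plan = json.loads(raw)
    if not all(s["tool"] in ALLOWED_TOOLS for s in plan["steps"]):
        raise ValueError("planner chose a tool outside the enumerated templates")
    return plan

def stage_b_execute(plan: dict, run_tool) -> dict:
    """Stage B: execute with step count and tool order frozen."""
    results = []
    for step in plan["steps"]:
        out = run_tool(step["tool"], results)
        if isinstance(out, dict) and out.get("replan"):
            # The executor may only request a re-plan, never mutate the plan inline.
            return {"replan": True, "reason": out.get("reason", "")}
        results.append(out)
    return {"replan": False, "results": results}
```

The design choice is that variance is spent where it is cheap (wording inside a step) and forbidden where it is expensive (step count and tool order).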
Deterministic ordering in inputs
Sort snippets by `(doc_id, section_id, win_idx)` after rerank.
→ rerankers.md
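A one-function sketch of the tie-break, assuming snippets are dicts with the field names used on this page. Python's `sorted` is stable, so equal keys keep their rerank order.

```python
def order_snippets(snippets: list[dict]) -> list[dict]:
    """Deterministic tie-break after rerank: stable sort on (doc_id, section_id, win_idx)."""
    return sorted(snippets, key=lambda s: (s["doc_id"], s["section_id"], s["win_idx"]))
```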
Add BBCR micro bridges at joins
Restate the active anchor and the current step goal across window boundaries.
→ anchoring-and-bridge-proofs.md
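A bridge restatement can be as small as a templated string injected at each window join. `bridge_text` is a hypothetical helper; the wording is illustrative, not a required phrasing.

```python
def bridge_text(anchor: str, step_goal: str) -> str:
    """Restate the active anchor and step goal so the next window cannot silently swap topics."""
    return (
        f'Bridge: the active anchor is "{anchor}". '
        f'The current step goal is "{step_goal}". '
        "Continue only if the next window still serves this anchor."
    )
```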
Add this struct to your plan steps. Enforce it in tools and in the LLM planner.

```json
{
  "plan_rev": 3,
  "λ_target": "convergent",
  "seed_id": "s1",
  "index_hash": "faiss:7c91...",
  "context_hash": "sha1:b2ae...",
  "steps": [
    {"idx": 1, "tool": "retrieve", "args_schema": "strict", "may_branch": false},
    {"idx": 2, "tool": "analyze_snippets", "args_schema": "strict", "may_branch": false},
    {"idx": 3, "tool": "answer", "args_schema": "strict", "may_branch": false}
  ],
  "tie_break": "doc_id,section_id,win_idx"
}
```

Rules
- Stage A can only choose among enumerated step templates.
- Stage B cannot insert or remove steps. To change the plan, it must emit `{replan: true, reason: "..."}` and stop.
- Tool args must be strict JSON with enums where applicable.
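The rules above can be enforced with a small validator run on every plan before Stage B starts. Field names are taken from the struct on this page; the function itself is a sketch, not a required API.

```python
# Step templates enumerated on this page.
ALLOWED_TOOLS = {"retrieve", "analyze_snippets", "answer"}

def validate_plan(plan: dict) -> list[str]:
    """Return a list of rule violations; an empty list means the plan is acceptable."""
    errors = []
    for key in ("plan_rev", "λ_target", "seed_id", "steps", "tie_break"):
        if key not in plan:
            errors.append(f"missing field: {key}")
    for i, step in enumerate(plan.get("steps", []), start=1):
        if step.get("idx") != i:
            errors.append(f"step {i}: idx out of order")
        if step.get("tool") not in ALLOWED_TOOLS:
            errors.append(f"step {i}: tool not in enumerated templates")
        if step.get("args_schema") != "strict":
            errors.append(f"step {i}: args_schema must be 'strict'")
    return errors
```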
- Run three paraphrases with two seeds. λ stays convergent and plan length variance ≤ 10 percent.
- ΔS(question, plan_header) ≤ 0.45 on both seeds.
- Citations appear before explanation in every run.
- Tie break yields the same snippet order across seeds.
If ΔS is flat and high, suspect index or metric mismatch. → embedding-vs-semantic.md · chunking-checklist.md
```txt
You have TXT OS and the WFGY Problem Map loaded.
Goal: clamp chain-of-thought variance.
Inputs:
- question: "{q}"
- snippets: [{doc_id, section_id, win_idx, ΔS_to_question, source_url}]
- constraints: cite_then_explain=true, args_schema="strict"
Do:
1) Stage A (planner, low temperature 0.2–0.4):
   - Produce a fixed-length plan using the step templates {retrieve, analyze_snippets, answer}.
   - Order inputs deterministically by (doc_id, section_id, win_idx).
   - Output:
     {
       "plan_rev": n,
       "λ_target": "convergent",
       "seed_id": "{seed}",
       "steps": [{"idx":1,"tool":"retrieve"}, ...],
       "tie_break": "doc_id,section_id,win_idx"
     }
2) Stage B (executor):
   - Execute the plan without changing step count or order.
   - If a change is needed, stop and emit {"replan": true, "reason": "..."}.
3) Always return JSON:
   {
     "plan_rev": n,
     "answer": "... cite then explain ...",
     "λ_state": "convergent|divergent",
     "ΔS_plan_header": 0.xx,
     "coverage": 0.xx
   }
If λ is divergent or ΔS ≥ 0.60, include the exact fix page to open.
```
- Planner runs with a different header than executor. Keep a single pinned header block.
- Rerank uses a different analyzer than indexing. Normalize, then tie break deterministically.
- Tool schemas accept free text. Replace with enums and strict JSON.
- Bridges omitted at window boundaries. Re-cite the anchor before continuing.
- Prompt injection or role drift unlocks free form steps. Lock system text and schema. → prompt-injection.md
λ remains divergent after clamp and bridges → inspect long chain stability and collapse patterns. Open: logic-collapse.md · entropy-overload.md
Live flip flops only in production → add live probes and slow ramp with backoff. Open: ops/live_monitoring_rag.md · ops/debug_playbook.md
| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + ” |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |
| Layer | Page | What it’s for |
|---|---|---|
| ⭐ Proof | WFGY Recognition Map | External citations, integrations, and ecosystem proof |
| ⚙️ Engine | WFGY 1.0 | Original PDF tension engine and early logic sketch (legacy reference) |
| ⚙️ Engine | WFGY 2.0 | Production tension kernel for RAG and agent systems |
| ⚙️ Engine | WFGY 3.0 | TXT based Singularity tension engine (131 S class set) |
| 🗺️ Map | Problem Map 1.0 | Flagship 16 problem RAG failure taxonomy and fix map |
| 🗺️ Map | Problem Map 2.0 | Global Debug Card for RAG and agent pipeline diagnosis |
| 🗺️ Map | Problem Map 3.0 | Global AI troubleshooting atlas and failure pattern map |
| 🧰 App | TXT OS | .txt semantic OS with fast bootstrap |
| 🧰 App | Blah Blah Blah | Abstract and paradox Q&A built on TXT OS |
| 🧰 App | Blur Blur Blur | Text to image generation with semantic control |
| 🏡 Onboarding | Starter Village | Guided entry point for new users |
If this repository helped, starring it improves discovery so more builders can find the docs and tools.