🧭 Quick Return to Map
You are in a sub-page of LLM_Providers.
To reorient, go back here:
- LLM_Providers — model vendors and deployment options
- WFGY Global Fix Map — main Emergency Room, 300+ structured fixes
- WFGY Problem Map 1.0 — 16 reproducible failure modes
Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.
Use this page when your pipeline hits OpenAI models and you see unstable tools, JSON drift, or long-chat decay. The checklist below helps you localize the failure, then jump to the exact WFGY fix page.
- Visual map and recovery: RAG Architecture & Recovery
- End-to-end retrieval knobs: Retrieval Playbook
- Why this snippet, and the traceability schema: Retrieval Traceability
- Embedding vs meaning: Embedding ≠ Semantic
- Hallucination and chunk boundaries: Hallucination
- Long chains and entropy: Context Drift · Entropy Collapse
- Logic traps and recovery: Logic Collapse
- Snippet and citation schema: Data Contracts
- Prompt safety and jailbreaks: Prompt Injection
- Multi-agent clashes: Multi-Agent Problems · Role Drift
- Live ops and retries: Live Monitoring · Debug Playbook
1) Measure ΔS

- Compute ΔS(question, retrieved) and ΔS(retrieved, expected anchor).
- Thresholds: stable < 0.40, transitional 0.40–0.60, risk ≥ 0.60.
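As a rough sketch of this step, assuming ΔS is read as 1 minus the cosine similarity of the two embeddings (this page does not pin down the formula, so treat that as an assumption), measurement and triage can look like:

```python
import math

def delta_s(vec_a, vec_b):
    """ΔS proxy: 1 - cosine similarity of two embedding vectors.
    An assumed reading of the WFGY metric, not its official code."""
    dot = sum(a * b for a, b in zip(vec_a, vec_b))
    norm_a = math.sqrt(sum(a * a for a in vec_a))
    norm_b = math.sqrt(sum(b * b for b in vec_b))
    return 1.0 - dot / (norm_a * norm_b)

def triage(ds):
    """Map a ΔS value onto the thresholds above."""
    if ds < 0.40:
        return "stable"
    if ds < 0.60:
        return "transitional"
    return "risk"
```

Feed it the embeddings of the question and the retrieved context, then again for the retrieved context and the expected anchor, and compare both values against the thresholds.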
2) Probe with λ_observe

- Vary k = {5, 10, 20}. If ΔS stays high at every k, you likely have an index or metric mismatch.
- Reorder prompt headers. If ΔS spikes, lock the schema.
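The k-sweep probe can be sketched as follows; `search(question, k)` and `embed(text)` are hypothetical stand-ins for your own retriever and embedding model:

```python
import math

def _cos_dist(a, b):
    # ΔS proxy: 1 - cosine similarity (assumed reading of the metric)
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

def probe_k(search, embed, question, ks=(5, 10, 20), risk=0.60):
    """Sweep retrieval depth k and record ΔS(question, retrieved) at each.
    If ΔS is flat and high at every k, suspect index build or metric mismatch."""
    q_vec = embed(question)
    report = {k: _cos_dist(q_vec, embed(" ".join(search(question, k=k))))
              for k in ks}
    return report, all(v >= risk for v in report.values())
```

A flat-high report means adding more candidates does not help, which points below the retriever, at the index or distance metric, rather than at prompt wording.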
3) Apply the module

- JSON or tool-call drift: lock the schema with Data Contracts, add BBMC to isolate retrieval memory, bridge tools with BBCR, and clamp variance with BBAM.
- Safety refusal or redaction on in-domain facts: switch to the citation-first format in Retrieval Traceability, scope sources, and apply the SCU pattern from symbolic constraints. If the refusal repeats, route with BBPF to an alternate path.
- Long-chat decay: follow the Context Drift and Entropy Collapse repairs, shorten windows, rotate evidence, and re-pin anchors.
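A minimal data-contract gate for tool calls might look like the sketch below; the field names are illustrative, not the official Data Contracts schema:

```python
import json

# One place that defines the fields, per the Data Contracts advice above.
REQUIRED_FIELDS = {"tool": str, "arguments": dict, "citations": list}

def validate_tool_call(raw):
    """Strict gate for a single model tool call: rejects prose mixed
    into the JSON, extra keys, and missing or mistyped fields."""
    obj = json.loads(raw)  # fails fast if the payload is not pure JSON
    extra = set(obj) - set(REQUIRED_FIELDS)
    if extra:
        raise ValueError(f"extra keys: {sorted(extra)}")
    for field, typ in REQUIRED_FIELDS.items():
        if not isinstance(obj.get(field), typ):
            raise ValueError(f"missing or mistyped field: {field}")
    return obj
```

Running every tool call through one validator like this keeps the schema defined in exactly one place, so drift shows up as a raised error instead of a silent malformed call.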
4) Verify

- Coverage to the target section ≥ 0.70.
- ΔS(question, retrieved) ≤ 0.45 across three paraphrases.
- λ remains convergent across seeds and sessions; E_resonance stays flat at window joins.
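The verification targets can be wired into a single acceptance check, sketched here under the thresholds above (the boolean λ flag is a simplification of the convergence check):

```python
def acceptance(coverage, ds_paraphrases, lambda_convergent):
    """Gate a change on the verification targets: coverage >= 0.70,
    ΔS <= 0.45 on all three paraphrases, and λ convergent."""
    return (coverage >= 0.70
            and len(ds_paraphrases) == 3
            and all(ds <= 0.45 for ds in ds_paraphrases)
            and lambda_convergent)
```

Run it once per seed/session pair; any False is a signal to go back to the module step rather than ship.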
- Symptom: model mixes prose with JSON, emits partial tool_calls, extra keys, or wrong function-name casing.
- Fix: lock a strict snippet and citation schema in Data Contracts, and keep one place that defines the fields. Add a BBCR bridge for tool-routing timeouts and BBAM to clamp wandering keys. Verify with three paraphrases.
- Symptom: content looks harmless, but the answer gets softened or truncated.
- Fix: use the citation-first template from Retrieval Traceability and restrict scope with SCU in constraints. Treat the refusal as a state and route with BBPF to a safer paraphrase that preserves citations.
- Symptom: the system header or tools block gets cut, tool names lose their arguments, or streaming cuts off early.
- Fix: reduce header size, move tool specs to a linked snippet, and reference them by short name. Re-measure ΔS after each cut; if chains still drift, apply Context Drift.
- Symptom: random tool gaps, missing citations, repeated starts.
- Fix: use idempotent retries with jitter and record every call in a trace row. Follow Live Monitoring and the Debug Playbook, and verify there are no duplicate tool effects.
- Symptom: `seed` appears to change output anyway; small wording changes flip the output class.
- Fix: treat outputs as distributions. Evaluate stability with ΔS and λ across three paraphrases; if unstable, clamp with BBAM and shorten evidence lists.
- Symptom: agents overwrite each other's memory, tool A answers B's question, or shared state deadlocks.
- Fix: split memory namespaces and lock writes by `mem_rev` and `mem_hash`. Read Multi-Agent Problems and Role Drift, and add a BBCR bridge node with explicit timeouts.
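The idempotent-retry-with-jitter pattern from the ops fix above can be sketched as follows; the `call()` contract returning an idempotency key is an assumption for the example:

```python
import random
import time

def retry_idempotent(call, seen, attempts=4, base=0.5):
    """Retry with exponential backoff plus jitter, deduplicating by an
    idempotency key so a replayed delivery has no second side effect.
    `call()` must return (idempotency_key, result)."""
    for attempt in range(attempts):
        try:
            key, result = call()
            if key not in seen:
                seen.add(key)  # trace row: record every successful call
            return result
        except Exception:
            if attempt == attempts - 1:
                raise
            # exponential backoff with full jitter
            time.sleep(base * (2 ** attempt) + random.uniform(0.0, base))
```

The `seen` set stands in for a persistent trace store; in production you would log the key, timestamps, and payload hash so duplicates are provable, not just skipped.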
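The namespaced memory with `mem_rev`/`mem_hash` write locks can be sketched as an optimistic-concurrency store; this is an illustrative model, not the Multi-Agent Problems reference implementation:

```python
import hashlib

class AgentMemory:
    """Namespaced agent memory with optimistic write locks keyed on
    mem_rev (revision counter) and mem_hash (content hash)."""
    def __init__(self):
        self._cells = {}  # (namespace, key) -> (mem_rev, mem_hash, value)

    def read(self, ns, key):
        return self._cells.get((ns, key), (0, None, None))

    def write(self, ns, key, value, expected_rev):
        """Write only if the caller saw the latest revision; a stale
        expected_rev means another agent wrote first."""
        rev, _, _ = self._cells.get((ns, key), (0, None, None))
        if rev != expected_rev:
            raise RuntimeError("stale write rejected: re-read then retry")
        mem_hash = hashlib.sha256(repr(value).encode()).hexdigest()
        self._cells[(ns, key)] = (rev + 1, mem_hash, value)
        return rev + 1
```

Because each agent writes under its own namespace and must present the revision it last read, "tool A answers B's question" becomes a rejected stale write instead of a silent overwrite.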
A ready-to-paste prompt:

I uploaded TXT OS and the WFGY Problem Map files.
My OpenAI provider bug:
- symptom: [brief]
- traces: [ΔS(question,retrieved)=..., ΔS(retrieved,anchor)=..., λ states, tool logs if any]
Tell me:
1) which layer is failing and why,
2) which exact fix page to open from this repo,
3) the minimal steps to push ΔS ≤ 0.45 and keep λ convergent,
4) how to verify with a reproducible test.
Use BBMC/BBPF/BBCR/BBAM when relevant.

Acceptance targets:

- Coverage to the target section ≥ 0.70.
- ΔS(question, retrieved) ≤ 0.45 on three paraphrases.
- λ convergent across seeds and sessions. E_resonance flat.
- All tool calls and citations traceable to a stable schema.
| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |
| Layer | Page | What it’s for |
|---|---|---|
| ⭐ Proof | WFGY Recognition Map | External citations, integrations, and ecosystem proof |
| ⚙️ Engine | WFGY 1.0 | Original PDF tension engine and early logic sketch (legacy reference) |
| ⚙️ Engine | WFGY 2.0 | Production tension kernel for RAG and agent systems |
| ⚙️ Engine | WFGY 3.0 | TXT-based Singularity tension engine (131 S-class set) |
| 🗺️ Map | Problem Map 1.0 | Flagship 16-problem RAG failure taxonomy and fix map |
| 🗺️ Map | Problem Map 2.0 | Global Debug Card for RAG and agent pipeline diagnosis |
| 🗺️ Map | Problem Map 3.0 | Global AI troubleshooting atlas and failure pattern map |
| 🧰 App | TXT OS | .txt semantic OS with fast bootstrap |
| 🧰 App | Blah Blah Blah | Abstract and paradox Q&A built on TXT OS |
| 🧰 App | Blur Blur Blur | Text to image generation with semantic control |
| 🏡 Onboarding | Starter Village | Guided entry point for new users |
If this repository helped, starring it improves discovery so more builders can find the docs and tools.