🧭 Quick Return to Map
You are in a sub-page of LLM_Providers.
To reorient, go back here:
- LLM_Providers — model vendors and deployment options
- WFGY Global Fix Map — main Emergency Room, 300+ structured fixes
- WFGY Problem Map 1.0 — 16 reproducible failure modes
Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.
Use this page when failures look provider-specific on DeepSeek models. Examples include JSON tool-call drift, unexpected safety blocks, long reasoning preambles that leak into the final channel, or unstable answers across seeds. Each fix maps to WFGY pages so you can verify against measurable targets.
Core acceptance
- ΔS(question, retrieved) ≤ 0.45
- Coverage ≥ 0.70 for the target section
- λ remains convergent across 3 paraphrases
Open these pages first:
- Visual map and recovery: RAG Architecture & Recovery
- End-to-end retrieval knobs: Retrieval Playbook
- Why this snippet, schema for traceability: Retrieval Traceability
- Ordering control: Rerankers
- Embedding vs meaning: Embedding ≠ Semantic
- Hallucination and chunk boundaries: Hallucination
- Long chains and entropy: Context Drift, Entropy Collapse
- Structural collapse and recovery: Logic Collapse
- Snippet and citation schema: Data Contracts
- Live ops and debug: Live Monitoring, Debug Playbook
Then run the quick fix in three steps:
- Measure ΔS (a measurement sketch follows this list)
  - Compute ΔS(question, retrieved) and ΔS(retrieved, expected anchor).
  - Thresholds: stable < 0.40, transitional 0.40–0.60, risk ≥ 0.60.
- Probe with λ_observe
  - Vary k ∈ {5, 10, 20}. A flat, high curve suggests an index or metric mismatch.
  - Reorder prompt headers. If ΔS spikes, lock the schema.
- Apply the module
  - Retrieval drift → BBMC + Data Contracts.
  - Reasoning collapse → BBCR bridge + BBAM variance clamp.
  - Dead ends in long runs → BBPF alternate path.
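A minimal measurement sketch in Python, assuming ΔS is operationalized as 1 minus cosine similarity between embedding vectors, which matches the 0 to 1 thresholds above. `embed` and `retrieve` are placeholders for your own embedding call and retriever, not part of WFGY; concatenating snippets before embedding is a simplification.

```python
import numpy as np

def delta_s(u: np.ndarray, v: np.ndarray) -> float:
    """Semantic stress as 1 minus cosine similarity between two embedding vectors."""
    return 1.0 - float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def band(ds: float) -> str:
    """Map a ΔS value onto the thresholds above."""
    if ds < 0.40:
        return "stable"
    if ds < 0.60:
        return "transitional"
    return "risk"

def probe_k_sweep(embed, retrieve, question: str, ks=(5, 10, 20)) -> dict:
    """For each k, embed the concatenated top-k snippets and report ΔS(question, retrieved).
    A curve that stays high for every k points at an index or metric mismatch, not at the prompt."""
    q_vec = embed(question)
    report = {}
    for k in ks:
        snippets = retrieve(question, k)            # your retriever, returns list[str]
        ctx_vec = embed(" ".join(snippets))
        ds = delta_s(q_vec, ctx_vec)
        report[k] = (round(ds, 3), band(ds))
    return report
```

If every k lands in the risk band, treat it as the flat-high case from the probe step and check the index and metric before editing prompts.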
Typical breakpoints follow, each with symptoms, the likely cause, and the fix.

JSON tool-call drift
Symptoms: function arguments renamed, nulls where objects are expected, wrong tool order.
Why: provider-side decoding or safety rewrite, plus a schema that is not anchored.
Do this:
- Lock a strict IO header and cite the schema: Data Contracts (a validation sketch follows this list)
- Add trace tags in the prompt then verify: Retrieval Traceability
- If agents are orchestrating, isolate boundaries: see Agent Boundary Design and Consensus
↳ agent-boundary-design.md, agent-consensus-protocols.md
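One way to anchor the contract is to validate every model-emitted tool call before executing it. A minimal sketch using the `jsonschema` package; the tool name `search_docs` and its argument keys are hypothetical examples, not contracts from this repo.

```python
import json
from jsonschema import validate, ValidationError  # pip install jsonschema

# Hypothetical contract for one tool. Mirror whatever your Data Contract actually locks.
SEARCH_TOOL_SCHEMA = {
    "type": "object",
    "properties": {
        "name": {"const": "search_docs"},
        "arguments": {
            "type": "object",
            "properties": {
                "query": {"type": "string"},
                "top_k": {"type": "integer", "minimum": 1},
            },
            "required": ["query", "top_k"],
            "additionalProperties": False,   # renamed arguments fail here instead of passing silently
        },
    },
    "required": ["name", "arguments"],
}

def accept_tool_call(raw: str) -> dict:
    """Parse and validate a model-emitted tool call against the locked contract.
    Reject drift (renamed keys, nulls, missing fields) rather than repairing it in place."""
    call = json.loads(raw)
    validate(instance=call, schema=SEARCH_TOOL_SCHEMA)
    return call

try:
    accept_tool_call('{"name": "search_docs", "arguments": {"q": "json drift", "top_k": 5}}')
except ValidationError as err:
    print("tool-call drift detected:", err.message)   # "q" was renamed from "query"
```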
Reasoning preamble leaks into the final channel
Symptoms: long internal thoughts appear as user-visible output or consume the budget.
Why: channel separation is not fixed in the contract, so the model routes everything to a single stream.
Do this:
- Split channels in the schema and clamp with BBAM after BBCR: Logic Collapse, Data Contracts (a channel-split sketch follows this list)
- Add acceptance probes in live runs: Live Monitoring
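A minimal channel-split sketch, assuming your Data Contract asks the model to return a JSON object with separate `reasoning` and `final` fields; the field names are assumptions, use whatever your contract specifies.

```python
import json

def split_channels(raw: str) -> tuple[str, str]:
    """Separate the reasoning channel from the user-visible answer.
    Fail loudly when the model collapses both into one stream instead of leaking the preamble."""
    obj = json.loads(raw)
    reasoning = obj.get("reasoning", "")
    final = obj.get("final")
    if not isinstance(final, str) or not final.strip():
        raise ValueError("final channel missing: response collapsed into a single stream")
    return reasoning, final

reasoning, final = split_channels(
    '{"reasoning": "compare both clauses, then pick the later date", '
    '"final": "The later date applies."}'
)
print(final)        # only this goes to the user
# keep `reasoning` in internal traces; never route it to the final channel
```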
Retrieval cites the wrong slice
Symptoms: top-k looks relevant but the answer cites the wrong slice.
Why: embedding metric vs semantics mismatch, or chunk-boundary bleed.
Do this:
- Compare ΔS(question, retrieved) vs ΔS(retrieved, anchor). If both stay flat and high, swap the metric or rebuild the index: Embedding ≠ Semantic (a triage sketch follows this list)
- Re-chunk and re-anchor citations: Hallucination
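A small triage helper for this case, reusing the `delta_s` helper from the measurement sketch above; `embed` is again a placeholder for your embedding call, and the thresholds mirror the acceptance targets.

```python
def triage_wrong_slice(embed, question: str, retrieved_text: str, anchor_text: str) -> str:
    """Compare the two ΔS readings this checklist asks for and name the likely layer."""
    q, r, a = embed(question), embed(retrieved_text), embed(anchor_text)
    ds_qr = delta_s(q, r)   # question vs retrieved snippet
    ds_ra = delta_s(r, a)   # retrieved snippet vs the section that should have been cited
    if ds_qr >= 0.60 and ds_ra >= 0.60:
        return "both high: suspect metric or index mismatch, open Embedding ≠ Semantic"
    if ds_qr <= 0.45 and ds_ra >= 0.60:
        return "retrieval looks close but the anchor is off: re-chunk and re-anchor citations"
    return f"ΔS(question,retrieved)={ds_qr:.2f}, ΔS(retrieved,anchor)={ds_ra:.2f}: within tolerance"
```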
Unstable answers across paraphrases or seeds
Symptoms: small paraphrases flip the conclusion.
Why: uncontrolled variance and unstable memory joins.
Do this:
- Clamp variance after the bridge (BBAM) and verify the joins: Context Drift, Memory Coherence (a paraphrase probe sketch follows)
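A paraphrase probe sketch; `ask` and `extract_conclusion` are placeholders for your model call and answer normalizer. It only checks whether the conclusion stays put across three paraphrases, which is the observable side of λ staying convergent, not the BBAM clamp itself.

```python
from collections import Counter

def paraphrase_probe(ask, extract_conclusion, paraphrases) -> dict:
    """Ask the same question three ways and check whether the conclusion stays put.
    `ask` calls your model, `extract_conclusion` normalizes the answer to a comparable label."""
    conclusions = [extract_conclusion(ask(p)) for p in paraphrases]
    majority, hits = Counter(conclusions).most_common(1)[0]
    return {
        "convergent": hits == len(paraphrases),
        "conclusions": conclusions,
        "majority": majority,
    }

# Stub callables so the sketch runs; swap in real model calls and a real normalizer.
result = paraphrase_probe(
    ask=lambda p: "The later date applies.",
    extract_conclusion=lambda a: a.strip().lower().rstrip("."),
    paraphrases=[
        "Which date controls the contract?",
        "Tell me which of the two dates governs.",
        "Between the two dates, which one applies?",
    ],
)
print(result["convergent"])   # False in real runs means a paraphrase flip: clamp variance, check joins
```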
Hybrid retrieval underperforms
Symptoms: HyDE + BM25 hybrid retrieval underperforms.
Why: query parsing split or ranker saturation.
Do this:
- Fix the split and re-rank: Retrieval Playbook, pattern_query_parsing_split.md, Rerankers (a query-fusion sketch follows)
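A sketch of the two fixes together: normalize the query once so both retrievers see the same string, then merge their rankings with reciprocal rank fusion before the reranker. The stubbed result lists stand in for your BM25 and HyDE/dense retrievers.

```python
def normalize_query(q: str) -> str:
    """Normalize once, then hand the SAME string to both retrievers so the query never splits."""
    return " ".join(q.strip().lower().split())

def reciprocal_rank_fusion(result_lists, k: int = 60) -> list:
    """Merge ranked doc-id lists from the sparse and dense retrievers with standard RRF.
    `k` damps the influence of any single ranker so neither side saturates the order."""
    scores = {}
    for results in result_lists:
        for rank, doc_id in enumerate(results):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

query = normalize_query("  DeepSeek JSON drift fix ")
# bm25_ids  = bm25.search(query, top_k=20)      # your sparse retriever
# dense_ids = dense.search(query, top_k=20)     # your HyDE / dense retriever
bm25_ids, dense_ids = ["d3", "d1", "d7"], ["d1", "d9", "d3"]   # stub results so the sketch runs
fused = reciprocal_rank_fusion([bm25_ids, dense_ids])
print(fused)   # hand this order to the reranker from the Rerankers page
```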
Long chains degrade at the tail
Symptoms: truncation, repetition, or resets at the tail of long runs.
Why: entropy collapse in extended chains.
Do this:
- Shorten hops, insert BBCR checkpoints, and verify entropy targets: Entropy Collapse, Logic Collapse (a checkpointing sketch follows)
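A checkpointing sketch for long runs; `step` and `summarize` are placeholders for one hop of your chain and a compression call. Re-splicing the task anchor every few hops only approximates a BBCR checkpoint, it is not the WFGY kernel itself.

```python
def run_long_chain(step, summarize, task: str, n_hops: int, checkpoint_every: int = 4) -> str:
    """Drive a long chain in short hops and re-anchor at fixed checkpoints.
    `step(context)` advances one hop, `summarize(context)` compresses it; both are placeholders.
    Re-splicing the task anchor keeps the tail from drifting or resetting silently."""
    context = task
    for hop in range(1, n_hops + 1):
        context = step(context)
        if hop % checkpoint_every == 0:
            # checkpoint: compress the working context, then re-anchor it on the original task
            context = task + "\n" + summarize(context)
    return context
```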
Open provider tickets only after these pass:
- ΔS ≤ 0.45 across 3 paraphrases with fixed schema
- Coverage ≥ 0.70 on the target section
- Live traces show correct tool ordering and bounded variance
See the Debug Playbook. A minimal gate check is sketched below.
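A minimal gate check, assuming you log ΔS for the three paraphrases plus a coverage score and two booleans from your live traces.

```python
def ready_to_file_ticket(ds_values, coverage, tool_order_ok, variance_bounded) -> bool:
    """Gate a provider ticket on the acceptance targets above.
    ds_values: ΔS(question, retrieved) for the three paraphrases with the schema locked."""
    return (
        len(ds_values) >= 3
        and max(ds_values) <= 0.45
        and coverage >= 0.70
        and tool_order_ok
        and variance_bounded
    )

print(ready_to_file_ticket([0.31, 0.38, 0.42], coverage=0.76,
                           tool_order_ok=True, variance_bounded=True))   # True, so file the ticket
```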
If the very first call fails after a new deploy, check boot order and fences:
Bootstrap Ordering, Deployment Deadlock, Pre-Deploy Collapse
Copy-paste prompt for quick triage:
I uploaded TXT OS and the WFGY Problem Map.
My DeepSeek bug:
- symptom: [brief]
- traces: [ΔS(question,retrieved)=…, ΔS(retrieved,anchor)=…, λ states, tool logs]
Tell me:
1) which layer is failing and why,
2) which exact WFGY page to open from this repo,
3) the minimal steps to push ΔS ≤ 0.45 and keep λ convergent,
4) how to verify with a reproducible test.
Use BBMC/BBPF/BBCR/BBAM where relevant. Do not change infra.

| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |

| Layer | Page | What it’s for |
|---|---|---|
| ⭐ Proof | WFGY Recognition Map | External citations, integrations, and ecosystem proof |
| ⚙️ Engine | WFGY 1.0 | Original PDF tension engine and early logic sketch (legacy reference) |
| ⚙️ Engine | WFGY 2.0 | Production tension kernel for RAG and agent systems |
| ⚙️ Engine | WFGY 3.0 | TXT based Singularity tension engine (131 S class set) |
| 🗺️ Map | Problem Map 1.0 | Flagship 16 problem RAG failure taxonomy and fix map |
| 🗺️ Map | Problem Map 2.0 | Global Debug Card for RAG and agent pipeline diagnosis |
| 🗺️ Map | Problem Map 3.0 | Global AI troubleshooting atlas and failure pattern map |
| 🧰 App | TXT OS | .txt semantic OS with fast bootstrap |
| 🧰 App | Blah Blah Blah | Abstract and paradox Q&A built on TXT OS |
| 🧰 App | Blur Blur Blur | Text to image generation with semantic control |
| 🏡 Onboarding | Starter Village | Guided entry point for new users |
If this repository helped, starring it improves discovery so more builders can find the docs and tools.