🧭 Quick Return to Map
You are in a sub-page of LLM_Providers.
To reorient, go back here:
- LLM_Providers — model vendors and deployment options
- WFGY Global Fix Map — main Emergency Room, 300+ structured fixes
- WFGY Problem Map 1.0 — 16 reproducible failure modes
Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.
Groq gives very fast inference on supported open models, and that speed can hide bugs when observability is weak. Use this page to keep retrieval and reasoning stable while you push high tokens per second. Acceptance targets:
- semantic stress ΔS(question, retrieved) ≤ 0.45
- coverage of target section ≥ 0.70 for direct QA
- λ_observe stays convergent across 3 paraphrases
- streaming output does not change factual content between chunks
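These targets can be checked in a few lines. Below is a minimal sketch that reads ΔS as 1 minus the cosine similarity of embedding vectors; the WFGY engine defines ΔS formally, so treat this as a working approximation. The embedding call, chunk ids, and section ids are placeholders for your own pipeline, not a Groq API.

```python
import numpy as np

def delta_s(vec_a: np.ndarray, vec_b: np.ndarray) -> float:
    """ΔS approximated as 1 - cosine similarity. Lower is better; target ≤ 0.45."""
    denom = float(np.linalg.norm(vec_a) * np.linalg.norm(vec_b))
    return 1.0 - float(np.dot(vec_a, vec_b)) / denom

def coverage(retrieved_ids: list[str], target_section_ids: set[str]) -> float:
    """Fraction of the target section's chunks that were retrieved. Target ≥ 0.70."""
    retrieved = set(retrieved_ids)
    hits = sum(1 for cid in target_section_ids if cid in retrieved)
    return hits / max(len(target_section_ids), 1)
```

The later sketches on this page assume ΔS scores are already computed with a helper like this one.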
- Visual map and recovery: RAG Architecture & Recovery
- End-to-end retrieval knobs: Retrieval Playbook
- Why this snippet, trace schema: Retrieval Traceability
- Ordering control: Rerankers
- Embedding vs meaning: Embedding ≠ Semantic
- Hallucination and chunk boundaries: Hallucination
- Long chains and entropy: Context Drift, Entropy Collapse
- Snippet and citation schema: Data Contracts
- Logic dead ends, bridge and recover: Logic Collapse
- Symptom: partial chunks look plausible, but the final concatenated answer adds claims that do not map to the retrieved text.
- Probe: measure ΔS(question, retrieved) and ΔS(answer, retrieved) at both the chunk level and the final join.
- Fix: lock cite-then-explain, flush on section boundaries, and require per-chunk citations. See Retrieval Traceability and Data Contracts.
- If ΔS stays high: apply BBMC alignment and the BBAM variance clamp. See RAG Architecture & Recovery.
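To make the probe mechanical, score each streamed chunk against the retrieved text, then score the final join. A minimal sketch over precomputed ΔS values; the threshold is the page target, and the max-of-chunks rule is an assumption:

```python
def join_added_claims(chunk_scores: list[float], final_score: float,
                      threshold: float = 0.45) -> bool:
    """chunk_scores: ΔS(answer_chunk, retrieved) for each streamed chunk.
    final_score:  ΔS(full_answer, retrieved) after concatenation.
    True when the joined answer drifts beyond every individual chunk,
    which is the signature of claims added at the join."""
    return final_score > threshold or final_score > max(chunk_scores)
```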
- Symptom: the same prompt works on one Groq model but fails on another, and citations miss by a few lines.
- Probe: λ flips at prompt assembly, and ΔS(question, retrieved) rises after you swap models.
- Fix: re-pin prompt anchors to titles and section ids, and avoid brittle token-based fences. See Embedding ≠ Semantic and Retrieval Traceability.
- Escalate: if recall drops when hybrid retrievers are used, check the query parsing split. See Query Parsing Split.
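What "re-pin prompt anchors" can look like in practice: cite stable section ids instead of token offsets, so a model swap cannot shift the fences. A sketch; the snippet fields are illustrative, not a fixed WFGY schema.

```python
def assemble_prompt(system: str, task: str, snippets: list[dict]) -> str:
    """Anchor citations on section ids and titles, which survive a
    tokenizer change, rather than on token positions, which do not."""
    sources = "\n\n".join(
        f"[{s['section_id']}] {s['title']}\n{s['text']}" for s in snippets
    )
    return (
        f"{system}\n\n"
        f"Task: {task}\n"
        "Cite section ids first, then explain. Do not re-order sources.\n\n"
        f"Sources:\n{sources}"
    )
```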
- Symptom: recall is high but the top-k order is wrong; the answer quotes the third-best chunk.
- Probe: plot ΔS versus k. A flat, high curve points at an index metric or normalization mismatch.
- Fix: repair the index and add rerankers. See Rerankers and Embedding ≠ Semantic.
- Acceptance: after the fix, ΔS(question, retrieved) ≤ 0.45 with stable ordering across seeds.
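The flat-and-high reading of the ΔS-versus-k curve can be automated once you have scores per k. A sketch; the 0.60 floor and 0.05 spread are illustrative cutoffs, not WFGY constants.

```python
def flags_index_mismatch(curve: dict[int, float]) -> bool:
    """curve maps k -> ΔS(question, retrieved at that k).
    A curve that never dips and barely moves suggests the index metric
    or vector normalization is wrong, not the ranking."""
    values = list(curve.values())
    return min(values) > 0.60 and (max(values) - min(values)) < 0.05

# Example: high at every k and nearly flat -> index-level problem.
assert flags_index_mismatch({1: 0.72, 5: 0.73, 10: 0.71, 20: 0.72})
```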
- Symptom: tool payloads carry small schema errors when streaming is enabled.
- Probe: λ is divergent only at the tool stage, not at retrieval.
- Fix: enforce a schema lock, echo the tool schema back before generation, then validate before answering. See Logic Collapse and Retrieval Traceability.
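The same fix is echoed in the notes further down: buffer the stream, validate, then emit. The standard library is enough. A minimal sketch; the `stream` argument is a hypothetical iterator of text deltas, not a specific Groq SDK type.

```python
import json

def buffered_tool_payload(stream) -> dict:
    """Accumulate streamed tool-call text and validate it once complete.
    Partial JSON is never forwarded to the tool."""
    raw = "".join(stream)  # hypothetical: stream yields text fragments
    try:
        return json.loads(raw)
    except json.JSONDecodeError as err:
        raise ValueError(f"tool payload failed validation: {err}") from err
```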
- Symptom: random capitalization, style flattening, and repetition as the response grows.
- Probe: E_resonance rises while ΔS stays moderate, and λ becomes recursive.
- Fix: semantic chunking, BBMC with section anchors, and BBAM to clamp variance. See Entropy Collapse and Hallucination.
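E_resonance is a WFGY-internal signal. As a cheap stand-in while you wire that up, you can track n-gram repetition in the tail of a long response; this is a crude proxy for the symptom, not the BBAM clamp itself.

```python
from collections import Counter

def repetition_ratio(text: str, n: int = 3) -> float:
    """Share of n-grams that occur more than once. A ratio that climbs
    as the response grows matches the entropy-collapse symptom above."""
    words = text.split()
    grams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    if not grams:
        return 0.0
    counts = Counter(grams)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(grams)
```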
A quick checklist before you ship:

1. Retrieval sanity
   Run ΔS(question, retrieved) and coverage to the expected section. Targets are at the top of this page.
2. Prompt assembly
   Use the order system, task, constraints, citations, answer. Forbid re-ordering and require cite-then-explain. See Retrieval Traceability.
3. Stability modules
   If λ flips at reasoning, apply the BBCR bridge and the BBAM variance clamp. See Logic Collapse.
4. Ordering
   If recall is fine but the answer uses the wrong snippet, add a reranker. See Rerankers.
5. Verification
   Paraphrase the user question three ways. Keep λ convergent and ΔS ≤ 0.45 on each paraphrase. A scripted check follows this list.
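Step 5 reduces to one predicate once the three runs are scored. A minimal sketch; reading a small spread across paraphrases as "λ convergent" is an assumption here, and the 0.10 spread cutoff is illustrative.

```python
def paraphrases_converge(scores: list[float], threshold: float = 0.45,
                         max_spread: float = 0.10) -> bool:
    """scores: ΔS(question, retrieved) for each paraphrase of the question.
    Every run must land under the target, and the runs must agree."""
    return all(s <= threshold for s in scores) and \
        (max(scores) - min(scores)) <= max_spread
```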
- Model families differ in max context and stop-token behavior. Do not rely on implicit stops.
- Very fast streaming can hide retrieval jitter. Always record the chosen k list and scores.
- For tool use, stream to a buffer, validate the JSON, then emit once. Do not forward partial tool JSON.
- When swapping models, recheck tokenizer-dependent anchors and re-run the ΔS thresholds.
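Recording the chosen k list and scores costs one JSON line per query and makes fast streaming auditable. A sketch in the spirit of Data Contracts; the field set and file name are illustrative.

```python
import json
import time

def log_retrieval_trace(question: str, chunks: list[dict],
                        path: str = "retrieval_trace.jsonl") -> None:
    """Append one record per query: which chunks, at what scores,
    were actually handed to the model."""
    record = {
        "ts": time.time(),
        "question": question,
        "k": len(chunks),
        "chunks": [{"id": c["id"], "score": c["score"]} for c in chunks],
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```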
Open the structural page that matches the probe result.
- High ΔS with correct chunks and wrong logic: Retrieval Collapse
- Plausible but wrong answers, citations miss: Hallucination
- Long chain drift: Context Drift
- Hybrid retriever failure: Query Parsing Split
| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |
| Layer | Page | What it’s for |
|---|---|---|
| ⭐ Proof | WFGY Recognition Map | External citations, integrations, and ecosystem proof |
| ⚙️ Engine | WFGY 1.0 | Original PDF tension engine and early logic sketch (legacy reference) |
| ⚙️ Engine | WFGY 2.0 | Production tension kernel for RAG and agent systems |
| ⚙️ Engine | WFGY 3.0 | TXT-based Singularity tension engine (131 S-class set) |
| 🗺️ Map | Problem Map 1.0 | Flagship 16 problem RAG failure taxonomy and fix map |
| 🗺️ Map | Problem Map 2.0 | Global Debug Card for RAG and agent pipeline diagnosis |
| 🗺️ Map | Problem Map 3.0 | Global AI troubleshooting atlas and failure pattern map |
| 🧰 App | TXT OS | .txt semantic OS with fast bootstrap |
| 🧰 App | Blah Blah Blah | Abstract and paradox Q&A built on TXT OS |
| 🧰 App | Blur Blur Blur | Text to image generation with semantic control |
| 🏡 Onboarding | Starter Village | Guided entry point for new users |
If this repository helped, starring it improves discovery so more builders can find the docs and tools.