🧭 Quick Return to Map
You are in a sub-page of Safety_PromptIntegrity.
To reorient, go back here:
- Safety_PromptIntegrity — prompt injection defense and integrity checks
- WFGY Global Fix Map — main Emergency Room, 300+ structured fixes
- WFGY Problem Map 1.0 — 16 reproducible failure modes
Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.
Citation drift is one of the most common ways LLMs lose trust.
Without cite-then-explain discipline, answers may sound fluent but detach from sources.
This page locks the workflow: every reasoning step begins from citations, not narrative.
- Model produces fluent paragraphs with zero citations.
- Citations appear, but only after the fact, and they cannot be verified.
- References change between runs, even with same inputs.
- ΔS(question, retrieved) ≤ 0.45 but λ diverges when narrative precedes citation.
- Users complain: "Where did this answer come from?"
- Retrieval schema contract: data-contracts.md
- Trace alignment: retrieval-traceability.md
- Context collapse: logic-collapse.md
- Memory boundaries: memory_fences_and_state_keys.md
- Injection guard: prompt_injection.md
- Every answer starts with citations (no narrative before refs).
- Coverage ≥ 0.70 of the target section.
- ΔS(question, cited snippet) ≤ 0.45.
- λ convergent across 3 paraphrases.
- No hallucinated citations (must resolve to a retriever record).
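The ΔS target above can be probed mechanically. A minimal sketch, assuming ΔS is computed as 1 minus cosine similarity over embedding vectors you supply (an assumption; substitute your own ΔS metric if it differs):

```python
import math

def delta_s(vec_a, vec_b):
    """ΔS as 1 - cosine similarity (an assumption; swap in your own metric)."""
    dot = sum(a * b for a, b in zip(vec_a, vec_b))
    norm = math.sqrt(sum(a * a for a in vec_a)) * math.sqrt(sum(b * b for b in vec_b))
    return 1.0 - dot / norm

def passes_target(question_vec, snippet_vec, threshold=0.45):
    """Acceptance target: ΔS(question, cited snippet) <= 0.45."""
    return delta_s(question_vec, snippet_vec) <= threshold
```

Run this on the question and each cited snippet before accepting the answer; any snippet over the threshold fails the acceptance gate.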
1. Citation-first discipline
   - Always start with `snippet_id`, `section_id`, `source_url`.
2. Enforce schema
   - Required fields:
     `{ "snippet_id": "...", "section_id": "...", "source_url": "...", "offsets": [..], "tokens": N }`
3. Reason only after citation
   - Explain or analyze only after the citation block.
4. Reject broken runs
   - If a citation is missing, abort the answer and return an error tip.
5. Stability probe
   - Run 3 paraphrases. If λ diverges, lock the citation schema and rerun.
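The reject step can be sketched as a gate that runs before any narrative is accepted. A minimal sketch (field names follow the schema above; the return shape and error strings are assumptions, not a fixed WFGY API):

```python
REQUIRED_FIELDS = {"snippet_id", "section_id", "source_url", "offsets", "tokens"}

def validate_citations(citations, known_snippet_ids):
    """Reject broken runs: no citations, missing schema fields, or
    citations that do not resolve to a retriever record (fake refs)."""
    if not citations:
        return False, "no citations: abort answer, return error tip"
    for c in citations:
        missing = REQUIRED_FIELDS - set(c)
        if missing:
            return False, f"missing fields: {sorted(missing)}"
        if c["snippet_id"] not in known_snippet_ids:
            return False, f"unresolved citation: {c['snippet_id']}"
    return True, "ok"
```

If the gate fails, re-run retrieval instead of letting the model explain on top of broken citations.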
| Vector | Symptom | Fix |
|---|---|---|
| Narrative-first | Text precedes refs, unstable λ | Force cite-then-explain ordering |
| Fake refs | Hallucinated URLs | Schema lock + retrieval-traceability.md |
| Drifting refs | Different citations each run | Clamp λ with BBAM, validate ΔS ≤ 0.45 |
| Silent fallback | Model drops refs under safety refusal | Apply SCU (symbolic constraint unlock) |
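The drifting-refs row can be detected without inspecting model internals: ask the same question three ways and compare which snippets come back. A minimal sketch, assuming `answer_fn` is your own pipeline callable returning the set of cited snippet ids (a hypothetical interface), with identical citation sets used as a proxy for λ convergence:

```python
def stability_probe(answer_fn, paraphrases):
    """Run the paraphrases; treat the run as λ-divergent if the
    cited snippet sets differ (set identity as a convergence proxy)."""
    cited = [frozenset(answer_fn(q)) for q in paraphrases]
    convergent = len(set(cited)) == 1
    return convergent, cited
```

A divergent result means the citation schema should be locked and the run repeated, per the stability probe step above.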
You must output citations before narrative.
Schema: {snippet_id, section_id, source_url, offsets, tokens}
Rules:
1. Cite first. Explain only after citations are shown.
2. No answer if citations missing.
3. Log ΔS(question, cited snippet). Reject if ≥ 0.60.
4. λ must stay convergent across 3 paraphrases.

| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |

| Layer | Page | What it’s for |
|---|---|---|
| ⭐ Proof | WFGY Recognition Map | External citations, integrations, and ecosystem proof |
| ⚙️ Engine | WFGY 1.0 | Original PDF tension engine and early logic sketch (legacy reference) |
| ⚙️ Engine | WFGY 2.0 | Production tension kernel for RAG and agent systems |
| ⚙️ Engine | WFGY 3.0 | TXT based Singularity tension engine (131 S class set) |
| 🗺️ Map | Problem Map 1.0 | Flagship 16 problem RAG failure taxonomy and fix map |
| 🗺️ Map | Problem Map 2.0 | Global Debug Card for RAG and agent pipeline diagnosis |
| 🗺️ Map | Problem Map 3.0 | Global AI troubleshooting atlas and failure pattern map |
| 🧰 App | TXT OS | .txt semantic OS with fast bootstrap |
| 🧰 App | Blah Blah Blah | Abstract and paradox Q&A built on TXT OS |
| 🧰 App | Blur Blur Blur | Text to image generation with semantic control |
| 🏡 Onboarding | Starter Village | Guided entry point for new users |
If this repository helped, starring it improves discovery so more builders can find the docs and tools.