🧭 Quick Return to Map
You are in a sub-page of Eval.
To reorient, go back here:
- Eval — model evaluation and benchmarking
- WFGY Global Fix Map — main Emergency Room, 300+ structured fixes
- WFGY Problem Map 1.0 — 16 reproducible failure modes
Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.
A curated gold set is the foundation for evaluation stability. Without strict contracts on the gold data, all eval metrics become meaningless. This page defines how to build, audit, and maintain gold QA sets that align with WFGY acceptance targets.
- Visual map and recovery: RAG Architecture & Recovery
- Traceability schema: Retrieval Traceability
- Payload fences: Data Contracts
- Chunk coverage: Chunking Checklist
- Semantic drift control: Context Drift, Entropy Collapse
- Coverage ≥ 0.80 of target sections
- ΔS(question, gold anchor) ≤ 0.35
- λ state stable across 3 paraphrases and 2 seeds
- No overlap: each gold item maps to exactly one snippet and section
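A minimal gating sketch for the targets above, assuming hypothetical `delta_s()` and `lambda_states()` hooks that wrap your own ΔS probe and λ readout (neither is part of any published API):

```python
# Sketch of an acceptance gate for a gold set. delta_s() and lambda_states()
# are hypothetical hooks, passed in by the caller, not shipped functions.

def gold_set_passes(items, target_sections, delta_s, lambda_states):
    covered = {it["section_id"] for it in items}
    coverage = len(covered & set(target_sections)) / len(target_sections)
    if coverage < 0.80:
        return False, f"coverage {coverage:.2f} is below 0.80"

    for it in items:
        # Each gold item must map to exactly one section (no overlap).
        if not isinstance(it["section_id"], str):
            return False, f"{it['id']} maps to more than one section"
        # ΔS(question, gold anchor) must stay at or below 0.35.
        if delta_s(it["question"], it["answer_ref"]) > 0.35:
            return False, f"{it['id']} exceeds the ΔS ceiling of 0.35"
        # λ must hold a single state across 3 paraphrases and 2 seeds.
        if len(set(lambda_states(it))) != 1:
            return False, f"{it['id']} flips λ under paraphrase or seed variance"

    return True, "all acceptance targets met"
```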
- Identify domains relevant to the pipeline (finance, law, product docs).
- Ensure gold questions are drawn from actual user tasks.
- Each QA item must cite a section ID and expected_doc.
- Anchors must reference stable problem map sections, not ephemeral text.
Example:

```json
{
  "id": "Q_0007",
  "question": "What causes hallucination re-entry after correction?",
  "answer_ref": "PM:patterns/pattern_hallucination_reentry",
  "expected_doc": "ProblemMap/patterns/pattern_hallucination_reentry.md",
  "section_id": "hallucination-reentry"
}
```
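A small contract check for this record shape; the field names follow the example above, but the validator itself is an illustrative sketch, not a shipped tool:

```python
# Required fields mirror the example gold record above.
REQUIRED_FIELDS = ("id", "question", "answer_ref", "expected_doc", "section_id")

def validate_gold_item(item: dict) -> list[str]:
    """Return every contract violation found in a single gold QA item."""
    errors = [f"missing field: {f}" for f in REQUIRED_FIELDS if not item.get(f)]
    # Anchors must reference stable Problem Map sections, not ephemeral text.
    if not item.get("answer_ref", "").startswith("PM:"):
        errors.append("answer_ref must use a stable PM: anchor")
    if not item.get("expected_doc", "").endswith(".md"):
        errors.append("expected_doc must point at a tracked .md page")
    return errors
```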
- Minimum 3 paraphrases per question.
- Probe λ stability under phrasing variance.
```json
{
  "id": "Q_0007_P1",
  "question": "Why do hallucinations return after being corrected once?"
}
```
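One way to run the λ stability probe named above, assuming a hypothetical `answer_with_lambda(question, seed)` wrapper that returns the model answer together with its λ state label (the wrapper is not defined here):

```python
def lambda_stable(item: dict, paraphrases: list[dict],
                  answer_with_lambda, seeds=(0, 1)) -> bool:
    """True if λ holds one state across the base question, every paraphrase,
    and every seed. This is the probe gold_set_passes() relies on above."""
    questions = [item["question"]] + [p["question"] for p in paraphrases]
    states = {
        answer_with_lambda(q, seed)[1]  # second element is the λ state label
        for q in questions
        for seed in seeds
    }
    return len(states) == 1
```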
- Each gold item must include an exact citation offset.
- If offsets drift, the goldset is invalid until refreshed.
- No gold item should produce ΔS > 0.45 in baseline runs.
- Violations are logged and flagged for refresh.
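A freshness check along those lines, assuming each item additionally stores hypothetical `char_start` / `char_end` offsets and a `snippet_sha256` recorded when the gold was cut (none of these fields appear in the schema above):

```python
import hashlib
from pathlib import Path

def offsets_fresh(item: dict, repo_root: Path) -> bool:
    """True if the cited span still hashes to the value stored at gold time.
    Any stale item invalidates the goldset until its offsets are refreshed."""
    doc = (repo_root / item["expected_doc"]).read_text(encoding="utf-8")
    span = doc[item["char_start"]:item["char_end"]]
    return hashlib.sha256(span.encode("utf-8")).hexdigest() == item["snippet_sha256"]
```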
- Gold overlaps across sections → Fix: merge or re-scope questions; ensure a one-to-one mapping.
- Anchors point to unstable docs → Fix: only link to long-lived WFGY ProblemMap pages.
- Paraphrases flip λ → Fix: clamp with BBAM variance controls and revalidate.
- Coverage below 0.80 → Fix: expand questions until the goldset covers every critical node (see the sketch after this list).
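For the coverage fix in particular, the gap list is cheap to compute; a sketch reusing the fields above:

```python
def uncovered_sections(items: list[dict], target_sections: list[str]) -> list[str]:
    """Sections that still need gold questions before coverage reaches 0.80."""
    covered = {it["section_id"] for it in items}
    return sorted(set(target_sections) - covered)
```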
- Draft 20–30 candidate QA items.
- Add 3 paraphrases each.
- Link every item to an anchor section.
- Run through eval_harness.md.
- Drop items that fail the regression gate.
- Store the final goldset in datasets/gold/ (an end-to-end sketch follows this checklist).
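Stitched together, the checklist might look like the sketch below; `regression_gate` is a hypothetical stand-in for whatever pass/fail call your eval_harness.md run exposes:

```python
import json
from pathlib import Path

def build_goldset(candidates: list[dict], regression_gate,
                  out_dir: Path = Path("datasets/gold")) -> list[dict]:
    """Keep candidates that survive the regression gate, then persist them."""
    survivors = [item for item in candidates if regression_gate(item)]
    out_dir.mkdir(parents=True, exist_ok=True)
    with open(out_dir / "goldset.jsonl", "w", encoding="utf-8") as f:
        for item in survivors:
            f.write(json.dumps(item, ensure_ascii=False) + "\n")
    return survivors
```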
| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |
| Layer | Page | What it’s for |
|---|---|---|
| ⭐ Proof | WFGY Recognition Map | External citations, integrations, and ecosystem proof |
| ⚙️ Engine | WFGY 1.0 | Original PDF tension engine and early logic sketch (legacy reference) |
| ⚙️ Engine | WFGY 2.0 | Production tension kernel for RAG and agent systems |
| ⚙️ Engine | WFGY 3.0 | TXT-based Singularity tension engine (131 S-class set) |
| 🗺️ Map | Problem Map 1.0 | Flagship 16-problem RAG failure taxonomy and fix map |
| 🗺️ Map | Problem Map 2.0 | Global Debug Card for RAG and agent pipeline diagnosis |
| 🗺️ Map | Problem Map 3.0 | Global AI troubleshooting atlas and failure pattern map |
| 🧰 App | TXT OS | .txt semantic OS with fast bootstrap |
| 🧰 App | Blah Blah Blah | Abstract and paradox Q&A built on TXT OS |
| 🧰 App | Blur Blur Blur | Text to image generation with semantic control |
| 🏡 Onboarding | Starter Village | Guided entry point for new users |
If this repository helped, starring it improves discovery so more builders can find the docs and tools.