🏥 Quick Return to Emergency Room
You are at a specialist desk.
For full triage and doctors on duty, return here:
- WFGY Global Fix Map — main Emergency Room, 300+ structured fixes
- WFGY Problem Map 1.0 — 16 reproducible failure modes
Think of this page as a sub-room.
If you want full consultation and prescriptions, go back to the Emergency Room lobby.
This folder provides guardrails for evaluation and observability in RAG and agent pipelines.
It shows how to catch silent drift, regressions, and unstable metrics before they break your system.
- A starter kit to make evals predictable and repeatable.
- Guardrails for metrics, variance, and drift detection.
- Copy-paste probes and configs you can add to your pipeline.
- Acceptance targets you can actually measure and enforce.
Open this sub-room when you see symptoms like:
- Metrics look unstable between runs.
- Coverage seems high but answers still drift.
- ΔS changes across paraphrases or seeds.
- λ flips divergent after harmless edits.
- Benchmarks regress without any code change.
- Long-run evals show a slow decline.
Acceptance targets to enforce before you trust a pipeline:
- ΔS(question, retrieved) ≤ 0.45
- Coverage ≥ 0.70 to target section
- λ remains convergent across 3 paraphrases and 2 seeds
- Variance ratio ≤ 0.15 across seeds
- No downward drift beyond 3 eval windows
- E_resonance stays flat on long evals
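The first two targets can be checked mechanically. Below is a minimal sketch, assuming ΔS is approximated as 1 minus cosine similarity between question and retrieved-context embeddings; the function names and the embedding stand-in are illustrative, so substitute your real embedding model and ΔS implementation.

```python
import math

def delta_s(vec_a, vec_b):
    """Approximate semantic stress as 1 - cosine similarity.

    Assumption: this mirrors a ΔS-style metric computed over
    embeddings; plug in your actual ΔS implementation here.
    """
    dot = sum(a * b for a, b in zip(vec_a, vec_b))
    norm_a = math.sqrt(sum(a * a for a in vec_a))
    norm_b = math.sqrt(sum(b * b for b in vec_b))
    return 1.0 - dot / (norm_a * norm_b)

def passes_targets(ds, coverage):
    """Check the two scalar acceptance targets listed above."""
    return ds <= 0.45 and coverage >= 0.70
```

Run this over every eval question and fail the batch if any pair misses either target.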
| Symptom | Open this page |
|---|---|
| Benchmarks regress with no code change | regression_gate.md |
| Metrics fluctuate or alerts missing | alerting_and_probes.md |
| Coverage looks high but not real | coverage_tracking.md |
| ΔS thresholds unclear | deltaS_thresholds.md |
| λ flips or diverges | lambda_observe.md |
| Variance high between seeds | variance_and_drift.md |
| Need a full setup | eval_playbook.md |
| Logging + monitoring integration | metrics_and_logging.md |
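Several of these pages assume you can already observe λ across paraphrases. A minimal probe, under the simplifying (hypothetical) assumption that λ counts as convergent when all paraphrase answers agree after whitespace and case normalization; real λ observation is richer than exact-match agreement.

```python
def lambda_state(paraphrase_answers):
    """Classify λ from answers to paraphrases of one question.

    Simplifying assumption: convergent means all normalized
    answers are identical; anything else is divergent.
    """
    normalized = {" ".join(a.split()).lower() for a in paraphrase_answers}
    return "convergent" if len(normalized) <= 1 else "divergent"
```

Use it as a cheap first alarm: a "divergent" result after a harmless edit is exactly the symptom lambda_observe.md covers.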
```yaml
eval_contract:
  seeds: 3
  paraphrases: 3
  targets:
    deltaS: "<=0.45"
    coverage: ">=0.70"
    lambda: convergent
    variance: "<=0.15"
    drift: "<=0.02"
  alerts:
    - "deltaS >= 0.60"
    - "lambda divergent"
    - "drift slope > 0.02"
```

Q: What if my metrics vary a lot each run? A: Check variance_and_drift.md. Add more seeds and enforce variance ≤ 0.15.
Q: My eval passes locally but fails in CI — why? A: See metrics_and_logging.md. Local runs often miss logging detail. CI must enforce the same eval contract.
Q: What if coverage is high but the answer is still wrong? A: Open coverage_tracking.md. You might be measuring snippet recall, not semantic coverage. Switch to ΔS-based coverage.
Q: ΔS is always drifting, even on simple questions. A: Look at deltaS_thresholds.md. Adjust thresholds and clamp variance with λ probes.
Q: How do I stop regressions before release? A: Use regression_gate.md. It defines pass/fail rules so bad models never ship.
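The variance and drift rules in the eval contract can be enforced with a small standard-library script. This is a sketch: `variance_ratio`, `drift_slope`, and `gate` are illustrative names, and treating the drift threshold as a bound on slope magnitude is an assumption you should align with your own contract.

```python
from statistics import mean, pstdev

# Mirrors the eval_contract thresholds above.
CONTRACT = {"variance": 0.15, "drift": 0.02}

def variance_ratio(seed_scores):
    """Coefficient of variation across seeds: stdev / mean."""
    return pstdev(seed_scores) / mean(seed_scores)

def drift_slope(window_scores):
    """Least-squares slope of score versus eval-window index."""
    n = len(window_scores)
    x_bar = mean(range(n))
    y_bar = mean(window_scores)
    num = sum((x - x_bar) * (y - y_bar) for x, y in enumerate(window_scores))
    den = sum((x - x_bar) ** 2 for x in range(n))
    return num / den

def gate(seed_scores, window_scores):
    """Pass only if variance and drift stay inside the contract."""
    return (variance_ratio(seed_scores) <= CONTRACT["variance"]
            and abs(drift_slope(window_scores)) <= CONTRACT["drift"])
```

Wire `gate` into CI so a run that breaches either threshold blocks the release, which is the behavior regression_gate.md formalizes.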
| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |
| Layer | Page | What it’s for |
|---|---|---|
| ⭐ Proof | WFGY Recognition Map | External citations, integrations, and ecosystem proof |
| ⚙️ Engine | WFGY 1.0 | Original PDF tension engine and early logic sketch (legacy reference) |
| ⚙️ Engine | WFGY 2.0 | Production tension kernel for RAG and agent systems |
| ⚙️ Engine | WFGY 3.0 | TXT-based Singularity tension engine (131 S-class set) |
| 🗺️ Map | Problem Map 1.0 | Flagship 16-problem RAG failure taxonomy and fix map |
| 🗺️ Map | Problem Map 2.0 | Global Debug Card for RAG and agent pipeline diagnosis |
| 🗺️ Map | Problem Map 3.0 | Global AI troubleshooting atlas and failure pattern map |
| 🧰 App | TXT OS | .txt semantic OS with fast bootstrap |
| 🧰 App | Blah Blah Blah | Abstract and paradox Q&A built on TXT OS |
| 🧰 App | Blur Blur Blur | Text to image generation with semantic control |
| 🏡 Onboarding | Starter Village | Guided entry point for new users |
If this repository helped you, starring it improves discovery so more builders can find the docs and tools.