🧭 Quick Return to Map
You are in a sub-page of LanguageLocale.
To reorient, go back here:
- LanguageLocale — localization, regional settings, and context adaptation
- WFGY Global Fix Map — main Emergency Room, 300+ structured fixes
- WFGY Problem Map 1.0 — 16 reproducible failure modes
Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.
Stabilize retrieval and reasoning when a single query or document spans multiple writing systems. Typical cases include CJK plus Latin, Arabic plus Latin, Indic plus Latin, or mixed fullwidth and halfwidth forms.
- A focused path to detect and repair cross-script confusion in retrieval and ranking.
- Field designs and checks that do not require infra changes.
- Exact jumps to Problem Map pages with measurable targets.
- A single user query contains two scripts and recall drops.
- Citations look correct by eye but come from the wrong section when scripts differ.
- BM25 or lexical search beats embeddings on mixed-script inputs.
- Coverage looks fine in one language but collapses when users code-switch.
- Fullwidth punctuation or presentation forms break token boundaries.
- Visual map and recovery: RAG Architecture & Recovery
- End to end retrieval knobs: Retrieval Playbook
- Traceability and snippet schema: Retrieval Traceability • Data Contracts
- Tokenizer mismatch checks: tokenizer_mismatch.md
- Locale drift and normalization: locale_drift.md
- Reranking recipes: rerankers.md
- ΔS(question, retrieved) ≤ 0.45
- Coverage of target section ≥ 0.70
- λ stays convergent across three paraphrases and two seeds
- E_resonance flat on long windows
- One query spans two scripts and nearest neighbors look irrelevant
  → Normalize and split by script, then fuse scores. See locale_drift.md, retrieval-playbook.md
- High similarity yet wrong meaning for mixed-script names or brands
  → Add a romanized and a native field. Lock the citation schema. See embedding-vs-semantic.md, data-contracts.md
- BM25 wins but flips order across runs
  → Use a deterministic two-stage plan: lexical per script, then cross-encoder rerank. See rerankers.md
- Fullwidth punctuation or Arabic presentation forms break tokens
  → Unicode fold to NFC or NFKC, halfwidth normalization, ZWJ handling. See tokenizer_mismatch.md
- HyDE plus BM25 splits the query and hurts hybrid performance
  → Lock the query plan and weights. See pattern_query_parsing_split.md
- Detect scripts: Count the Unicode scripts in the query and the top snippets. If more than one is present, set `mixed_script=true`.
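The detection step can be sketched with the standard library alone. The code-point ranges below are illustrative, not exhaustive; a production system would use ICU or the third-party `regex` module's `\p{Script=...}` properties.

```python
# Rough script buckets by code-point range (illustrative subset only).
SCRIPT_RANGES = [
    ("Latin",      [(0x0041, 0x024F)]),
    ("Arabic",     [(0x0600, 0x06FF), (0xFB50, 0xFEFF)]),  # incl. presentation forms
    ("Devanagari", [(0x0900, 0x097F)]),
    ("Hangul",     [(0xAC00, 0xD7AF), (0x1100, 0x11FF)]),
    ("Kana",       [(0x3040, 0x30FF)]),
    ("Han",        [(0x4E00, 0x9FFF), (0x3400, 0x4DBF)]),
]

def scripts_in(text: str) -> set:
    """Return the set of script buckets seen in `text`."""
    found = set()
    for ch in text:
        cp = ord(ch)
        for name, ranges in SCRIPT_RANGES:
            if any(lo <= cp <= hi for lo, hi in ranges):
                found.add(name)
                break
    return found

def is_mixed_script(text: str) -> bool:
    """True when the query or snippet spans more than one script."""
    return len(scripts_in(text)) > 1
```

Run the same check over the query and the top-k snippets; if either side is mixed, flip the `mixed_script` flag for the whole request.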
- Normalize safely: Apply NFC or NFKC, convert fullwidth to halfwidth, and strip presentation forms where safe. Keep a raw field.
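A minimal normalization pass using only the standard library. Whether NFC or NFKC is safe depends on your corpus, so treat the NFKC choice here as an assumption; always store the raw field alongside, since this folding is lossy.

```python
import unicodedata

def normalize_text(raw: str) -> str:
    """Width fold + compatibility fold + case fold.

    NFKC maps fullwidth forms to halfwidth (ＡＢＣ -> ABC) and
    decomposes presentation forms and ligatures (ﬁ -> fi).
    ZWJ (U+200D) is left intact: stripping it breaks Indic
    conjuncts and emoji sequences.
    """
    folded = unicodedata.normalize("NFKC", raw)
    # Case fold after width fold so fullwidth Latin lowercases too.
    return folded.casefold()
```
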
- Dual-field design: For each text unit store `text_raw` and `text_norm` (case fold and width fold applied). Optionally add `text_romanized` for CJK or Indic when users type Latin queries.
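A sketch of building the stored views for one text unit. The field names follow the dual-field design above; the romanization itself is assumed to come from an external transliterator, so it is passed in rather than computed here.

```python
import unicodedata
from typing import Optional

def make_views(raw: str, romanized: Optional[str] = None) -> dict:
    """Build the stored views for one text unit.

    `romanized` is supplied only when the language has a common
    transliteration (e.g. romaji, pinyin); otherwise the field
    is omitted entirely rather than left empty.
    """
    views = {
        "text_raw": raw,  # untouched, guards against over-aggressive folding
        "text_norm": unicodedata.normalize("NFKC", raw).casefold(),
    }
    if romanized is not None:
        views["text_romanized"] = romanized.casefold()
    return views
```
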
- Parallel retrieval: Run retrieval on `text_norm` and `text_romanized` when `mixed_script=true`. Merge with stable weights, then rerank with a cross-encoder.
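The merge step can be sketched as a weighted score fusion. This assumes each hit list is min-max normalized before fusion; the default weights are illustrative, and the cross-encoder rerank is a separate stage over the fused top-k.

```python
def fuse(norm_hits: dict, rom_hits: dict,
         w_norm: float = 0.6, w_rom: float = 0.4) -> list:
    """Merge per-field retrieval scores with stable weights.

    `norm_hits` and `rom_hits` map snippet id -> normalized score.
    Fixed weights keep the merge deterministic across runs.
    """
    fused = {}
    for sid, score in norm_hits.items():
        fused[sid] = fused.get(sid, 0.0) + w_norm * score
    for sid, score in rom_hits.items():
        fused[sid] = fused.get(sid, 0.0) + w_rom * score
    # Stable, score-descending order for the downstream reranker.
    return sorted(fused.items(), key=lambda kv: (-kv[1], kv[0]))
```
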
- Schema lock: Enforce cite-then-explain. Require `snippet_id`, `section_id`, `offsets`, and `tokens`. See retrieval-traceability.md
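A minimal contract gate for the schema lock. The field set matches the requirement above; how you reject a failing snippet (drop, log, or retry) is left to the pipeline.

```python
# Fields every citation snippet must carry to be traceable.
REQUIRED_FIELDS = {"snippet_id", "section_id", "offsets", "tokens"}

def check_citation(snippet: dict) -> bool:
    """Cite-then-explain gate: reject any snippet that cannot be
    traced back to a source span."""
    return REQUIRED_FIELDS.issubset(snippet.keys())
```
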
- Verify: Run three paraphrases. Require ΔS ≤ 0.45 and λ convergent on two seeds.
- Index three views per document section: `raw`, `norm`, `romanized`.
- Populate `romanized` only when the language has a common transliteration.
- For lexical stores, select analyzers that respect script boundaries. For Elasticsearch specifics see elasticsearch.md.
- For vector stores, embed `norm` and keep a shallow rerank over `raw` to guard against over-aggressive folding.
- If the query is Latin plus CJK, run two subqueries: Latin over `romanized`, CJK over `norm`. Fuse by learned weight or a fixed 0.6:0.4 split.
- If the query contains Arabic with diacritics, run a folded pass and a diacritic-aware pass. Keep offsets separate to avoid citation drift.
- For Thai or Khmer, where token boundaries are implicit, add a shallow BM25 over syllable or dictionary segments, then rerank the top 200 with a cross-encoder.
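The folded pass for Arabic in the checklist above can be sketched by dropping combining marks after decomposition; harakat carry the Unicode category Mn. This is one common folding strategy, not the only one, and the diacritic-aware pass should still run over the raw field with its own offsets.

```python
import unicodedata

def fold_diacritics(text: str) -> str:
    """Drop combining marks (category Mn) after NFD decomposition.

    For Arabic this removes harakat (fatha, kasra, damma, ...);
    for Latin it also strips accents, so apply per script pass.
    """
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed
                   if unicodedata.category(ch) != "Mn")
```
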
I uploaded TXT OS and the WFGY Problem Map.
My bug: script mixing in one query.
* symptom: citations jump to the wrong section when users mix scripts
* traces: ΔS(question,retrieved)=..., ΔS(retrieved,anchor)=..., λ states across 3 paraphrases
Tell me:
1. the failing layer and why,
2. the exact WFGY page to open from this repo,
3. the minimal steps to push ΔS ≤ 0.45 and keep λ convergent,
4. a reproducible test to verify the fix.
Use BBMC, BBCR, BBPF, BBAM when relevant.
| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |
| Layer | Page | What it’s for |
|---|---|---|
| ⭐ Proof | WFGY Recognition Map | External citations, integrations, and ecosystem proof |
| ⚙️ Engine | WFGY 1.0 | Original PDF tension engine and early logic sketch (legacy reference) |
| ⚙️ Engine | WFGY 2.0 | Production tension kernel for RAG and agent systems |
| ⚙️ Engine | WFGY 3.0 | TXT based Singularity tension engine (131 S class set) |
| 🗺️ Map | Problem Map 1.0 | Flagship 16 problem RAG failure taxonomy and fix map |
| 🗺️ Map | Problem Map 2.0 | Global Debug Card for RAG and agent pipeline diagnosis |
| 🗺️ Map | Problem Map 3.0 | Global AI troubleshooting atlas and failure pattern map |
| 🧰 App | TXT OS | .txt semantic OS with fast bootstrap |
| 🧰 App | Blah Blah Blah | Abstract and paradox Q&A built on TXT OS |
| 🧰 App | Blur Blur Blur | Text to image generation with semantic control |
| 🏡 Onboarding | Starter Village | Guided entry point for new users |
If this repository helped you, starring it improves discovery so more builders can find the docs and tools.