
Embeddings — Normalization and Scaling

🧭 Quick Return to Map

You are in a sub-page of Embeddings.
To reorient, go back here:

Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.

A repair page for scale and metric mismatches in embedding pipelines. Use this when retrieval quality looks good by similarity numbers but the meaning is wrong, or when different stores or models disagree after a migration.

Open these first

When to use this page

  • Similarity scores look high but answers cite the wrong section.
  • Cosine in docs, dot in code, or the reverse.
  • One environment normalizes vectors while another does not.
  • Upgrades introduce new dimensions or multilingual models and recall drops.
  • PQ or HNSW behaves differently after a rebuild.

Acceptance targets

  • ΔS(question, retrieved) ≤ 0.45
  • Coverage of target section ≥ 0.70
  • λ remains convergent across three paraphrases and two seeds
  • E_resonance stays flat on long windows

Map symptoms → structural fixes


60-second checklist

  1. Decide the semantic metric. Use cosine for unit vectors. Use dot only when magnitude carries meaning. Record the choice in your data contract.

  2. Enforce one normalization policy. Either store all vectors L2-normalized, or normalize on the fly on both the write and read paths. Never mix.

  3. Lock dimensions and model id. Record embed_model, dim, metric, normalize=true|false, and EMB_HASH in every payload. See data-contracts.md and the sketch after this list.

  4. Rebuild when the policy changes. If the previous index mixed policies, re-embed and rebuild. Validate with a small gold set and the acceptance targets above.
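A minimal sketch of steps 1–3 on the write path, assuming a single cosine policy. The model id, dimension, and the `prepare_for_write` helper are illustrative placeholders, not names taken from the contract doc.

```python
import hashlib
import numpy as np

# Contract this index expects; refuse anything that disagrees.
CONTRACT = {
    "embed_model": "text-embedding-3-small",  # assumption: substitute your model id
    "dim": 1536,
    "metric": "cosine",
    "normalize": True,
}
EMB_HASH = hashlib.sha256(repr(sorted(CONTRACT.items())).encode()).hexdigest()[:12]

def prepare_for_write(vec: np.ndarray, payload: dict) -> dict:
    """Validate the contract and apply the single normalization policy on the write path."""
    if vec.shape[-1] != CONTRACT["dim"]:
        raise ValueError(f"dim mismatch: got {vec.shape[-1]}, contract says {CONTRACT['dim']}")
    if CONTRACT["normalize"]:
        vec = vec / np.linalg.norm(vec)  # unit vectors make cosine and dot product agree
    return {**payload, **CONTRACT, "EMB_HASH": EMB_HASH, "vector": vec.tolist()}
```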


Minimal probes you can paste into a notebook

Probe A — norm distribution
1. Sample 10k vectors before indexing.
2. Compute the median ‖v‖₂ and the IQR.
3. If the median ≈ 1.0 with a tiny IQR, the corpus looks normalized. If not, the policy is mixed.
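A minimal sketch of this probe with NumPy; the random sample below only stands in for 10k real pre-index vectors.

```python
import numpy as np

def norm_profile(vectors: np.ndarray) -> dict:
    """Median and IQR of the L2 norms over a pre-index sample."""
    norms = np.linalg.norm(vectors, axis=1)
    q1, median, q3 = np.percentile(norms, [25, 50, 75])
    return {"median": float(median), "iqr": float(q3 - q1)}

sample = np.random.default_rng(0).normal(size=(10_000, 768))  # stand-in for real vectors
print(norm_profile(sample))  # median ≈ 1.0 with a tiny IQR → normalized; anything else → mixed policy
```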

Probe B — metric toggle
1. Run the same top-k with and without L2 normalization on queries.
2. If the winner set flips and ΔS improves only under one policy, lock that policy.
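A minimal sketch of the toggle using brute-force inner-product search; `corpus` and `queries` below are stand-ins for your real arrays, and the overlap score tells you whether the winner set flips.

```python
import numpy as np

def top_k(queries: np.ndarray, corpus: np.ndarray, k: int = 10, normalize: bool = False) -> np.ndarray:
    """Brute-force top-k by inner product; normalize=True turns it into cosine."""
    q, c = queries, corpus
    if normalize:
        q = q / np.linalg.norm(q, axis=1, keepdims=True)
        c = c / np.linalg.norm(c, axis=1, keepdims=True)
    return np.argsort(-(q @ c.T), axis=1)[:, :k]

def winner_overlap(a: np.ndarray, b: np.ndarray) -> float:
    """Mean Jaccard overlap between two winner sets, per query."""
    return float(np.mean([len(set(x) & set(y)) / len(set(x) | set(y)) for x, y in zip(a, b)]))

rng = np.random.default_rng(1)
corpus, queries = rng.normal(size=(2_000, 384)), rng.normal(size=(20, 384))  # stand-ins
print(winner_overlap(top_k(queries, corpus), top_k(queries, corpus, normalize=True)))
# Low overlap means the two policies disagree; lock whichever one improves ΔS.
```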

Probe C — k-sweep stability
1. For k in {5, 10, 20}, chart ΔS(question, retrieved).
2. Flat and high values suggest metric or analyzer mismatch.
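A minimal harness for the sweep; `retrieve` and `delta_s` are hypothetical callables standing in for your retriever and for however you compute ΔS(question, retrieved).

```python
def k_sweep(question: str, retrieve, delta_s, ks=(5, 10, 20)) -> dict:
    """ΔS per k; values that stay flat and high point at a metric or analyzer mismatch."""
    return {k: delta_s(question, retrieve(question, k)) for k in ks}
```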

Probe D — multilingual scale check
1. Split queries by language tag.
2. If one language has systematically higher norms or ΔS, normalize and consider per-language centering.
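A minimal sketch of the per-language check, assuming `lang_tags` carries one language code per vector.

```python
import numpy as np
from collections import defaultdict

def norms_by_language(vectors: np.ndarray, lang_tags: list) -> dict:
    """Median L2 norm per language tag; one group running systematically high argues for normalization plus per-language centering."""
    groups = defaultdict(list)
    for vec, lang in zip(vectors, lang_tags):
        groups[lang].append(np.linalg.norm(vec))
    return {lang: float(np.median(vals)) for lang, vals in groups.items()}
```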

Common failure patterns and the fix

  • Mixed policies across services. Write path stores raw vectors while the retriever normalizes only queries. Fix with one policy. Rebuild or pre-normalize on write.

  • Cosine in code, dot in index. Check the store configuration and the client. Align both ends and re-verify with retrieval-traceability.md.

  • Dimensionality drift after model swap. Store dim inside the contract and refuse ingestion when dim mismatches. See data-contracts.md.

  • Anisotropy or cluster collapse. Try mean-centering and unit-norm. If recall remains low, re-embed with a model that was trained for cosine and re-chunk per the playbook. See retrieval-playbook.md.

  • PQ or HNSW surprises. Confirm that training data for PQ used the same normalization policy as the live corpus. Store-specific notes in faiss.md; see the sketch below.
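A minimal FAISS sketch of keeping PQ training and the live corpus on the same cosine policy; the index string, dimensions, and random arrays are placeholders, not recommendations.

```python
import faiss
import numpy as np

dim = 768
train = np.random.default_rng(2).normal(size=(5_000, dim)).astype("float32")  # placeholder corpus

faiss.normalize_L2(train)  # same policy as the write path: cosine via unit vectors + inner product
index = faiss.index_factory(dim, "IVF64,PQ16", faiss.METRIC_INNER_PRODUCT)
index.train(train)         # PQ codebooks learned on normalized data
index.add(train)

query = np.random.default_rng(3).normal(size=(1, dim)).astype("float32")
faiss.normalize_L2(query)  # normalize the read path too, never just one side
scores, ids = index.search(query, 10)
```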


Verification protocol

  1. Build a ten-question gold set with exact anchors.
  2. Run three paraphrases and two seeds.
  3. Require coverage ≥ 0.70 and ΔS ≤ 0.45 before and after the change.
  4. Keep traces with metric, normalize_flag, dim, EMB_HASH, and index type. Eval references: eval_rag_precision_recall.md
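A minimal harness for this protocol; `run_pipeline` and `paraphrase` are hypothetical callables, and the trace values are examples only.

```python
TRACE = {"metric": "cosine", "normalize_flag": True, "dim": 768,
         "EMB_HASH": "example-hash", "index_type": "HNSW"}  # example values only

def verify(gold_set, paraphrase, run_pipeline, seeds=(0, 1)) -> dict:
    """Pass only if every paraphrase and seed combination meets the acceptance targets."""
    for question in gold_set:
        for i in range(3):                       # three paraphrases
            for seed in seeds:                   # two seeds
                coverage, delta_s = run_pipeline(paraphrase(question, i), seed)
                if coverage < 0.70 or delta_s > 0.45:
                    return {"pass": False, "question": question, "trace": TRACE}
    return {"pass": True, "trace": TRACE}
```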

Hand-off checklist for teams

  • Contract fields present in every write: embed_model, dim, metric, normalize, EMB_HASH, INDEX_HASH.
  • One policy in code and infra: normalization on both ends or on neither.
  • Store and client agree on metric: unit tests assert the setting at startup (see the sketch after this list).
  • Monitoring: log ΔS and λ by policy. Alert when ΔS ≥ 0.60 or λ flips. Ops references: live_monitoring_rag.md
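A minimal sketch of the startup assertion; `get_index_config` is a hypothetical wrapper around whatever configuration your store client reports.

```python
def assert_policy(get_index_config, contract: dict) -> None:
    """Fail fast at startup if the live index disagrees with the client-side contract."""
    cfg = get_index_config()  # hypothetical: returns the store's metric, dim, and normalize flag
    for key in ("metric", "dim", "normalize"):
        assert cfg.get(key) == contract[key], f"{key} drift: store={cfg.get(key)} contract={contract[key]}"
```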

🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |

Explore More

| Layer | Page | What it’s for |
|---|---|---|
| ⭐ Proof | WFGY Recognition Map | External citations, integrations, and ecosystem proof |
| ⚙️ Engine | WFGY 1.0 | Original PDF tension engine and early logic sketch (legacy reference) |
| ⚙️ Engine | WFGY 2.0 | Production tension kernel for RAG and agent systems |
| ⚙️ Engine | WFGY 3.0 | TXT based Singularity tension engine (131 S class set) |
| 🗺️ Map | Problem Map 1.0 | Flagship 16 problem RAG failure taxonomy and fix map |
| 🗺️ Map | Problem Map 2.0 | Global Debug Card for RAG and agent pipeline diagnosis |
| 🗺️ Map | Problem Map 3.0 | Global AI troubleshooting atlas and failure pattern map |
| 🧰 App | TXT OS | .txt semantic OS with fast bootstrap |
| 🧰 App | Blah Blah Blah | Abstract and paradox Q&A built on TXT OS |
| 🧰 App | Blur Blur Blur | Text to image generation with semantic control |
| 🏡 Onboarding | Starter Village | Guided entry point for new users |

If this repository helped, starring it improves discovery so more builders can find the docs and tools.