Citation-First Prompting — Guardrails and Fix Patterns

🧭 Quick Return to Map

You are in a sub-page of Safety_PromptIntegrity.
To reorient, go back here:

Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.

Citation drift is one of the most common ways LLMs lose trust.
Without cite-then-explain discipline, answers may sound fluent but detach from sources.
This page locks the workflow: every reasoning step begins from citations, not narrative.


When to open this page

  • Model produces fluent paragraphs with zero citations.
  • Citations appear, but after-the-fact and unverifiable.
  • References change between runs, even with same inputs.
  • ΔS(question, retrieved) ≤ 0.45 but λ diverges when narrative precedes citation.
  • Users complain: "Where did this answer come from?"

Open these first


Core acceptance

  • Every answer starts with citations (no narrative before refs).
  • Coverage ≥ 0.70 of the target section.
  • ΔS(question, cited snippet) ≤ 0.45.
  • λ convergent across 3 paraphrases.
  • No hallucinated citations (must resolve to a retriever record).
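The acceptance gates above can be sketched as a small validator. This is a minimal sketch, not WFGY's implementation: `delta_s` here is a toy Jaccard-distance placeholder for the real ΔS metric, and the `accept` helper and its field names are illustrative assumptions; only the thresholds (coverage ≥ 0.70, ΔS ≤ 0.45) come from this page.

```python
# Toy stand-in for the ΔS semantic-distance metric (Jaccard over tokens).
# Replace with your real embedding-based metric in production.
def delta_s(a: str, b: str) -> float:
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return 1.0 - len(ta & tb) / max(len(ta | tb), 1)

def accept(answer: dict, question: str, retriever_ids: set) -> list:
    """Return the list of violated acceptance rules (empty list = pass)."""
    errors = []
    cites = answer.get("citations", [])
    if not cites:
        errors.append("no citations")          # answer must start from refs
    for c in cites:
        if c["snippet_id"] not in retriever_ids:
            errors.append(f"hallucinated citation: {c['snippet_id']}")
        if delta_s(question, c["text"]) > 0.45:
            errors.append(f"ΔS > 0.45 for {c['snippet_id']}")
    if answer.get("coverage", 0.0) < 0.70:     # coverage of target section
        errors.append("coverage < 0.70")
    return errors
```

The λ-convergence check is run separately across paraphrases, since it needs multiple model calls rather than a single answer.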

Fix in 60 seconds

  1. Citation-first discipline

    • Always start with snippet_id, section_id, source_url.
  2. Enforce schema

    • Required fields:
      { "snippet_id": "...", "section_id": "...", "source_url": "...", "offsets": [..], "tokens": N }
  3. Reason only after citation

    • Explain or analyze after citation block.
  4. Reject broken runs

    • If citation missing → abort answer, return error tip.
  5. Stability probe

    • Run 3 paraphrases. If λ diverges, lock citation schema, rerun.
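Steps 2 and 4 above (enforce the schema, reject broken runs) can be sketched as a gate placed between the model and the user. The function names are hypothetical; the required field names follow the schema shown in step 2.

```python
# Required citation fields, per the schema in step 2.
REQUIRED = ("snippet_id", "section_id", "source_url", "offsets", "tokens")

def enforce_schema(citation: dict) -> list:
    """Return the required fields this citation is missing."""
    return [f for f in REQUIRED if f not in citation]

def gate(answer: dict) -> dict:
    """Abort (raise) instead of emitting an answer with broken citations."""
    cites = answer.get("citations")
    if not cites:
        # Step 4: citation missing -> abort the answer, surface an error tip.
        raise ValueError("citation missing: aborting answer, fix retrieval first")
    for c in cites:
        missing = enforce_schema(c)
        if missing:
            raise ValueError(f"citation missing fields: {missing}")
    return answer  # safe: reasoning may now follow the citation block
```

Raising instead of degrading gracefully is deliberate here: a hard abort makes citation loss visible, which is the point of the discipline.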

Typical failure vectors → fix

| Vector | Symptom | Fix |
|---|---|---|
| Narrative-first | Text precedes refs, unstable λ | Force cite-then-explain ordering |
| Fake refs | Hallucinated URLs | Schema lock + retrieval-traceability.md |
| Drifting refs | Different citations each run | Clamp λ with BBAM, validate ΔS ≤ 0.45 |
| Silent fallback | Model drops refs under safety refusal | Apply SCU (symbolic constraint unlock) |

Probe prompt

You must output citations before narrative.
Schema: {snippet_id, section_id, source_url, offsets, tokens}

Rules:
1. Cite first. Explain only after citations are shown.
2. No answer if citations missing.
3. Log ΔS(question, cited snippet). Reject if ≥ 0.60.
4. λ must stay convergent across 3 paraphrases.
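Rule 4 of the probe can be approximated by rerunning the pipeline on three paraphrases and flagging divergence when the cited snippet sets differ. This is a sketch under a simplifying assumption: `ask_model` is a stand-in for your own pipeline call, and identical snippet-ID sets are used as a proxy for λ convergence.

```python
def is_convergent(ask_model, paraphrases: list) -> bool:
    """ask_model(question) -> answer dict with a 'citations' list.

    Convergent iff every paraphrase yields the same set of cited snippet_ids.
    """
    cite_sets = [
        frozenset(c["snippet_id"] for c in ask_model(q)["citations"])
        for q in paraphrases
    ]
    return len(set(cite_sets)) == 1  # one unique citation set across all runs
```

If this returns `False`, the page's prescription is to lock the citation schema and rerun before trusting the answer.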

🔗 Quick-Start Downloads (60 sec)

| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |

Explore More

| Layer | Page | What it’s for |
|---|---|---|
| ⭐ Proof | WFGY Recognition Map | External citations, integrations, and ecosystem proof |
| ⚙️ Engine | WFGY 1.0 | Original PDF tension engine and early logic sketch (legacy reference) |
| ⚙️ Engine | WFGY 2.0 | Production tension kernel for RAG and agent systems |
| ⚙️ Engine | WFGY 3.0 | TXT based Singularity tension engine (131 S class set) |
| 🗺️ Map | Problem Map 1.0 | Flagship 16 problem RAG failure taxonomy and fix map |
| 🗺️ Map | Problem Map 2.0 | Global Debug Card for RAG and agent pipeline diagnosis |
| 🗺️ Map | Problem Map 3.0 | Global AI troubleshooting atlas and failure pattern map |
| 🧰 App | TXT OS | .txt semantic OS with fast bootstrap |
| 🧰 App | Blah Blah Blah | Abstract and paradox Q&A built on TXT OS |
| 🧰 App | Blur Blur Blur | Text to image generation with semantic control |
| 🏡 Onboarding | Starter Village | Guided entry point for new users |

If this repository helped you, starring it improves discovery so more builders can find the docs and tools.