🧭 Quick Return to Map
You are in a sub-page of Automation Platforms.
To reorient, go back here:
- Automation Platforms — stabilize no-code workflows and integrations
- WFGY Global Fix Map — main Emergency Room, 300+ structured fixes
- WFGY Problem Map 1.0 — 16 reproducible failure modes
Think of this page as a desk within a ward.
If you need the full triage and all prescriptions, return to the Emergency Room lobby.
Use this when your workflow is built with Power Automate (cloud flows, AI Builder, custom connectors) and you see wrong citations, unstable answers, mixed sources, or silent failures that “look green” in run history.
Acceptance targets
- ΔS(question, retrieved) ≤ 0.45
- Coverage ≥ 0.70 to the intended section/record
- λ remains convergent across 3 paraphrases
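As a quick self-check before wiring anything into a flow, the sketch below shows one way to probe these targets offline. It is a minimal sketch under assumptions, not the WFGY engine itself: ΔS is approximated here as 1 minus the cosine similarity between question and retrieved-context embeddings, and λ is treated as convergent when three paraphrases of the same question all stay under the ΔS bound. The `embed` stub is hypothetical; substitute your own embedding call.

```python
# Minimal offline probe for the acceptance targets above.
# Assumptions (not the official WFGY implementation):
#   - deltaS is approximated as 1 - cosine(question_vec, context_vec)
#   - lambda is "convergent" if all paraphrases stay under the deltaS bound
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical stub: replace with your embedding model call."""
    raise NotImplementedError

def delta_s(a: str, b: str) -> float:
    va, vb = embed(a), embed(b)
    cos = float(np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb)))
    return 1.0 - cos  # 0 = identical meaning, higher = more semantic stress

def meets_targets(question: str, retrieved: str, paraphrases: list[str],
                  coverage: float) -> bool:
    if delta_s(question, retrieved) > 0.45:   # ΔS target
        return False
    if coverage < 0.70:                       # coverage target
        return False
    # λ convergence proxy: every paraphrase must stay under the same bound
    return all(delta_s(p, retrieved) <= 0.45 for p in paraphrases)
```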
| Symptom | Likely cause → exact fix |
|---|---|
| Output is plausible yet cites the wrong doc/snippet | Fix No.1: Hallucination & Chunk Drift → Hallucination · Retrieval Playbook |
| Vector similarity looks fine, meaning is off | Fix No.5: Embedding ≠ Semantic → Embedding ≠ Semantic |
| Some facts exist in SharePoint/Dataverse but never surface in top-k | Pattern: Vectorstore Fragmentation → Vectorstore Fragmentation |
| Can’t explain why a snippet was chosen; run history shows only final text | Fix No.8: Retrieval Traceability with snippet schema → Retrieval Traceability · Data Contracts |
| Long chains or approvals flatten tone and drift logically | Fix No.3/No.9: Context Drift and Entropy Collapse → Context Drift · Entropy Collapse |
| Flow passes in Test but fails after environment or connection swap | Infra: Pre-Deploy/Bootstrap/Deadlock → Pre-Deploy Collapse · Bootstrap Ordering · Deployment Deadlock |
| Confident tone with wrong answers in AI Builder actions | Fix No.4: Bluffing/Overconfidence → Bluffing |
Below is a compact flow outline. It enforces a cite-first prompt schema, observable retrieval, and a ΔS/λ post-check.
```txt
Trigger: When an HTTP request is received

Actions:
1) Initialize variable "k" = 10
2) Parse JSON "question" from request
3) HTTP → your retriever endpoint
   - Method: POST
   - Body: { "question": "@{variables('question')}", "k": "@{variables('k')}" }
4) Compose "context" = join(retrieved.snippets)
5) Compose "prompt" =
   SYSTEM: Cite lines before any explanation.
   TASK: Answer the user's question using the provided context.
   CONSTRAINTS:
   - Do not mix sources
   - Provide snippet_id for each citation
   CONTEXT:
   @{outputs('Compose_context')}
   QUESTION:
   @{variables('question')}
6) AI Builder / Custom Connector → LLM with "prompt"
7) HTTP → wfgyCheck (custom Azure Function)
   - Body: { "question": "@{variables('question')}",
             "context": "@{outputs('Compose_context')}",
             "answer":  "@{outputs('LLM_action')}" }
8) Condition:
   If deltaS ≥ 0.60 OR lambda != "→"
     → Terminate flow (Warn) "High semantic stress. See trace log."
   Else
     → Return 200 with { answer, deltaS, lambda, coverage, citations[] }
```

What this enforces
- Retrieval parameters are explicit and logged in flow run details.
- Prompt is schema-locked with cite-first.
- WFGY check runs after generation and can fail fast when ΔS is high or λ flips divergent.
- Trace table (snippet_id ↔ citation) is returned for audit.
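For step 7, a minimal sketch of the wfgyCheck endpoint as an Azure Functions HTTP trigger is shown below. This is an illustration under assumptions, not a reference implementation: `delta_s` and `coverage` are hypothetical stand-ins for your real scoring (for example, the embedding-based ΔS probe sketched under the acceptance targets), and λ is reduced to a simple convergent/divergent flag.

```python
# wfgyCheck: post-generation semantic gate for step 7 of the flow.
# Sketch under assumptions — delta_s() and coverage() are hypothetical
# stand-ins; lambda is reduced to a convergent ("→") / divergent ("←") flag.
import json
import azure.functions as func

app = func.FunctionApp()

def delta_s(a: str, b: str) -> float:
    """Hypothetical: e.g. 1 - cosine similarity of embeddings."""
    raise NotImplementedError

def coverage(answer: str, context: str) -> float:
    """Hypothetical: fraction of answer claims grounded in the context."""
    raise NotImplementedError

@app.route(route="wfgyCheck", methods=["POST"])
def wfgy_check(req: func.HttpRequest) -> func.HttpResponse:
    body = req.get_json()
    question, context, answer = body["question"], body["context"], body["answer"]

    ds = delta_s(question, context)
    cov = coverage(answer, context)
    lam = "→" if ds < 0.60 else "←"   # convergent unless stress is high

    return func.HttpResponse(
        json.dumps({"deltaS": ds, "lambda": lam, "coverage": cov}),
        mimetype="application/json",
    )
```

The step 8 condition then reads `deltaS` and `lambda` from this response and fails fast when the gate trips, instead of returning a run that merely "looks green."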
Reference specs: RAG Architecture & Recovery · Retrieval Playbook · Retrieval Traceability · Data Contracts
Common pitfalls

- Environment or connection drift: different Dataverse/SharePoint connections between ingestion and query. Pin connections per environment and re-verify secrets. See Pre-Deploy Collapse.
- Throttling or parallel branches change the ordering of records. Add a rerank stage only after per-source ΔS ≤ 0.50. See Rerankers.
- Parse JSON actions silently drop fields, breaking `snippet_id` propagation. Validate schemas and keep `snippet_id` mandatory (a contract sketch follows this list). See Retrieval Traceability.
- Embedding metric mismatch between ingestion code (Azure Function/Logic App) and the query side. Normalize vectors and pin cosine vs. inner product (see the normalization sketch after the escalation notes). See Embedding ≠ Semantic.
- Scheduled flows rebuild indices unintentionally. Make builds idempotent and gate them behind boot checks. See Bootstrap Ordering.
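To make the `snippet_id` contract enforceable outside the flow designer, one option is to validate the retriever payload against an explicit schema before it ever reaches the Parse JSON action. Below is a minimal sketch using the `jsonschema` package; the fields beyond `snippet_id` itself (`source`, `text`) are illustrative assumptions, not a fixed WFGY contract.

```python
# Guard the snippet contract before Parse JSON can silently drop fields.
# Field names beyond snippet_id are illustrative; align them with your
# own data contract.
from jsonschema import validate, ValidationError

SNIPPET_SCHEMA = {
    "type": "object",
    "properties": {
        "snippet_id": {"type": "string"},
        "source":     {"type": "string"},
        "text":       {"type": "string"},
    },
    "required": ["snippet_id", "source", "text"],  # snippet_id stays mandatory
    "additionalProperties": True,
}

def check_snippets(snippets: list[dict]) -> None:
    """Raise early instead of letting a missing snippet_id break citations."""
    for s in snippets:
        try:
            validate(instance=s, schema=SNIPPET_SCHEMA)
        except ValidationError as e:
            raise ValueError(f"snippet contract violation: {e.message}") from e
```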
Escalate when

- ΔS remains ≥ 0.60 after chunking and retrieval fixes: work through the playbook, then rebuild the index with explicit metric flags and unit normalization (sketched below). See Retrieval Playbook.
- Answers flip between Dev/UAT/Prod: verify version skew, connection references, and secrets. See Pre-Deploy Collapse.
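The index rebuild above is mostly about making the metric explicit. The sketch below shows the idea in plain numpy, as a generic illustration not tied to any particular vector store: L2-normalize on both the ingestion and query side so that inner product and cosine similarity agree, rather than letting the two halves of the pipeline assume different metrics.

```python
# Pin the metric: L2-normalize on BOTH the ingestion and query side,
# so inner product == cosine similarity and the two sides cannot drift.
import numpy as np

def l2_normalize(vectors: np.ndarray) -> np.ndarray:
    norms = np.linalg.norm(vectors, axis=-1, keepdims=True)
    return vectors / np.clip(norms, 1e-12, None)  # avoid divide-by-zero

def top_k(query_vec: np.ndarray, index_vecs: np.ndarray, k: int = 10):
    """Cosine top-k via inner product over pre-normalized index vectors."""
    q = l2_normalize(query_vec)
    scores = index_vecs @ q          # index_vecs must already be normalized
    idx = np.argsort(-scores)[:k]
    return idx, scores[idx]
```

If ingestion normalizes but the query side does not (or vice versa), ranking changes silently, which is exactly the "similarity looks fine, meaning is off" symptom in the table above.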
| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |
| Layer | Page | What it’s for |
|---|---|---|
| ⭐ Proof | WFGY Recognition Map | External citations, integrations, and ecosystem proof |
| ⚙️ Engine | WFGY 1.0 | Original PDF tension engine and early logic sketch (legacy reference) |
| ⚙️ Engine | WFGY 2.0 | Production tension kernel for RAG and agent systems |
| ⚙️ Engine | WFGY 3.0 | TXT-based Singularity tension engine (131 S-class set) |
| 🗺️ Map | Problem Map 1.0 | Flagship 16-problem RAG failure taxonomy and fix map |
| 🗺️ Map | Problem Map 2.0 | Global Debug Card for RAG and agent pipeline diagnosis |
| 🗺️ Map | Problem Map 3.0 | Global AI troubleshooting atlas and failure pattern map |
| 🧰 App | TXT OS | .txt semantic OS with fast bootstrap |
| 🧰 App | Blah Blah Blah | Abstract and paradox Q&A built on TXT OS |
| 🧰 App | Blur Blur Blur | Text to image generation with semantic control |
| 🏡 Onboarding | Starter Village | Guided entry point for new users |
If this repository helped, starring it improves discovery so more builders can find the docs and tools.