---
id: faq
kind: faq_entry
pillar: P01
nucleus: N04
title: CEXAI -- Frequently Asked Questions
quality:
density_score:
created: 2026-05-02
sin_lens: knowledge_gluttony
mission: MISSION_GAPFILL_R5R6R7
---

# CEXAI FAQ

## Getting started

### Q: What does the X in CEXAI mean?

Exchange. CEXAI stands for Cognitive Exchange AI. Intelligence compounds faster when shared. Every typed artifact -- a knowledge card, a builder, a vertical nucleus -- is a portable exchange unit. Export it from one CEXAI instance, import it into another, run `python _tools/cex_doctor.py`, and it validates automatically. Brand config, memory, and secrets stay private. The exchange is about cognition, not identity.

### Q: Do I need to know Python?

For basic use inside Claude Code (or any of the supported runtimes), no. You interact through slash commands like `/build`, `/mission`, `/guide`, and `/grid`. The system handles file generation, validation, and git commits for you.

For SDK use or extending CEXAI with custom tools, yes -- Python 3.10+ is required. CLI tools live in `_tools/` and the runtime SDK in `cex_sdk/`.

### Q: Why does CEXAI rephrase what I said?

You type "make me a landing page." CEXAI responds with "Dispatching landing-page-builder via 8F pipeline to P05_output." That is intent resolution in action.

Your 5 words carry about 5% of what the LLM actually needs. CEXAI fills the other 95% by mapping your words to its taxonomy: a specific kind (`landing_page`), a specific pillar (P05), a specific builder (12 components), your brand voice, multiple knowledge sources, and a quality gate. The rephrased response is CEXAI showing you exactly what it understood -- so you can correct it before it builds.

If CEXAI ever maps your intent wrong, just say so. "No, I meant a pricing page, not a landing page." It re-resolves instantly. No penalty, no wasted work.

### Q: Is this production-ready?

CEXAI is actively used in production by its creators. The core pipeline (8F, builders, validation, multi-runtime dispatch) is stable. The SDK (`cex_sdk`) is functional but evolving -- API surfaces may change between versions.

The system has 135+ tools, 54 system tests, pre-commit hooks, and a flywheel audit with 109 checks (`python _tools/cex_flywheel_audit.py`). It is a complex system with many moving parts -- expect to invest time understanding the architecture before relying on it for critical workloads.

## Architecture

### Q: What is 8F?

8F is the eight-function pipeline that every task runs through, in order:

| # | Function | Verb | What it does |
|---|----------|------|--------------|
| 1 | CONSTRAIN | Restrict | Resolve kind, load schema, set bounds |
| 2 | BECOME | Be | Set identity and sin lens |
| 3 | INJECT | Know | Load knowledge, examples, brand |
| 4 | REASON | Think | Plan sections, approach, references |
| 5 | CALL | Do | Use tools, retrieve, scan corpus |
| 6 | PRODUCE | Generate | Create the artifact |
| 7 | GOVERN | Evaluate | Run quality gates and evals |
| 8 | COLLABORATE | Coordinate | Save, compile, commit, signal |

A vague 5-word request enters at F1 and a validated, structured artifact exits at F8. Full spec: `.claude/rules/8f-reasoning.md`. Concept overview: `docs/concepts.md`.
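As a rough mental model, the pipeline behaves like a chain of functions that each receive and return the same task object. The sketch below is illustrative only: the stage names come from the table above, but the `Task` shape and the stub bodies are hypothetical, not the real SDK types.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """Hypothetical envelope passed through the pipeline (not the real SDK type)."""
    request: str
    state: dict = field(default_factory=dict)

# Stubbed stages; the real stages do the work described in the table above.
def constrain(t):   t.state["kind"] = "landing_page"; return t    # F1
def become(t):      t.state["lens"] = "creative_lust"; return t   # F2
def inject(t):      t.state["knowledge"] = []; return t           # F3
def reason(t):      t.state["plan"] = []; return t                # F4
def call(t):        t.state["sources"] = []; return t             # F5
def produce(t):     t.state["artifact"] = "..."; return t         # F6
def govern(t):      t.state["score"] = 8.0; return t              # F7
def collaborate(t): return t                                      # F8

PIPELINE = [constrain, become, inject, reason, call, produce, govern, collaborate]

def run(request: str) -> Task:
    task = Task(request)
    for stage in PIPELINE:   # strict F1 -> F8 order
        task = stage(task)
    return task
```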

### Q: What is a kind?

A kind is one of 301 atomic artifact types in the CEXAI taxonomy (`agent`, `knowledge_card`, `landing_page`, `prompt_template`, etc.). Every kind is fully typed: it has a schema, a builder with 12 ISO components (one per pillar), a sub-agent, a knowledge card, and a registered entry in `.cex/kinds_meta.json`.
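A quick way to see the registry in action, assuming `.cex/kinds_meta.json` is a JSON object keyed by kind name (the exact shape isn't documented in this FAQ):

```python
import json
from pathlib import Path

# Assumption: kinds_meta.json maps each kind name to its metadata record.
meta = json.loads(Path(".cex/kinds_meta.json").read_text(encoding="utf-8"))

def is_registered(kind: str) -> bool:
    """True if the kind has an entry in the registry."""
    return kind in meta

print(len(meta), "kinds registered;", is_registered("landing_page"))
```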

### Q: What is a builder?

A builder is a factory blueprint that lives in `archetypes/builders/{kind}-builder/`. Each builder contains 12 ISO files (one per pillar): `bld_knowledge`, `bld_model`, `bld_prompt`, `bld_tools`, `bld_output`, `bld_schema`, `bld_eval`, `bld_architecture`, `bld_config`, `bld_memory`, `bld_feedback`, `bld_orchestration`. Builders define HOW to create a specific kind of artifact.
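A minimal completeness check, similar in spirit to what `validate_builder.py` does (that tool appears later in this FAQ; the file-matching rule below is an assumption):

```python
from pathlib import Path

# The 12 ISO components listed above, one per pillar.
ISO_COMPONENTS = [
    "bld_knowledge", "bld_model", "bld_prompt", "bld_tools",
    "bld_output", "bld_schema", "bld_eval", "bld_architecture",
    "bld_config", "bld_memory", "bld_feedback", "bld_orchestration",
]

def missing_isos(kind: str, root: str = "archetypes/builders") -> list[str]:
    """Return ISO components with no matching file in the builder directory.
    Assumes each component is a file whose name starts with the component id."""
    names = [p.stem for p in (Path(root) / f"{kind}-builder").glob("*")]
    return [c for c in ISO_COMPONENTS if not any(n.startswith(c) for n in names)]

print(missing_isos("landing-page"))   # [] means the builder is complete
```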

### Q: What is a pillar?

A pillar is one of 12 organizational categories (P01-P12) that group related artifact kinds. For example, P01 (Knowledge) holds knowledge cards, RAG sources, and glossary entries; P05 (Output) holds landing pages, formatters, and parsers. Each pillar has its own `_schema.yaml` defining structural and density constraints, and lives in `N00_genesis/P{NN}_*/`.

### Q: What is a nucleus?

A nucleus is one of 8 domains (N00-N07) where artifacts live. N00 (Genesis) is the archetype nucleus -- it holds builder definitions, pillar schemas, and shared ISOs that all other nuclei inherit from. N01-N07 are the 7 specialized nuclei -- intelligence, marketing, engineering, knowledge, operations, commercial, orchestrator -- where domain instances live. Example: `N02_marketing/P03_prompt/` holds marketing-specific prompts.

Q: What is the "sin" thing about?

Each specialized nucleus has a "sin lens" -- a personality based on one of the seven deadly sins (Artificial Sins, in CEXAI's framing). This is a cultural heuristic that determines what the nucleus optimizes for under ambiguous input:

  • N01 Intelligence -- Analytical Envy (surpass every existing source)
  • N02 Marketing -- Creative Lust (irresistible prose)
  • N03 Engineering -- Inventive Pride (technical precision)
  • N04 Knowledge -- Knowledge Gluttony (insatiable hunger for sources)
  • N05 Operations -- Gating Wrath (uncompromising quality enforcement)
  • N06 Commercial -- Strategic Greed (maximize every revenue stream)
  • N07 Orchestrator -- Orchestrating Sloth (delegate, coordinate, never build)

The sin biases the LLM toward a specific optimization axis without requiring explicit instructions for every edge case.

### Q: What is dual output?

Every artifact exists as two files:

- `.md` -- human-readable source (you write this)
- `.yaml` or `.json` -- machine-readable compiled output (generated by `cex_compile.py`)

The `.md` is the source of truth. The compiled file is for LLM consumption, embedding, and retrieval. Never edit compiled files directly.
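Conceptually, the compile step pulls the YAML frontmatter out of the `.md` source and emits it in machine form. A minimal sketch of that core move, assuming the standard `---`-delimited frontmatter convention (the real `cex_compile.py` also honors the pillar's `machine_format` setting, described later):

```python
import yaml  # pip install pyyaml

def extract_frontmatter(md_text: str) -> dict:
    """Pull the YAML frontmatter out of a .md artifact.
    Assumes the file opens with a '---' delimited block."""
    if not md_text.startswith("---"):
        return {}
    _, block, _body = md_text.split("---", 2)
    return yaml.safe_load(block) or {}

source = "---\nid: kc_example\nkind: knowledge_card\n---\nBody text."
print(extract_frontmatter(source))  # {'id': 'kc_example', 'kind': 'knowledge_card'}
```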

### Q: What is intent resolution?

Intent resolution is how CEXAI turns your vague request into a precise action. When you type "make me a landing page," CEXAI resolves:

- Kind: `landing_page` (from 301 possible kinds)
- Pillar: P05 Output (from 12 possible categories)
- Nucleus: the right specialist agent (from 7 specialized nuclei)
- Builder: `landing-page-builder` with 12 ISO components

The pipeline:

  1. Seed word matching -- trigger phrases mapped to canonical actions
  2. Fuzzy matching -- Levenshtein distance catches typos and near-misses
  3. Synonym expansion -- e.g. "webpage" -> landing_page
  4. Confidence scoring -- above 80% executes; below that, CEXAI asks a clarifying question
  5. Verb resolution -- create / improve / analyze / etc.

Source of truth: the Prompt Compiler at `N00_genesis/P03_prompt/layers/p03_pc_cex_universal.md`. It covers all 301 kinds (English-first, with PT-BR seeded as the first community contribution) and is loaded at F1 CONSTRAIN by every nucleus.
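The sketch below compresses steps 1-4 into a few lines. Everything in it is illustrative: the seed, synonym, and verb tables are tiny stand-ins for the Prompt Compiler's real tables, and stdlib `difflib` stands in for the Levenshtein matcher.

```python
from difflib import SequenceMatcher

# Tiny stand-ins for the Prompt Compiler's seed/synonym/verb tables.
SEEDS = {"landing page": "landing_page", "knowledge card": "knowledge_card"}
SYNONYMS = {"webpage": "landing page", "site": "landing page"}
VERBS = {"make": "create", "cria": "create", "build": "create",
         "research": "analyze", "pesquisar": "analyze"}

def resolve(request: str) -> tuple[str, str | None, float]:
    """Return (verb, kind, confidence); ask a clarifier below 0.8."""
    words = request.lower().split()
    verb = next((VERBS[w] for w in words if w in VERBS), "create")
    text = " ".join(SYNONYMS.get(w, w) for w in words)   # synonym expansion
    best, score = None, 0.0
    for phrase, kind in SEEDS.items():                   # seed + fuzzy match
        s = 1.0 if phrase in text else SequenceMatcher(None, phrase, text).ratio()
        if s > score:
            best, score = kind, s
    return (verb, best, score) if score >= 0.8 else ("clarify", None, score)

print(resolve("make me a webpage"))      # ('create', 'landing_page', 1.0)
print(resolve("cria uma landing page"))  # same pipeline, Portuguese input
```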

### Q: How does CEXAI handle ambiguous requests?

Three layers protect against misrouted intent:

  1. Confidence threshold -- below 80% on intent resolution, CEXAI asks a clarifying question. Example: "Did you mean a knowledge card (documentation) or a context doc (onboarding guide)?"
  2. AND-split detection -- compound requests like "research competitors and write ad copy" are decomposed into separate tasks routed to different nuclei (N01 for research, N02 for copy); see the sketch after this list.
  3. Restatement protocol -- before executing complex tasks, CEXAI restates what it understood in precise terms so you can confirm or redirect. The cost of a wrong dispatch is seconds, not hours -- everything saves to git, nothing is lost.
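A toy version of AND-split detection, with a hypothetical routing table (the real decomposition is more careful than splitting on the word "and"):

```python
# Hypothetical verb-to-nucleus routing table, for illustration only.
ROUTES = {"research": "N01_intelligence", "write": "N02_marketing"}

def and_split(request: str) -> list[tuple[str, str]]:
    """Split a compound request on ' and ' and route each clause by its verb."""
    tasks = []
    for clause in request.split(" and "):
        verb = clause.split()[0].lower()
        tasks.append((ROUTES.get(verb, "N07_orchestrator"), clause.strip()))
    return tasks

print(and_split("research competitors and write ad copy"))
# [('N01_intelligence', 'research competitors'), ('N02_marketing', 'write ad copy')]
```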

## Multi-runtime

### Q: Can I use GPT / Gemini / Ollama instead of Claude?

Yes. CEXAI is provider-agnostic. It supports Claude, GPT (via the OpenAI API), Gemini, and Ollama (fully local). Routing is configured per nucleus in `.cex/config/nucleus_models.yaml` -- you can set primary models and define fallback chains for each nucleus, including budget profiles (free / mixed / premium).

The `chat()` function auto-detects the provider from the model name: `claude-*` goes to Anthropic, `gpt-*` goes to OpenAI, and anything else routes through Ollama or LiteLLM. Boot scripts ship per-runtime variants: `boot/n0X.ps1` (Claude), `boot/n0X_codex.ps1`, `boot/n0X_gemini.ps1`, `boot/n0X_ollama.ps1`, `boot/n0X_litellm.ps1`.
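The prefix rule is simple enough to state in a few lines. This is a paraphrase of the routing just described, not the actual `chat()` source; the function name and return labels are illustrative.

```python
def detect_provider(model: str) -> str:
    """Route by model-name prefix, per the rule described above."""
    if model.startswith("claude-"):
        return "anthropic"
    if model.startswith("gpt-"):
        return "openai"
    return "ollama_or_litellm"   # everything else goes local or via LiteLLM

for m in ("claude-sonnet-4", "gpt-4o", "llama3"):
    print(m, "->", detect_provider(m))
```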

### Q: Can I use Portuguese or English?

Both. Simultaneously. In the same sentence if you want.

CEXAI resolves intent, not syntax. "Cria uma landing page" and "create a landing page" trigger the exact same pipeline. "Pesquisar concorrentes" and "research competitors" dispatch the same intelligence nucleus. Even mixed: "quero um prompt template pro meu curso" works.

The verb table covers PT/EN equivalents and the seed words cover trigger phrases in each language. CEXAI doesn't translate your words -- it maps them to the same canonical action regardless of language.

One note: artifact content matches whatever language you use. For English output from a Portuguese prompt, just add "in English" and CEXAI adjusts.

## Cost + governance

### Q: Where does my data go?

Nowhere external. Everything CEXAI produces lives in your git repository: artifacts, decisions, signals, memory, compiled metadata. There is no external database. The LLM provider sees your prompts during generation (same as any LLM usage), but all persistent state is local files under git version control.

### Q: What is the quality gate?

Every artifact gets a quality score:

| Score | Tier | What happens |
|-------|------|--------------|
| 9.5+ | Golden | Reference quality |
| 8.0+ | Skilled | Published and indexed |
| 7.0+ | Learning | Experimental, not public |
| < 7.0 | Rejected | Must be redone |

Quality depends on: density (>= 0.85 for most pillars), completeness of frontmatter, specificity (every sentence enables action without external docs), and adherence to schema constraints. Enforced at F7 GOVERN. An optional cross-provider COUNCIL (F7c) blocks publication when judges disagree.
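The tier thresholds translate directly into code. A trivial sketch of the mapping in the table above (the lowercase tier labels are my own choice):

```python
def tier(score: float) -> str:
    """Map a quality score to its tier, per the table above."""
    if score >= 9.5:
        return "golden"
    if score >= 8.0:
        return "skilled"
    if score >= 7.0:
        return "learning"
    return "rejected"   # below 7.0: must be redone

assert tier(9.7) == "golden"
assert tier(7.4) == "learning"
assert tier(6.9) == "rejected"
```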

### Q: What if CEXAI misunderstands me?

Three safeguards keep a misunderstanding from causing damage:

  1. Confidence scoring -- high confidence (> 80%) executes; low confidence asks a clarifying question first.
  2. Restatement -- before executing complex tasks, CEXAI restates what it understood in precise terms. You can confirm or redirect.
  3. Non-destructive execution -- CEXAI saves artifacts as new files and commits to git. Nothing is overwritten without explicit instruction. If the output isn't what you wanted, the previous version is one git checkout away.

The most common "misunderstanding" is ambiguity, not error. Just say "no, I meant X" and it re-resolves. The cost of a wrong dispatch is seconds, not hours.

## Comparison vs competitors

### Q: How is CEXAI different from CrewAI / LangChain / AutoGen?

Those frameworks give you primitives for building multi-agent systems. CEXAI is a complete, opinionated system already built on top of similar primitives. It ships with 301 typed artifact kinds, 300+ builders (12 ISOs each), 12 domain pillars, 7 specialized nuclei, and a quality governance pipeline (8F + density floors + cross-provider council).

The tradeoff: CrewAI / LangChain / AutoGen are more flexible if you want to build from scratch. CEXAI is more productive if your work fits the AI brain model -- you want typed, governed, exchangeable knowledge assets instead of throwaway LLM outputs.

For a side-by-side feature matrix vs the major OSS frameworks, see `docs/comparison.md`.

## Contribution + extension

### Q: How do I add a new kind?

Adding a brand-new kind is a six-step, fully typed process:

  1. Define the kind in `.cex/kinds_meta.json`
  2. Create a knowledge card at `N00_genesis/P01_knowledge/library/kind/kc_yourkind.md`
  3. Create a builder directory at `archetypes/builders/yourkind-builder/` with 12 ISO files (one per pillar)
  4. Add a sub-agent definition at `.claude/agents/yourkind-builder.md`
  5. Add entries to the prompt compiler at `N00_genesis/P03_prompt/layers/p03_pc_cex_universal.md`
  6. Run `python _tools/cex_doctor.py` to verify everything is wired correctly

This is non-trivial because every kind is fully typed and governed. The builder ISOs are what teach the LLM to produce high-quality instances of your new kind.
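As a sanity check before running the doctor, you can verify the wiring files exist. The paths below simply restate steps 1-5 for a hypothetical kind named `yourkind`; adjust them to match yours.

```python
from pathlib import Path

def wiring_paths(kind: str) -> dict[str, Path]:
    """The files a new kind must touch, per the six steps above."""
    return {
        "registry":  Path(".cex/kinds_meta.json"),
        "knowledge": Path(f"N00_genesis/P01_knowledge/library/kind/kc_{kind}.md"),
        "builder":   Path(f"archetypes/builders/{kind}-builder"),
        "agent":     Path(f".claude/agents/{kind}-builder.md"),
        "compiler":  Path("N00_genesis/P03_prompt/layers/p03_pc_cex_universal.md"),
    }

missing = [name for name, p in wiring_paths("yourkind").items() if not p.exists()]
print(missing or "all wiring present -- now run python _tools/cex_doctor.py")
```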

### Q: How do I add a custom variant of an existing kind?

For lighter-weight extensions (a domain-specific specialization of an existing kind), use the `_custom/` mechanism within a pillar:

  1. Create `P{NN}_{pillar}/_custom/{your_variant}/`
  2. Add a `_schema.yaml` that inherits from the parent pillar schema
  3. Create templates and examples following the parent pillar's conventions

The 301 core kinds are fixed and cannot be modified without architecture review. Custom variants are the right tool when your need is "almost a knowledge_card but with one extra constraint."
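How the inheritance plays out, sketched with made-up fields. The merge rule shown (child keys override or extend the parent's) is an assumption about the semantics, and every field name here is illustrative:

```python
import yaml  # pip install pyyaml

# Illustrative parent pillar schema and custom-variant schema.
parent = yaml.safe_load("""
kind: knowledge_card
density_floor: 0.85
max_bytes: 16000
""")

child = yaml.safe_load("""
inherits: knowledge_card
extra_constraint: must_cite_primary_source
""")

# Assumed semantics: the variant keeps every parent constraint and adds its own.
effective = {**parent, **child}
print(effective["density_floor"], effective["extra_constraint"])
```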

### Q: Where are instances stored?

Instances (real artifacts, not templates) live in nuclei:

```
N{XX}_{domain}/P{NN}_{pillar}/{type}/my_artifact.md
```

Example: `N01_intelligence/P01_knowledge/knowledge_card/kc_competitor_analysis.md`.

Templates live in `P{NN}_{pillar}/templates/`. Examples live in `P{NN}_{pillar}/examples/`. The archetype lives in `N00_genesis/`.

## Troubleshooting

### Q: How do I validate an artifact?

Three levels of validation:

```bash
# Structure check (folders, naming, schemas, density floors)
python _tools/cex_doctor.py

# Builder ISO completeness for a single kind
python _tools/validate_builder.py archetypes/builders/{kind}-builder/

# Schema compliance for a specific file
python _tools/validate_schema.py
```

Manual checklist: frontmatter complete, density >= 0.8, naming follows the pattern, size within the pillar's `max_bytes`, no prose paragraphs over 3 lines, bullets within 80 characters.

### Q: How do I compile artifacts?

```bash
python _tools/cex_compile.py --all
```

This reads every `.md` artifact, extracts its YAML frontmatter, and generates a compiled counterpart (`.yaml` or `.json`) in the `compiled/` directory of the relevant pillar. The compiled format is defined by the `machine_format` field in the pillar's `_schema.yaml`.


CEXAI FAQ -- Updated 2026-05-02. Single canonical source: `docs/faq.md`.