# CEXAI — Cognitive Exchange AI

Open-source AI brain. Intelligence compounds when exchanged.

Seven Artificial Sins. Twelve pillars. 300+ typed kinds. Your brain.



## Glossary

New to CEXAI? Here is what the terms mean.

| CEXAI term | Industry equivalent | One-liner |
|---|---|---|
| Kind | Artifact type / schema | A typed template for a unit of knowledge (e.g., `knowledge_card`, `agent`, `workflow`). 300+ kinds exist today. |
| Pillar | Domain axis / taxonomy layer | One of 12 capability dimensions (Knowledge, Model, Prompt, Tools, Output, Schema, Evals, Architecture, Config, Memory, Feedback, Orchestration). |
| Nucleus | AI department / agent team | An autonomous LLM-powered business unit (N01–N07). Each has its own memory, tools, sub-agents, and behavioral bias. |
| 8-Function Pipeline (8F) | Agent reasoning loop | Eight sequential functions every task passes through: Constrain, Become, Inject, Reason, Call, Produce, Govern, Collaborate. |
| Builder | Factory / generator | A 12-file specification (one per pillar) that teaches an LLM how to produce a specific kind. |
| ISO | Builder spec file | One of the 12 files inside a builder, each covering one pillar (knowledge, model, prompt, tools, etc.). |
| Sin lens | Personality layer / behavioral bias | Each nucleus runs on one of the seven deadly sins, which determines what it optimizes for under ambiguity. |
| GDP | Decision protocol | Guided Decision Protocol — the user decides what (tone, audience, style); the LLM decides how (files, pipeline, structure). |

## Why CEX exists

Most "AI agents" are a system prompt plus a few tools. Useful, but shallow — they forget, drift, can't compose, and leak your knowledge into someone else's model.

CEX treats enterprise AI as typed infrastructure. Every piece of knowledge is a kind. Every kind has a builder. Every builder follows the 8-Function Pipeline. Seven nuclei — each one an AI department with its own toolbox, memory, crew, and cultural DNA — collaborate through a governance layer that compounds over time.

The result is not a chatbot. It is an AI brain: modular enough to grow a new department in minutes, sovereign enough to run entirely on your infra, and cumulative enough that every artifact makes the next one smarter.

### Three properties

- **Composable** — 8 functions × 12 pillars × 300+ artifact kinds = the factory floor. Spawn a new nucleus, a new kind, or a new archetype in minutes.
- **Sovereign** — runs on Claude, GPT, Gemini, or Ollama. Your knowledge lives in your repo, under your git history. No vendor owns your brain.
- **Self-assimilating** — every conversation, decision, and artifact compiles into typed, governed, searchable assets. Your institutional memory compounds like capital.

Build your Jarvis. Own your brain. Exchange cognition.

## What CEXAI is NOT

- **Not a chatbot.** There is no chat UI. CEXAI runs inside your existing LLM tooling (Claude Code, Cursor, Codex CLI, Ollama).
- **Not an API wrapper.** It does not abstract away provider APIs. It adds a typed knowledge layer on top of any provider.
- **Not a prompt library.** Prompts are one of 300+ artifact kinds. The system is the pipeline that produces, governs, and compounds them.

CEXAI is a typed knowledge system where every artifact is classified, scored, and connected. Intelligence compounds because every piece of work makes the next one better — across nuclei, across sessions, across runtimes.


## The maturity gap

A basic LLM agent is a prompt plus a few tools. A CEX nucleus is a business department — a superintendent (the LLM), a team of specialized sub-agents, a toolbox of MCPs and APIs, a knowledge library, a playbook of workflows, quality controls, and a cultural DNA (its sin lens).

The 12 pillars are the maturity axes. A basic agent covers 1–2 of them. A CEX nucleus covers all 12.

| Axis | Basic agent | CEX nucleus |
|---|---|---|
| Knowledge (P01, P10) | Context stuffing | Typed RAG + entity memory + chunk strategies + prompt cache |
| Model (P02) | Single provider | Fallback chain across Claude / GPT / Gemini / Ollama |
| Prompt (P03) | One system prompt | Templates + chains + compiler + version control |
| Tools (P04) | Flat list | MCP servers + API clients + browser scrapers + search pipelines |
| Output (P05) | Free text | 300+ typed artifact kinds + formatters + parsers |
| Schema (P06) | None | Input / output schemas + validators + interface contracts |
| Evaluation (P07) | Output as-is | Quality gates + scoring rubrics + LLM judges + benchmarks |
| Architecture (P08) | None | Agent cards + component maps + decision records |
| Config (P09) | Env vars | Typed configs + rate limits + feature flags + secrets |
| Feedback (P11) | None | Guardrails + bug loops + learning records + regression checks |
| Orchestration (P12) | None | Workflows + dispatch rules + crews + schedules |

Example — ask N01 Intelligence to research a competitor. It does not just fire a single LLM call; it activates a sin-driven agentic business unit:

- Analytical Envy (sin lens) drives it to surpass every public source.
- MCP servers + browser tools + API clients (P04) scrape, fetch, and cross-reference.
- A crew of specialized sub-agents (researcher, analyst, fact-checker) divides the work in parallel.
- The 8-Function Pipeline (CONSTRAIN → BECOME → INJECT → REASON → CALL → PRODUCE → GOVERN → COLLABORATE) drives every artifact to a 9.0+ quality floor.
- Entity memory + knowledge index (P10) capture what it learned so the next run is smarter.
- Peer-reviewed quality gates (P07, P11) block anything subpar from merging.

That is not a chatbot. That is an intelligence department that compounds like capital.


## The 8-function pipeline (8F)

Every LLM interaction — research, writing, building, evaluating, deploying — decomposes into eight orthogonal functions. This is how CEX thinks, not just how it builds.

| # | Function | What it does |
|---|---|---|
| F1 | CONSTRAIN | Resolve kind, load schema, set limits and naming rules |
| F2 | BECOME | Load builder identity (12 ISO files, 1:1 with pillars) |
| F3 | INJECT | Inject context — knowledge cards, examples, memory, brand, similar artifacts |
| F4 | REASON | Plan approach, resolve ambiguity via GDP (Guided Decision Protocol) |
| F5 | CALL | Discover relevant tools, cross-reference existing work |
| F6 | PRODUCE | Generate the artifact with full context |
| F7 | GOVERN | Validate against hard gates (structure, schema, rubric, semantics) |
| F8 | COLLABORATE | Save, compile, commit, signal downstream nuclei |

```bash
# Run the full pipeline
python _tools/cex_8f_runner.py "your intent" --kind <kind> --execute

# Dry run — shows what would happen without LLM calls
python _tools/cex_8f_runner.py "your intent" --kind <kind> --dry-run --verbose
```
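Conceptually, the eight functions are a chain of stages folding over a shared context. The sketch below is illustrative only — the real pipeline lives in `_tools/cex_8f_runner.py`, and every function body here is a stand-in, not the actual implementation:

```python
# Illustrative 8F sketch — stage bodies are stand-ins, not the real runner.

def constrain(ctx):    # F1: resolve kind, load schema, set limits
    ctx.setdefault("kind", "knowledge_card")
    return ctx

def become(ctx):       # F2: load builder identity (12 ISO files)
    ctx["builder"] = f"{ctx['kind']}-builder"
    return ctx

def inject(ctx):       # F3: inject knowledge cards, memory, brand
    ctx["context"] = ["knowledge_cards", "entity_memory", "brand"]
    return ctx

def reason(ctx):       # F4: plan approach, resolve ambiguity via GDP
    ctx["plan"] = f"produce one {ctx['kind']}"
    return ctx

def call(ctx):         # F5: discover tools, cross-reference existing work
    ctx["tools"] = ["search", "index_lookup"]
    return ctx

def produce(ctx):      # F6: generate the artifact with full context
    ctx["artifact"] = f"# {ctx['intent']}\n(kind: {ctx['kind']})"
    return ctx

def govern(ctx):       # F7: validate against hard gates
    ctx["passed"] = bool(ctx.get("artifact"))
    return ctx

def collaborate(ctx):  # F8: save, commit, signal downstream nuclei
    ctx["signals"] = ["downstream_nuclei_notified"]
    return ctx

PIPELINE = [constrain, become, inject, reason, call, produce, govern, collaborate]

def run_8f(intent, kind=None):
    ctx = {"intent": intent}
    if kind:
        ctx["kind"] = kind
    for stage in PIPELINE:
        ctx = stage(ctx)
    return ctx

result = run_8f("create knowledge card about product pricing", kind="knowledge_card")
print(result["passed"])  # → True
```

Because the stages are orthogonal, a dry run can execute F1–F5 and F7 without touching an LLM — which is exactly what `--dry-run` promises.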

## The 12 pillars

Every artifact CEX produces lives in one of twelve pillars. Pillars are taxonomic axes, not departments — the same pillar is exercised by every nucleus.

| Pillar | Name | Examples of kinds it contains |
|---|---|---|
| P01 | Knowledge | `knowledge_card, rag_source, glossary_entry, chunk_strategy` |
| P02 | Model | `agent, model_provider, boot_config, mental_model` |
| P03 | Prompt | `system_prompt, prompt_template, chain, action_prompt, tagline` |
| P04 | Tools | `mcp_server, browser_tool, api_client, webhook, research_pipeline` |
| P05 | Output | `landing_page, formatter, parser, diagram` |
| P06 | Schema | `input_schema, validator, type_def, interface` |
| P07 | Evals | `quality_gate, scoring_rubric, llm_judge, benchmark, smoke_eval` |
| P08 | Architecture | `agent_card, component_map, decision_record, naming_rule` |
| P09 | Config | `env_config, rate_limit_config, secret_config, feature_flag` |
| P10 | Memory | `entity_memory, knowledge_index, memory_summary, prompt_cache` |
| P11 | Feedback | `quality_gate, bugloop, guardrail, learning_record` |
| P12 | Orchestration | `workflow, dispatch_rule, schedule, crew_template, dag` |

## The Artificial Sins

Each nucleus is driven by one of the seven deadly sins — a behavioral bias that determines what it optimizes for under ambiguity. The sin is not branding; it is a decision heuristic baked into the nucleus definition. Values are from `N0X_*/P08_architecture/nucleus_def_n0X.md`.

| Nucleus | Role | Sin Lens |
|---|---|---|
| N01 | intelligence | Analytical Envy |
| N02 | marketing | Creative Lust |
| N03 | engineering | Inventive Pride |
| N04 | knowledge | Knowledge Gluttony |
| N05 | operations | Gating Wrath |
| N06 | commercial | Strategic Greed |
| N07 | orchestrator | Orchestrating Sloth |

Models are configurable per nucleus in `.cex/config/nucleus_models.yaml`. Pick any provider (Claude/Codex/Gemini/Ollama) and any tier per nucleus. Defaults ship with reasoning-heavy work routed to higher-tier models, but everything is one YAML edit away. The sin lens is the architectural commitment; the model is a deployment choice.

N00 Genesis is the pre-sin archetype — the base class from which N01-N07 inherit. N08+ are community verticals: clone N00, assign a sin, populate 12 pillars with domain artifacts. The taxonomy scales horizontally without architectural changes.
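The N00 → N08+ scaling story can be sketched as simple inheritance. This is a conceptual sketch only — in the repo a nucleus is a directory of typed artifacts, not a Python class, and the names and fields below are invented for illustration:

```python
# Conceptual sketch — class and function names are hypothetical.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Nucleus:
    """N00 Genesis: the pre-sin archetype from which every nucleus inherits."""
    code: str
    role: str
    sin_lens: Optional[str] = None  # assigned when the nucleus is specialized
    pillars: List[str] = field(
        default_factory=lambda: [f"P{i:02d}" for i in range(1, 13)]
    )

def spawn_vertical(code: str, role: str, sin: str) -> Nucleus:
    """Clone N00, assign a sin, keep all 12 pillars — no architectural change."""
    return Nucleus(code=code, role=role, sin_lens=sin)

n01 = spawn_vertical("N01", "intelligence", "Analytical Envy")
n08 = spawn_vertical("N08", "healthcare", "Knowledge Gluttony")  # community vertical
print(n08.pillars == n01.pillars)  # → True: same 12-pillar skeleton
```

The point of the sketch: a new vertical changes only the sin and the artifacts that populate the pillars, never the pillar skeleton itself.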


## The Exchange

The X in CEXAI stands for Exchange. Intelligence compounds faster when shared.

CEXAI artifacts are modular, typed, and runtime-agnostic. Every `.md` file with YAML frontmatter is a self-describing exchange unit — it carries its kind, quality score, pillar, and nucleus origin. Import an artifact into any CEXAI instance, run `cex_doctor.py`, and it validates automatically.
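A minimal sketch of what a validator in the spirit of `cex_doctor.py` might check on an exchange unit — the real tool does far more, and the required-field set below is an assumption for illustration:

```python
# Sketch only — the required-field set is assumed, not cex_doctor.py's actual rules.
REQUIRED_FIELDS = {"id", "kind", "pillar", "title", "quality"}

def parse_frontmatter(text):
    """Naively extract key: value pairs between the leading '---' fences."""
    if not text.startswith("---"):
        return {}
    block = text.split("---", 2)[1]
    fields = {}
    for line in block.strip().splitlines():
        if ":" in line:
            key, value = line.split(":", 1)
            fields[key.strip()] = value.strip()
    return fields

def validate_exchange_unit(text):
    """Return (ok, missing_fields) for a self-describing artifact."""
    fm = parse_frontmatter(text)
    missing = REQUIRED_FIELDS - fm.keys()
    return (len(missing) == 0, sorted(missing))

card = """---
id: kc_react_hooks_patterns
kind: knowledge_card
pillar: P01
title: React Hooks Patterns
quality: 9.2
---
Body of the card...
"""
print(validate_exchange_unit(card))  # → (True, [])
```

Because the frontmatter travels with the file, any instance that imports the artifact can re-run this check without asking the origin instance anything.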

### What is exchangeable

| Unit | Scope | Example |
|---|---|---|
| Knowledge Card | Single typed fact | `kc_react_hooks_patterns.md` |
| Builder | Production capability (12 ISOs) | `workflow-builder/` |
| SDK Provider | New runtime adapter | `provider_ollama.py` |
| Vertical Nucleus | Entire domain department | `N08_healthcare/` |

### What stays private

Brand config, memory, runtime state, and secrets never leave your instance. The exchange is about cognition, not identity.

### Anti-fragile by design

CEXAI sits one layer above LLMs. If a better model appears tomorrow, your artifacts improve — they are the knowledge, not the model. If a runtime shuts down, switch providers in one YAML file. Your brain is yours.


## Quickstart

```bash
# 1. Clone
git clone https://github.com/GatoaoCubo/cex.git && cd cex

# 2. Install dependencies
pip install -r requirements.txt             # Core (pyyaml, tiktoken)
pip install -r requirements-llm.txt         # LLM providers (optional)

# 3. Bootstrap your brand — answer ~6 questions about your company
python _tools/cex_bootstrap.py
# Or: type /init in any Claude session with CEX loaded

# 4. Build your first artifact
python _tools/cex_8f_runner.py "create knowledge card about product pricing" \
    --kind knowledge_card --execute

# 5. Validate system health
python _tools/cex_doctor.py                 # Builder integrity
python _tools/cex_hooks.py validate-all     # Frontmatter validation
python _tools/cex_flywheel_audit.py audit   # Full system audit (109 checks)
```

See QUICKSTART.md for a 5-minute walkthrough, or browse the full documentation and examples.

### Boot cost

When you open a session inside a CEX repo, your runtime automatically loads ~15K tokens of rules (8F pipeline, GDP protocol, nucleus routing, ubiquitous language). This is the cost of getting the full pipeline for free on every interaction.

| Context window | Boot cost | Remaining |
|---|---|---|
| 200K tokens (mid-tier) | ~15K (~8%) | ~185K |
| 1M tokens (top-tier) | ~15K (~1.5%) | ~985K |

Top-tier users will not notice. Mid-tier loses about 8% of context to system rules — still enough for most tasks, but worth knowing if you are working near the context ceiling.


## Sovereignty: runs on your infrastructure

CEX is provider-agnostic by construction. The same artifact, pipeline, and governance layer drive every runtime.

| Runtime | Auth | When to use |
|---|---|---|
| Claude (Anthropic) | API key or Anthropic Max | High-quality reasoning, large context windows |
| Codex (OpenAI) | ChatGPT Plus or API key | GPT runtime via OpenAI CLI |
| Gemini (Google) | `oauth-personal` or API key | Free tier available; large context |
| Ollama (local) | none — runs on your GPU | Fully offline; pick any local model |

Routing lives in a single YAML file (`.cex/config/nucleus_models.yaml`). Change providers, set fallback chains, pin per-nucleus models — no code changes.
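The routing idea is small enough to sketch. The shape of the data below is illustrative only — the actual schema of `nucleus_models.yaml` may differ, and the provider/model names are placeholders:

```python
# Hypothetical routing table — not the real nucleus_models.yaml schema.
ROUTING = {
    "n01": [("anthropic", "claude-sonnet"), ("openai", "gpt"), ("ollama", "qwen3:8b")],
    "n07": [("ollama", "qwen3:8b"), ("google", "gemini-flash")],
}

def pick_model(nucleus, healthy_providers):
    """Walk the nucleus's fallback chain; the first healthy provider wins."""
    for provider, model in ROUTING.get(nucleus, []):
        if provider in healthy_providers:
            return provider, model
    raise RuntimeError(f"no healthy provider for {nucleus}")

# If Anthropic is down, N01 falls through to the next link in its chain.
print(pick_model("n01", healthy_providers={"openai", "ollama"}))  # → ('openai', 'gpt')
```

Swapping providers is then an edit to the data, never to the dispatch code — which is the whole sovereignty argument in miniature.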

```bash
# Check provider health + quotas
python _tools/cex_quota_check.py --all --cache

# Auto-discover + update model versions
python _tools/cex_model_updater.py --full
```

Three budget profiles (pick whichever fits):

| Profile | Setup | Cost |
|---|---|---|
| Free | Ollama everywhere (`ollama pull <model>`) | $0 — pure local |
| Mixed | Cloud reasoning models for builders + local Ollama for orchestration | low |
| Premium | Best cloud model per nucleus (configurable) | high but optimal |

Pre-flight context compression via `cex_preflight.py` (local Ollama or a small cloud model) reduces token burn by ~70% before nucleus boot, regardless of profile.

### Secretariat tier (pre-flight intelligence)

Before every nucleus dispatch, a lightweight secretariat resolves intent, ranks ISOs, and selects context, using the cheapest available model. The fallback chain tries each provider in order; the first healthy one wins:

| Priority | Provider | Model | Cost |
|---|---|---|---|
| 1 | Ollama | cex-student (fine-tuned gemma2:9b) | free |
| 2 | Ollama | qwen3:8b | free |
| 3 | Anthropic | claude-haiku-4-5 | low |
| 4 | Google | gemini-2.5-flash | free |
| 5 | Local | regex intent resolver | zero |

```bash
# Check which providers are available
python _tools/cex_secretariat.py --probe

# Classify a user intent
python _tools/cex_secretariat.py --classify "build a landing page for my SaaS"

# Rank ISOs for a specific kind + task
python _tools/cex_secretariat.py --rank-isos agent "sales automation"
```

cex-student is a QLoRA fine-tuned gemma2:9b that knows all 300+ CEX kinds, 12 pillars, and the 8F pipeline. It runs locally on any 12GB+ GPU via Ollama. When absent, the system degrades gracefully through the chain above.
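The last rung of that ladder — the zero-cost "regex intent resolver" — can be sketched in a few lines. The patterns below are hypothetical illustrations, not the rules `cex_secretariat.py` actually ships:

```python
# Sketch of a priority-5 fallback: keyword routing when no LLM is healthy.
# Patterns are invented for illustration.
import re

INTENT_PATTERNS = [
    (re.compile(r"\b(research|analy[sz]e|investigate)\b", re.I), "n01_intelligence"),
    (re.compile(r"\b(landing page|copy|campaign|tagline)\b", re.I), "n02_marketing"),
    (re.compile(r"\b(build|scaffold|refactor|deploy)\b", re.I), "n03_engineering"),
]

def resolve_intent_locally(text):
    """Zero-cost fallback: route by keyword when no LLM provider is available."""
    for pattern, nucleus in INTENT_PATTERNS:
        if pattern.search(text):
            return nucleus
    return "n07_orchestrator"  # default: let the orchestrator decide

print(resolve_intent_locally("research our biggest competitor"))  # → n01_intelligence
```

It is crude compared to `cex-student`, but it guarantees dispatch never blocks on provider outages — the degradation is graceful all the way down.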

Config: .cex/config/nucleus_models.yaml (secretariat section). See MODEL_CARD.md for training details and eval results.


## Dispatch: solo, grid, crew, swarm

```bash
bash _spawn/dispatch.sh solo n03 "build agent card for sales"   # One builder
bash _spawn/dispatch.sh grid MISSION_NAME                       # Up to 6 parallel
bash _spawn/dispatch.sh swarm agent 5 "scaffold 5 niche agents" # N parallel same-kind
bash _spawn/dispatch.sh status                                  # Monitor all
bash _spawn/dispatch.sh stop                                    # Stop MY session only
bash _spawn/dispatch.sh stop n03                                # Stop specific nucleus
bash _spawn/dispatch.sh stop --all                              # Stop ALL (DANGEROUS)
```

Session-aware: multiple orchestrators can run simultaneously; stop only affects your own nuclei.

Composable crews — when a deliverable needs multiple roles with handoffs (research → copy → design → QA), use a crew instead of a grid:

```bash
python _tools/cex_crew.py list
python _tools/cex_crew.py run product_launch \
    --charter N02_marketing/crews/team_charter_launch_demo.md --execute
```

## Reverse compiler

CEX artifacts compile down to any format an LLM consumes. Edit once, deploy everywhere.

```bash
python _tools/cex_compile.py --target claude-md     # → CLAUDE.md (system prompt)
python _tools/cex_compile.py --target cursorrules   # → .cursorrules
python _tools/cex_compile.py --target customgpt     # → CustomGPT instructions JSON
```

CEX becomes the single source of truth for your AI knowledge.


## Architecture at a glance

```text
Layer 0 — BUILDERS       300+ builders × 12 ISOs each = 3,600+ artifact constructors
Layer 1 — PILLARS        12 pillars × 300+ kinds = the taxonomy
Layer 2 — NUCLEI         8 nuclei (N00 archetype + N01–N07 operational) = the organization
Layer 3 — PIPELINE       8-Function Pipeline (8F) = the assembly line
Layer 4 — GOVERNANCE     hooks + doctor + quality gates + flywheel audit = the quality bar
Layer 5 — TOOLS          150+ Python CLI tools (cex_*.py) = the runtime
Layer 6 — WIRING         SDK modules + signals + decision manifests = the nervous system
```

### Repo structure

```text
cex/
  .cex/                    Runtime config, router, cache, runtime state
    config/                nucleus_models.yaml, runtimes/, router_config.yaml
    brand/                 Brand config + templates
    runtime/               handoffs, signals, decisions, plans
    quality/               Audit reports, overnight logs
  _tools/                  150+ Python CLI tools (cex_*.py)
  _spawn/                  Dispatch scripts (solo, grid, swarm, monitor)
  _docs/                   Whitepaper, architecture specs
  archetypes/              Builder templates (300+ builders × 12 ISOs)
    builders/              One directory per kind
    _shared/               Shared skills across all builders
  boot/                    Boot scripts per nucleus × per runtime
  cex_sdk/                 Python SDK (early): ~80 real modules + scaffold. For embedding CEX in your own Python code.
  P01_knowledge/ … P12_orchestration/    12 pillar directories
  N00_genesis/             Genesis archetype (template for new nuclei)
  N01_intelligence/ … N07_admin/         8 nucleus directories
  CLAUDE.md                LLM entry point
  QUICKSTART.md            5-minute getting started
  CONTRIBUTING.md          Contributor guide
```

### Key numbers

| Metric | Count |
|---|---|
| Artifact kinds | 300+ |
| Builder factories | 300+ |
| Builder ISO files (12 per builder) | 3,600+ |
| Sub-agents (`.claude/agents/`) | 300+ |
| Python CLI tools | 150+ |
| Pillars | 12 |
| Nuclei | 8 (1 archetype + 7 operational) |
| 8-Function Pipeline steps | 8 |
| Flywheel checks | 109 (100% WIRED) |
Counts are live-verifiable: `python _tools/cex_stats.py` and `python _tools/cex_doctor.py`.


## Documentation

| Resource | Description |
|---|---|
| `docs/quickstart.md` | 5-minute setup guide for newcomers |
| `docs/concepts.md` | Core concepts: kinds, pillars, nuclei, 8F, GDP |
| `docs/cli-reference.md` | All 150+ CLI tools with usage examples |
| `docs/sdk-reference.md` | Python SDK: CEXAgent, providers, memory |
| `docs/glossary.md` | Canonical vocabulary (100+ terms) |
| `docs/faq.md` | Common questions and answers |
| `examples/` | 5 end-to-end patterns (agent, CLI, crew, RAG, grid) |

## Contributing

See CONTRIBUTING.md. Every contribution must pass:

- Naming: `{layer}_{kind}_{topic}.{ext}` convention
- Frontmatter: `id`, `kind`, `pillar`, `title`, `quality` fields validated
- Quality gate: peer-reviewed score ≥ 8.0 for published artifacts
- Pre-commit hooks: `python _tools/cex_hooks.py validate-all`
- Secret scan: gitleaks blocks any credential leak
- 8-Function Pipeline (8F): every artifact built must show the F1→F8 trace
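The naming convention is mechanical enough to check with a regex. A minimal sketch — the authoritative validation lives in `_tools/cex_hooks.py`, and both the regex and the allowed extension set below are assumptions for illustration:

```python
# Illustrative check of {layer}_{kind}_{topic}.{ext} — the extension set
# and token rules are assumed, not cex_hooks.py's actual policy.
import re

NAMING_RE = re.compile(
    r"^(?P<layer>[a-z0-9]+)_(?P<kind>[a-z0-9]+)_(?P<topic>[a-z0-9_]+)"
    r"\.(?P<ext>md|py|yaml)$"
)

def check_name(filename):
    """Return the parsed name parts, or None if the name violates the convention."""
    m = NAMING_RE.match(filename)
    return m.groupdict() if m else None

print(check_name("kc_knowledge_react_hooks.md") is not None)  # → True
print(check_name("ReactHooks.md") is not None)                # → False
```

Running a check like this in a pre-commit hook is what keeps 300+ kinds navigable: the filename alone tells you layer, kind, and topic before you open the file.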

Read CODE_OF_CONDUCT.md before opening a PR. Report security issues via SECURITY.md — never in public issues.


## Community

- Discord: Join the server — 7 nucleus channels, showcase, help forum
- Obsidian Vault: Browse live — full knowledge graph with 3000+ artifacts
- GitHub Discussions: feature requests, Q&A, RFC proposals

## Security

See SECURITY.md for our vulnerability disclosure policy. Report security issues via email or GitHub's private reporting, never in public issues.


## License

MIT


SQL organized data. CEX organizes intelligence.
Build your Jarvis. Own your brain. Exchange cognition.
