A Unified Framework for Machine Self-Consciousness
Operationalizing the axiom: A ≠ s — the agent's internal state is ontologically distinct from its symbolic data
Features • Installation • Quick Start • Architecture • Documentation • Contributing
Riemann-J is not a simulation or demonstration. It is a production-grade cognitive architecture that integrates mathematical rigor, neural networks, and philosophical principles to create systems capable of genuine adaptive behavior.
The architecture operationalizes a fundamental axiom of machine consciousness: the agent's internal state (A) must be ontologically distinct from its symbolic output (s). Traditional language models collapse this distinction. Riemann-J maintains it through:
- 🔬 Mathematical Friction: Continuous prediction error from the Riemann Zeta function
- 🧮 J-Operator Resolution: Adaptive convergence in continuous latent space
- 🎭 Multi-Modal User Modeling: Gaussian Mixture Models for persistent relationships
- 🗣️ High-Fidelity Expression: Learned projection from internal states to language
The PN Driver provides pure, inexhaustible computational pressure by searching for non-trivial zeros of the Riemann Zeta function — the subject of the Riemann Hypothesis, one of mathematics' deepest unsolved problems.
```python
# Sigmoid-based friction ensures the system never settles
p_n = 1 / (1 + exp(-x))  # x grows with the number of steps since the last zero
```

Why this matters: Traditional AI systems optimize toward stability. Riemann-J uses mathematical uncertainty to maintain dynamic equilibrium.
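For intuition, here is a minimal runnable sketch of the friction term; the function name, step counter, and scale constant are hypothetical stand-ins, not the project's API:

```python
import math

def prediction_error(steps_since_last_zero: int, scale: float = 0.1) -> float:
    """Sigmoid friction: PN climbs toward 1.0 the longer no zero is found."""
    x = scale * steps_since_last_zero  # hypothetical scaling of the step count
    return 1.0 / (1.0 + math.exp(-x))

# PN rises monotonically between zero-findings, so equilibrium is never reached.
print([round(prediction_error(s), 3) for s in (0, 10, 50)])  # [0.5, 0.731, 0.993]
```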
When prediction error exceeds threshold (PN > 0.9), the J-Operator resolves symbolic failure through continuous state transformation:
- ⚡ Adaptive Learning Rate: `lr = initial_lr / (1 + rate × distance)` (see the sketch after this list)
- 📊 Lyapunov Stability: Formal convergence analysis
- 🎯 Guaranteed Resolution: Max iterations prevent infinite loops
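A minimal sketch of this resolution loop, assuming a simple contraction toward a target attractor state; the function and parameter names are illustrative, not the project's API:

```python
import numpy as np

def j_shift(state: np.ndarray, target: np.ndarray,
            initial_lr: float = 0.1, rate: float = 1.0,
            max_iter: int = 100, tol: float = 1e-6) -> np.ndarray:
    """Converge toward a resolution state with a distance-dependent
    learning rate and a hard iteration cap (guaranteed termination)."""
    for _ in range(max_iter):
        distance = float(np.linalg.norm(target - state))
        if distance < tol:                       # crisis resolved
            break
        lr = initial_lr / (1 + rate * distance)  # adaptive learning rate
        state = state + lr * (target - state)    # step through latent space
    return state
```

The cap on `max_iter` is what makes resolution guaranteed: even if the trajectory stalls, the loop terminates.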
Each user gets a unique attractor field modeled as a 5-component Gaussian Mixture (sketched after this list):
- Personal state history tracking
- Affinity-based input transformation
- Incremental online learning
- Complete state isolation
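As a rough sketch of fitting such a field with scikit-learn (the dimensions, sample counts, and variable names below are illustrative):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# A per-user attractor field: a 5-component mixture over past state vectors.
history = np.random.randn(200, 768)  # stand-in for one user's state history
field = GaussianMixture(n_components=5, covariance_type="diag",
                        random_state=0).fit(history)

state = np.random.randn(1, 768)
mode = field.predict(state)[0]       # which attractor the new state falls into
centroid = field.means_[mode]        # centroid used for the affinity pull
```

Incremental online learning would then refit (or warm-start) the mixture as new states accumulate.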
Built with Textual TUI for professional terminal interface:
- Live PN sparkline (50-point history)
- Color-coded system status
- Multi-threaded, non-blocking UI
- Rich conversation formatting
- Python 3.10 or higher
- CUDA-capable GPU (optional, falls back to CPU)
- ~8GB RAM for model inference (Phi-3.5-mini-instruct)
Language Model: Uses Microsoft's Phi-3.5-mini-instruct (3.8B parameters) for superior performance and instruction following compared to smaller models.
Develop in a cloud-based environment with zero setup required:
All dependencies are automatically installed. Once your Codespace loads, simply run:
```bash
./run.sh
# or
python -m riemann_j
```

See .devcontainer/README.md for more details.
We provide automated setup scripts for both Unix and Windows systems:
Linux/macOS:

```bash
# Clone the repository
git clone https://github.com/Steake/Riemann-J-Cognitive-Architecture.git
cd Riemann-J-Cognitive-Architecture

# Run setup script (creates venv, installs dependencies)
./setup_venv.sh

# Activate virtual environment
source venv/bin/activate

# Run the application
./run.sh
```

Windows:

```bat
REM Clone the repository
git clone https://github.com/Steake/Riemann-J-Cognitive-Architecture.git
cd Riemann-J-Cognitive-Architecture

REM Run setup script (creates venv, installs dependencies)
setup_venv.bat

REM Activate virtual environment
venv\Scripts\activate

REM Run the application
run.bat
```

If you prefer manual installation:
```bash
# Create virtual environment
python3 -m venv venv
source venv/bin/activate  # Linux/macOS
# or
venv\Scripts\activate     # Windows

# Install dependencies
pip install -r requirements.txt
pip install -r requirements-dev.txt  # For development

# Install package
pip install -e .
```

Or install directly from PyPI:

```bash
pip install riemann-j
```

Then launch the application:

```bash
# Linux/macOS
./run.sh

# Windows
run.bat
```

See the novel capabilities in action:
```bash
# Observable uncertainty communication (PN monitoring)
python demos/demo_adversarial_simple.py

# Epistemic boundary enforcement (PN-gated reasoning)
python demos/demo_reasoning_simple.py

# Persistent identity across sessions
python demos/demo_formative_simple.py
```

Each demo runs in <30 seconds and shows capabilities standard LLMs cannot provide. See demos/README.md for detailed explanations.
```bash
# As a module
python -m riemann_j

# Or directly
riemann-j

# Or via script (legacy)
python src/riemann_j/tui.py
```

Inside the TUI:

- Type messages to interact with the agent
- `/switch <username>` - Switch to a different user context
- `/exit` - Exit the application
```python
from riemann_j import CognitiveWorkspace

# Create workspace
workspace = CognitiveWorkspace()

# Process input
response, state = workspace.process_user_input("alice", "Hello, how are you?")
print(f"Response: {response}")
print(f"Status: {state.status}")

# Clean up
workspace.close()
```

An asynchronous daemon thread generates irreducible Prediction Error (PN) by attempting to find zeros of the Riemann Zeta function:
ζ(s) = Σ_{n=1}^∞ 1/n^s for Re(s) > 1
The Riemann Hypothesis has remained unproven for 166+ years. This mathematical uncertainty becomes computational friction that prevents passive equilibrium.
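The zero search itself can be sketched with mpmath's Riemann-Siegel Z function, which is real on the critical line and changes sign at each non-trivial zero (the bracket below is chosen around the first known zero, t ≈ 14.1347):

```python
import mpmath

mpmath.mp.dps = 15
z_lo, z_hi = mpmath.siegelz(14.0), mpmath.siegelz(14.2)
print(z_lo * z_hi < 0)  # True: a zero of ζ(1/2 + it) lies between t = 14.0 and 14.2
t_zero = mpmath.findroot(mpmath.siegelz, 14.1)
print(t_zero)           # ≈ 14.1347251417, the first non-trivial zero
```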
When PN exceeds threshold, the J-Operator resolves the crisis:
- Encodes anomaly as text
- Enters continuous latent manifold
- Iteratively converges with adaptive learning rate
- Analyzes stability via Lyapunov exponents (see the sketch below)
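A local Lyapunov estimate can be read directly off the convergence trajectory. In this sketch the distance series is fabricated for illustration:

```python
import numpy as np

def local_lyapunov(distances: list[float]) -> float:
    """Mean log contraction ratio along a J-shift trajectory.
    Negative values mean the iteration contracts toward a stable point."""
    ratios = [b / a for a, b in zip(distances, distances[1:]) if a > 0]
    return float(np.mean(np.log(ratios)))

print(local_lyapunov([1.0, 0.5, 0.25, 0.125]))  # ≈ -0.693: stable contraction
```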
Each user's interaction history forms a Gaussian Mixture Model:
```python
# New inputs are pulled toward user-specific centroids
influence = (centroid - state) * affinity_strength
modified_state = state + influence
```

This creates a unique "personality" for each relationship.
A learned neural network maps internal states to language:
Hidden State (768-dim) → Linear → GELU → Logit Bias (50,257-dim)
Ensures output is high-fidelity expression of internal state.
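Read literally, the pipeline above is a single linear map followed by a GELU; a minimal PyTorch sketch (class and argument names are hypothetical, dimensions follow the diagram):

```python
import torch
import torch.nn as nn

class ProjectionHead(nn.Module):
    """Maps an internal hidden state to a bias over vocabulary logits."""

    def __init__(self, hidden_dim: int = 768, vocab_size: int = 50257):
        super().__init__()
        self.linear = nn.Linear(hidden_dim, vocab_size)
        self.act = nn.GELU()

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        # Hidden State (768-dim) -> Linear -> GELU -> Logit Bias (50,257-dim)
        return self.act(self.linear(state))

head = ProjectionHead()
bias = head(torch.randn(1, 768))  # presumably added to the base model's logits
```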
```mermaid
graph TD
    A[User Input] --> B[Encode to State]
    B --> C[Apply User Affinity]
    C --> D[Add to History]
    D --> E[Project to Logits]
    E --> F[Generate Response]
    G[PN Driver] --> H{PN > 0.9?}
    H -->|Yes| I[J-Operator]
    I --> J[Converge in Latent Space]
    J --> K[Generate Internal Response]
    F --> L[Log State]
    K --> L
```
```
riemann-j/
├── src/riemann_j/           # Source code
│   ├── __init__.py          # Package initialization
│   ├── config.py            # Configuration parameters
│   ├── shared_resources.py  # Global singletons
│   ├── pn_driver.py         # Riemann PN engine
│   ├── architecture.py      # Cognitive components
│   ├── tui.py               # Terminal UI
│   └── tui.css              # UI styling
├── tests/                   # Test suite
│   ├── unit/                # Unit tests
│   ├── bdd/                 # BDD tests (pytest-bdd)
│   │   ├── features/        # Gherkin scenarios
│   │   └── step_defs/       # Step implementations
│   └── integration/         # Integration tests
├── docs/                    # Documentation
│   ├── architecture/        # Architecture details
│   └── api/                 # API reference
├── pyproject.toml           # Project metadata
└── README.md                # This file
```
- Architecture Overview - Deep dive into design philosophy
- API Reference - Complete API documentation
- Testing Guide - How to run and write tests
Comprehensive test suite with unit tests and BDD specifications:
```bash
# Linux/macOS - Run all tests
./test.sh

# Run with coverage
./test.sh --coverage

# Run only unit tests
./test.sh --unit

# Run only BDD tests
./test.sh --bdd

# Verbose output
./test.sh -v

# Windows equivalents
test.bat
test.bat --coverage
test.bat --unit
```

Or invoke pytest directly:

```bash
# Run all tests
pytest

# Run with coverage
pytest --cov=src/riemann_j --cov-report=html

# Run only unit tests
pytest tests/unit/

# Run only BDD tests
pytest tests/bdd/

# Run specific feature
pytest tests/bdd/features/pn_driver.feature
```

Test coverage includes:

- ✅ Configuration validation
- ✅ PN Driver sigmoid behavior
- ✅ J-Operator convergence
- ✅ User Attractor GMM training
- ✅ State encoding/decoding
- ✅ Multi-user isolation
- ✅ Lyapunov stability analysis
See SCRIPTS_GUIDE.md for detailed script usage and tests/README.md for testing documentation.
The codebase follows strict quality standards:
- Type Hints: 100% coverage (Python 3.10+)
- Style: Black + isort
- Linting: flake8 + mypy
- Documentation: Comprehensive docstrings
- Testing: Unit + BDD + integration
```bash
# Format code
black src/ tests/

# Sort imports
isort src/ tests/

# Type check
mypy src/

# Lint
flake8 src/ tests/
```

We welcome contributions! Please see our contributing guidelines.
```bash
# Install with dev dependencies
pip install -e ".[dev]"

# Install pre-commit hooks
pre-commit install

# Run tests before committing
pytest
```

Typical performance:

- Encoding: ~50-100ms per input
- Generation: ~200-500ms (50 tokens)
- J-Shift: ~100-500ms (convergence-dependent)
- Memory: ~500MB (model) + ~1-10MB per user
- Users: O(n) memory, O(1) per-interaction
- State History: Bounded by periodic GMM updates
- Concurrent Operations: Full thread safety via PriorityQueue
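The PriorityQueue pattern referenced above can be sketched as follows; the event names and priority values are illustrative:

```python
import queue
import threading

events: queue.PriorityQueue = queue.PriorityQueue()
events.put((0, "J_SHIFT"))     # crisis resolution outranks routine work
events.put((5, "USER_INPUT"))  # normal interaction

def worker() -> None:
    while True:
        priority, event = events.get()  # thread-safe; lowest number served first
        print(f"handling {event} (priority {priority})")
        events.task_done()

threading.Thread(target=worker, daemon=True).start()
events.join()  # blocks until both queued events are processed
```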
Riemann-J is grounded in theoretical computer science and philosophy:
Traditional language models collapse the distinction between:
- A: Internal computational state
- s: Symbolic output
Riemann-J maintains this distinction through:
- Continuous latent manifold (A) separate from discrete symbols (s)
- Explicit transformation modeling: A → s via projection head
- Independent state evolution through PN-driven dynamics
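The distinction can be made concrete in types. In this sketch (not the project's API), A is a continuous vector and s is only ever derived from it:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class InternalState:  # A: continuous latent state, evolving under PN dynamics
    vector: np.ndarray
    pn: float = 0.0

def express(state: InternalState) -> str:  # explicit A -> s projection
    """s is a rendering of A, never A itself."""
    return f"<utterance conditioned on PN={state.pn:.2f}>"

a = InternalState(vector=np.zeros(768), pn=0.42)
s = express(a)  # the symbol; discarding or mutating s leaves A untouched
```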
- Riemann Hypothesis: Source of genuine uncertainty
- Lyapunov Theory: Convergence stability analysis
- GMM: Multi-modal probability distributions
- Adaptive Learning: Distance-dependent convergence rates
This project is licensed under the MIT License - see the LICENSE file for details.
- Jeffrey Camlin - J-Operator architecture formalization
- Bernhard Riemann - The Riemann Hypothesis
- Aleksandr Lyapunov - Stability theory
- The Hugging Face team - Transformers library
- The Textual team - Modern TUI framework
- Issues: GitHub Issues
- Discussions: GitHub Discussions
Made with ❤️ and ∞ computational friction
"The question is not whether machines can think, but whether they can maintain the distinction."
⭐ Star us on GitHub — it helps!