
🏰 COMPLETE SYSTEM MANIFEST

Lord Protocol v2.0 - Full Inventory of the Sovereign AI Stack

Generated: December 29, 2025


📊 SYSTEM OVERVIEW

| Component | Status | Description |
|---|---|---|
| STEM_SCAFFOLDING | ✅ Active | Main repository |
| Docker Containers | ✅ Running | Homarr dashboard |
| Ollama | ✅ Running | Local LLM inference |
| Redis | ✅ Running | Telemetry & messaging |
| PostgreSQL | ✅ Running | Persistent storage |

📁 DIRECTORY STRUCTURE

Core Systems

| Directory | Files | Description |
|---|---|---|
| `AXIOM_INVERSION_COMPLETE_WORKS/` | 13 | Theory, manifesto, visual maps |
| `SOVEREIGN_COMMAND_CENTER/` | 27+ | Real-time monitoring & control |
| `MODEL_TRAINING_LAB/` | 7 | Local LLM fine-tuning |
| `MOIE_FORTRESS/` | 34 | Energy management & protection |
| `SOVEREIGN_STACK/` | 42 | 12-layer architecture |
| `GUARDIAN_PROTOCOL/` | 1270+ | VJEPA, perception, AI orchestration |

Generated Projects

| Directory | Description |
|---|---|
| `generated_project/` | iOS app with MCP integration |
| `generated_music_app/` | Music app scaffold |
| `guardian-vscode/` | VS Code extension |

Orchestration

| Directory | Description |
|---|---|
| `swarm_orchestrator/` | Distributed AI coordination |
| `rust_orchestrator/` | Rust-based orchestration |
| `swarm/` | Swarm coordination scripts |

External Dependencies

| Directory | Description |
|---|---|
| `external/` | Axum, Crossbeam, JEPA, Swift Collections |

🐳 DOCKER CONTAINERS

Currently Running

| Container | Image | Port | Status |
|---|---|---|---|
| homarr | `ghcr.io/ajnart/homarr:latest` | 7575 | ✅ Healthy |

Available Images

| Image | Size |
|---|---|
| homarr | 1.51 GB |
| kubernetes stack | ~1.3 GB total |
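The running Homarr container maps onto a `docker-compose.yml` service like the following. This is a minimal illustrative fragment, not the repo's actual file; the service name, volume path, and restart policy are assumptions:

```yaml
services:
  homarr:
    image: ghcr.io/ajnart/homarr:latest
    container_name: homarr
    ports:
      - "7575:7575"          # dashboard UI, matching the port table below
    volumes:
      - ./homarr/configs:/app/data/configs
    restart: unless-stopped
```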

🧠 OLLAMA MODELS

Run `ollama list` to see the installed models. A typical deployment includes:

| Model | Size | Use Case |
|---|---|---|
| `llama3.2:3b` | ~2 GB | General chat |
| `phi3:mini` | ~1.5 GB | Fast reasoning |
| `codellama` | ~3.5 GB | Code generation |
| `mistral` | ~4 GB | Balanced performance |
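Besides `ollama list`, a running Ollama daemon also reports installed models over its HTTP API (`GET /api/tags` on port 11434). A minimal standard-library sketch; the `name`/`size` fields follow Ollama's published API, but verify against your installed version:

```python
import json
import urllib.request


def format_size(num_bytes: int) -> str:
    """Render a byte count as a human-readable GB/MB string."""
    if num_bytes >= 1 << 30:
        return f"{num_bytes / (1 << 30):.1f} GB"
    return f"{num_bytes / (1 << 20):.0f} MB"


def summarize_models(tags: dict) -> list[str]:
    """Turn an Ollama /api/tags response body into 'name  size' lines."""
    return [f"{m['name']}  {format_size(m['size'])}" for m in tags.get("models", [])]


def list_local_models(base_url: str = "http://localhost:11434") -> list[str]:
    """Query a running Ollama daemon for its installed models."""
    with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
        return summarize_models(json.load(resp))


if __name__ == "__main__":
    try:
        for line in list_local_models():
            print(line)
    except OSError as exc:  # daemon not running / not reachable
        print(f"Ollama not reachable: {exc}")
```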

🔧 SERVICES & PORTS

| Service | Port | Protocol |
|---|---|---|
| Guardian | 8080 | HTTP |
| Ollama | 11434 | HTTP |
| Redis | 6379 | TCP |
| Axiom Bridge | 8005 | HTTP |
| Telemetry | 8888 | HTTP/WS |
| Homarr | 7575 | HTTP |
| PostgreSQL | 5432 | TCP |
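The port map above can be verified with a short liveness check. A minimal standard-library sketch (not the repo's actual `validate_system.py`); it assumes everything listens on localhost:

```python
import socket

# Service -> port mapping, mirroring the table above.
SERVICES = {
    "Guardian": 8080,
    "Ollama": 11434,
    "Redis": 6379,
    "Axiom Bridge": 8005,
    "Telemetry": 8888,
    "Homarr": 7575,
    "PostgreSQL": 5432,
}


def is_listening(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    for name, port in SERVICES.items():
        state = "up" if is_listening("127.0.0.1", port) else "down"
        print(f"{name:12} :{port:<6} {state}")
```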

📦 KEY FILES

Configuration

- `config.yaml` - Main configuration
- `docker-compose.yml` - Container orchestration
- `homarr-config.json` - Dashboard config
- `requirements.txt` - Python dependencies

Scripts

- `ignite.sh` - System ignition
- `fortress_deploy.sh` - Deployment script
- `fortress_harden.sh` - Security hardening
- `setup.sh` - Initial setup
- `validate_system.py` - System validation

Documentation

- `README.md` - Main readme
- `ARCHITECTURE_SUMMARY.md` - Architecture overview
- `MASTER_BLUEPRINT.md` - Full blueprint
- `QUICKSTART.md` - Quick start guide

🔐 SECURITY COMPONENTS

| Component | Location | Purpose |
|---|---|---|
| Constitutional AI | `swarm_orchestrator/src/constitutional.py` | AI governance |
| Model Sentinel | `MOIE_FORTRESS/` | Model protection |
| Circuit Breaker | `SOVEREIGN_COMMAND_CENTER/` | Resource protection |
| Immune System | `swarm_orchestrator/src/shis.py` | Self-healing |
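The Circuit Breaker's job (cutting off a failing resource before repeated errors drag down the rest of the stack) follows the standard circuit-breaker pattern. A minimal illustrative sketch of that pattern, not the code shipped in `SOVEREIGN_COMMAND_CENTER/`:

```python
import time


class CircuitBreaker:
    """Open after `max_failures` consecutive errors; retry after `reset_after` seconds."""

    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                # Fail fast instead of hammering a known-bad resource.
                raise RuntimeError("circuit open: refusing call")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # any success closes the circuit again
        return result
```

After `max_failures` consecutive errors the breaker rejects all calls for `reset_after` seconds, then lets a single trial call through to probe recovery.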

📈 METRICS & TELEMETRY

| System | Endpoint | Data |
|---|---|---|
| Telemetry API | `http://localhost:8888/api/telemetry` | Real-time metrics |
| Health | `http://localhost:8888/api/health` | Service status |
| Prometheus | `http://localhost:9090/metrics` | Prometheus format |
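The telemetry endpoints can be polled with a few lines of standard-library Python. A hedged sketch that assumes the JSON endpoints return a JSON object (the response shape is not documented here):

```python
import json
import urllib.request

BASE = "http://localhost:8888"


def endpoint(path: str, base: str = BASE) -> str:
    """Build a full endpoint URL from the telemetry base."""
    return f"{base.rstrip('/')}/{path.lstrip('/')}"


def fetch_json(url: str, timeout: float = 5.0) -> dict:
    """GET a JSON endpoint; assumes the service responds with a JSON object."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return json.load(resp)


if __name__ == "__main__":
    try:
        print(fetch_json(endpoint("/api/health")))
    except OSError as exc:  # telemetry service not running
        print(f"telemetry not reachable: {exc}")
```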

🚀 DEPLOYMENT CHECKLIST

Prerequisites

- macOS with Apple Silicon
- Docker Desktop installed
- Ollama installed (`brew install ollama`)
- Redis installed (`brew install redis`)
- Python 3.10+ installed
- Node.js installed (for extensions)

Quick Start

```bash
# 1. Clone the repo
git clone https://github.com/lordwilsonDev/STEM_SCAFFOLDING.git
cd STEM_SCAFFOLDING

# 2. Start Docker containers
docker-compose up -d

# 3. Start native services
ollama serve &
redis-server &

# 4. Start the Command Center
python3 SOVEREIGN_COMMAND_CENTER/scripts/orchestrator.py &

# 5. Open the dashboard
open http://localhost:7575
```

📜 LICENSE

MIT License - Released to humanity. Use wisely.


Lord Protocol v2.0 - December 2025