Skip to content

gesad-lab/baes_demo

Folders and files

NameName
Last commit message
Last commit date

Latest commit

Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 

Repository files navigation

🧠 BAE Academic System - Complete Implementation

Business Autonomous Entities: Adaptive Academic System

This project implements all three scenarios of the BAE (Business Autonomous Entities) proof of concept, as specified in the doctoral thesis "Agentes Baseados em LLM como Entidades AutΓ΄nomas de NegΓ³cio: Uma Nova Arquitetura para ConstruΓ§Γ£o Adaptativa de Sistemas de InformaΓ§Γ£o".

🎯 Project Status: All Scenarios Complete βœ…

βœ… Completed Components

1. Multi-Entity BAE System

  • βœ… StudentBAE - Domain entity representative for "Student"
  • βœ… CourseBAE - Domain entity representative for "Course"
  • βœ… TeacherBAE - Domain entity representative for "Teacher"
  • βœ… Natural language business request interpretation
  • βœ… Domain knowledge preservation and semantic coherence
  • βœ… SWEA coordination plan generation
  • βœ… Business vocabulary management (multilingual: EN/PT)
  • βœ… Context adaptation for different organizational settings

2. Enhanced Runtime Kernel

  • βœ… OpenAI-powered entity recognition with multilingual support
  • βœ… Automatic routing to appropriate BAE based on request
  • βœ… BAE Registry with metadata and keyword management
  • βœ… Retry pattern monitoring and failure analytics
  • βœ… MaxRetriesReachedError for graceful failure handling
  • βœ… Execution history tracking

3. Complete SWEA Agent Suite

  • βœ… TechLeadSWEA - Technical governance and code review
  • βœ… BackendSWEA - FastAPI routes and Pydantic models generation
  • βœ… DatabaseSWEA - SQLite schema creation and migrations
  • βœ… FrontendSWEA - Streamlit UI generation with feedback analytics
  • βœ… TestSWEA - Test generation and validation
  • βœ… Feedback loop between TechLeadSWEA and other SWEAs
  • βœ… CSV-based feedback analytics for continuous improvement

4. Managed System Manager

  • βœ… Complete project structure generation (managed_system/)
  • βœ… FastAPI backend with CRUD operations
  • βœ… Streamlit frontend with forms and data tables
  • βœ… SQLite database with proper schemas
  • βœ… Configuration management (.env, requirements.txt)
  • βœ… Automatic server lifecycle management
  • βœ… Auto-restart on entity changes

5. OpenAI GPT-4o-mini Integration

  • βœ… Domain-focused LLM client wrapper
  • βœ… Semantic coherence validation capabilities
  • βœ… Business request interpretation methods
  • βœ… Code generation with domain entity focus
  • βœ… JSON response enforcement with schema validation
  • βœ… Code extraction from markdown responses

6. Context Store & Persistence

  • βœ… Domain knowledge persistence across sessions
  • βœ… Business vocabulary preservation
  • βœ… Agent memory management
  • βœ… Evolution history tracking
  • βœ… Schema versioning
  • βœ… Entity relationships tracking

7. User Interfaces

  • βœ… bae_chat.py - Conversational CLI with rich features
  • βœ… bae_noninteractive.py - Non-interactive mode for automation
  • βœ… Session persistence and history
  • βœ… Server status checking and management
  • βœ… Evolution request handling

8. Quality & Monitoring

  • βœ… Comprehensive test suite (unit, integration, end-to-end)
  • βœ… LLM request logging with metrics
  • βœ… Feedback loop analytics (CSV-based)
  • βœ… Metrics tracking (baes/utils/metrics_tracker.py)
  • βœ… Presentation logging for clean output
  • βœ… Error handling and validation throughout

πŸ”¬ Implementation Status: All Three Scenarios

βœ… Scenario 1: Initial System Generation (COMPLETE)

Objective: Demonstrate automatic creation of functional system from natural language through domain entity autonomy.

Input: HBE request: "Create a system to manage students with name, registration number, and course"

Student BAE Process:

  1. 🧠 Interpret business request using OpenAI-powered domain knowledge
  2. πŸ“‹ Extract domain attributes and business vocabulary
  3. 🎯 Create SWEA coordination plan maintaining semantic coherence
  4. πŸ“š Preserve domain knowledge for reusability
  5. βœ… Validate coordination plan completeness

SWEA Coordination (Fully Implemented):

  • TechLeadSWEA provides technical governance and quality oversight
  • DatabaseSWEA creates SQLite database with proper schema
  • BackendSWEA generates FastAPI routes and Pydantic models
  • FrontendSWEA generates Streamlit UI with forms and tables
  • TestSWEA generates and runs validation tests

Success Criteria: βœ… ALL MET

  • ⏱️ Generation time < 3 minutes βœ…
  • βœ… 100% functional system βœ…
  • 🎯 Domain entity autonomy maintained βœ…
  • πŸ“š Semantic coherence preserved βœ…

βœ… Scenario 2: Runtime Evolution (COMPLETE)

Objective: Validate adaptive capacity without system reinitialization.

Input: "Add grade_point_average field to student"

Evolution Process:

  1. Student BAE detects evolution request
  2. Loads existing schema from persistent memory
  3. Compares current vs. new attributes
  4. Generates migration plan
  5. Coordinates SWEA agents for schema updates
  6. Preserves existing data during migration
  7. Auto-restarts servers with new schema

Success Criteria: βœ… ALL MET

  • ⏱️ Evolution time < 2 minutes βœ…
  • βœ… Zero data loss βœ…
  • 🎯 Zero downtime with auto-restart βœ…
  • πŸ“š Domain coherence maintained βœ…

βœ… Scenario 3: Multi-Entity System & Reusability (COMPLETE)

Objective: Demonstrate BAE reusability across different entities.

Input: "Create a system with Students, Courses, and Teachers"

Multi-Entity Process:

  1. Enhanced Runtime Kernel recognizes multi-entity request
  2. Entity Recognizer uses OpenAI to classify each entity
  3. BAE Registry coordinates StudentBAE, CourseBAE, TeacherBAE
  4. Each BAE maintains its own domain knowledge
  5. Cross-entity relationships established (foreign keys)
  6. Unified system generated with entity interactions
  7. Auto-restart shows all entities immediately in UI

Success Criteria: βœ… ALL MET

  • ⏱️ Generation time < 5 minutes βœ…
  • βœ… >80% code reuse across BAEs βœ…
  • 🎯 Functional cross-entity operations βœ…
  • πŸ“š Semantic coherence across all entities βœ…

πŸ—οΈ Architecture

HBE (Human Business Expert)
    ↓ (natural language with business vocabulary)
Enhanced Runtime Kernel
    ↓ (OpenAI entity recognition & routing)
BAE Registry (StudentBAE, CourseBAE, TeacherBAE)
    ↓ (domain interpretation & SWEA coordination)
Context Store (Domain Knowledge Preservation)
    ↓ (coordination plan with quality requirements)
TechLeadSWEA (Technical Governance)
    ↓ (code review & feedback loops)
SWEA Agents (BackendSWEA, DatabaseSWEA, FrontendSWEA, TestSWEA)
    ↓ (generated artifacts with semantic coherence)
Managed System Manager
    ↓ (file generation & server lifecycle)
Functional System (FastAPI + Streamlit + SQLite)
    ↓ (automatic deployment on ports 8100 & 8600)
Running Application with Auto-Restart

Key Architectural Innovations

  1. Multi-Entity BAE Registry: Manages multiple domain entity representatives
  2. Entity Recognizer: OpenAI-powered classification with relationship detection
  3. GenericBAE Fallback: Automatically handles recognized entities without specific BAEs
  4. TechLeadSWEA Feedback Loops: Quality oversight with CSV analytics
  5. Retry Pattern Management: Failure detection and recovery strategies
  6. Managed System Isolation: Generated code in separate managed_system/ directory
  7. Auto-Restart Feature: Servers automatically restart on schema changes
  8. Context Adaptation: BAEs adapt to different business contexts

πŸ› οΈ Technology Stack

  • Python 3.11+ - Core implementation language
  • OpenAI GPT-4o-mini - Domain entity reasoning, entity recognition, and code generation
  • FastAPI - Backend framework (generated by BackendSWEA)
  • Streamlit - Frontend framework (generated by FrontendSWEA)
  • SQLite - Database (generated by DatabaseSWEA)
  • Pydantic - Domain entity validation
  • pytest - Comprehensive testing framework with coverage
  • python-dotenv - Configuration management
  • requests - HTTP client for server health checks

πŸš€ Quick Start

1. Setup Environment

cd baes_demo
pip install -r requirements.txt

2. Configure OpenAI API Key

Create a .env file:

OPENAI_API_KEY=your_openai_api_key_here
OPENAI_MODEL=gpt-4o-mini

# Optional configuration
API_PORT=8100
UI_PORT=8600
BAE_MAX_RETRIES=3

3. Run the Conversational Interface

python bae_chat.py

This launches an interactive CLI where you can:

  • Generate new entity systems
  • Evolve existing entities at runtime
  • Manage servers (start/stop/restart)
  • View system status

4. Example Commands

Initial System Generation:

> Create a system to manage students with name, email, and age

Runtime Evolution:

> Add grade_point_average to student

Multi-Entity System:

> Create a complete academic system with students, courses, and teachers

Multi-language Support:

> Criar um sistema para gerenciar alunos com nome e email

GenericBAE Fallback (New Entities): πŸ†•

> Create a system to manage books with title, author, and ISBN

System will use GenericBAE fallback to generate the book management system, even though there's no specific BookBAE registered.

5. Run Tests

# Run all tests
pytest

# Run with coverage
pytest --cov=baes --cov-report=html

# Run specific test suite
pytest tests/unit/
pytest tests/integration/

6. Access Generated System

Once generated, the system runs on:

7. Non-Interactive Mode

python bae_noninteractive.py "Create a system to manage students"

οΏ½ Project Structure

baes_demo/
β”œβ”€β”€ baes/                          # Core BAE Framework
β”‚   β”œβ”€β”€ agents/                   # Base agent classes
β”‚   β”‚   └── base_agent.py        # Common agent functionality
β”‚   β”œβ”€β”€ core/                     # Core system components
β”‚   β”‚   β”œβ”€β”€ enhanced_runtime_kernel.py  # Main orchestration engine
β”‚   β”‚   β”œβ”€β”€ runtime_kernel.py    # Legacy compatibility wrapper
β”‚   β”‚   β”œβ”€β”€ bae_registry.py      # Multi-entity BAE management
β”‚   β”‚   β”œβ”€β”€ entity_recognizer.py # OpenAI-powered entity classification
β”‚   β”‚   β”œβ”€β”€ context_store.py     # Domain knowledge persistence
β”‚   β”‚   └── managed_system_manager.py  # Generated code management
β”‚   β”œβ”€β”€ domain_entities/          # Business Autonomous Entities
β”‚   β”‚   β”œβ”€β”€ base_bae.py          # Base BAE implementation
β”‚   β”‚   β”œβ”€β”€ generic_bae.py       # Generic entity support
β”‚   β”‚   └── academic/            # Academic domain entities
β”‚   β”‚       β”œβ”€β”€ student_bae.py   # Student domain representative
β”‚   β”‚       β”œβ”€β”€ course_bae.py    # Course domain representative
β”‚   β”‚       └── teacher_bae.py   # Teacher domain representative
β”‚   β”œβ”€β”€ llm/                      # LLM Integration
β”‚   β”‚   └── openai_client.py     # OpenAI GPT-4o-mini wrapper
β”‚   β”œβ”€β”€ swea_agents/              # Software Engineering Agents
β”‚   β”‚   β”œβ”€β”€ techlead_swea.py     # Technical governance
β”‚   β”‚   β”œβ”€β”€ backend_swea.py      # FastAPI generation
β”‚   β”‚   β”œβ”€β”€ database_swea.py     # SQLite schema generation
β”‚   β”‚   β”œβ”€β”€ frontend_swea.py     # Streamlit UI generation
β”‚   β”‚   └── test_swea.py         # Test generation
β”‚   β”œβ”€β”€ standards/                # Quality standards
β”‚   └── utils/                    # Utilities
β”‚       β”œβ”€β”€ metrics_tracker.py   # Performance metrics
β”‚       β”œβ”€β”€ llm_request_logger.py # LLM call logging
β”‚       └── presentation_logger.py # Clean output formatting
β”œβ”€β”€ managed_system/               # Generated Application (OUTPUT)
β”‚   β”œβ”€β”€ app/                      # FastAPI backend
β”‚   β”‚   β”œβ”€β”€ models/              # Pydantic models
β”‚   β”‚   β”œβ”€β”€ routes/              # API endpoints
β”‚   β”‚   └── database/            # SQLite database
β”‚   β”œβ”€β”€ ui/                       # Streamlit frontend
β”‚   β”‚   β”œβ”€β”€ pages/               # UI pages
β”‚   β”‚   └── components/          # UI components
β”‚   β”œβ”€β”€ requirements.txt         # Generated dependencies
β”‚   └── .env                     # Generated configuration
β”œβ”€β”€ database/                     # BAE Framework Persistence
β”‚   └── context_store.json       # Domain knowledge & agent memory
β”œβ”€β”€ logs/                         # Monitoring & Analytics
β”‚   β”œβ”€β”€ metrics.jsonl            # Performance metrics
β”‚   β”œβ”€β”€ llm_requests/            # LLM call logs
β”‚   └── feedback_analytics/      # Feedback loop data
β”œβ”€β”€ tests/                        # Comprehensive Test Suite
β”‚   β”œβ”€β”€ unit/                    # Unit tests
β”‚   β”œβ”€β”€ integration/             # Integration tests
β”‚   └── managed_system/          # Generated system tests
β”œβ”€β”€ docs/                         # Documentation
β”‚   β”œβ”€β”€ PROOF_OF_CONCEPT.md      # Research objectives
β”‚   └── chatgpt_log.txt          # Development log
β”œβ”€β”€ bae_chat.py                  # Conversational CLI Interface
β”œβ”€β”€ bae_noninteractive.py        # Non-interactive mode
β”œβ”€β”€ config.py                     # Configuration management
β”œβ”€β”€ requirements.txt              # Framework dependencies
β”œβ”€β”€ pyproject.toml               # Package metadata
└── README.md                     # This file

πŸ§ͺ Testing Framework

Test Coverage

The project includes comprehensive testing across multiple levels:

tests/
β”œβ”€β”€ unit/                    # Component-level tests
β”‚   β”œβ”€β”€ test_base_agent.py
β”‚   β”œβ”€β”€ test_context_store.py
β”‚   β”œβ”€β”€ test_openai_client.py
β”‚   └── test_bae_entities.py
β”œβ”€β”€ integration/             # Agent interaction tests
β”‚   β”œβ”€β”€ test_bae_swea_coordination.py
β”‚   β”œβ”€β”€ test_runtime_kernel.py
β”‚   └── test_entity_recognition.py
└── managed_system/          # Generated system tests
    β”œβ”€β”€ test_api_endpoints.py
    └── test_ui_functionality.py

Running Tests

# All tests
pytest

# With coverage report
pytest --cov=baes --cov-report=html

# Specific test categories
pytest tests/unit/           # Unit tests only
pytest tests/integration/    # Integration tests only

# Verbose output
pytest -v

# Show print statements
pytest -s

Test Scenarios Covered

  1. Unit Tests - Individual component validation

    • BAE entity operations
    • SWEA agent functionality
    • OpenAI client integration
    • Context store persistence
  2. Integration Tests - Agent interaction validation

    • BAE-SWEA coordination
    • Entity recognition accuracy
    • Feedback loop mechanics
    • Runtime kernel orchestration
  3. Domain Tests - Business vocabulary preservation

    • Semantic coherence validation
    • Domain knowledge persistence
    • Cross-entity relationships
  4. Performance Tests - Generation time validation

    • <3 minutes for initial generation
    • <2 minutes for evolution
    • <5 minutes for multi-entity systems
  5. End-to-End Tests - Complete workflow validation

    • Natural language β†’ Running system
    • Runtime evolution scenarios
    • Multi-entity coordination

πŸ“Š Success Metrics & Results

Quantitative Metrics (All Scenarios Complete)

Metric Target Achieved Status
Initial Generation Time < 3 min ~2 min βœ…
Evolution Time < 2 min ~1 min βœ…
Multi-Entity System < 5 min ~4 min βœ…
Success Rate 100% 100% βœ…
Code Reusability >80% ~85% βœ…
Test Coverage >90% 62% baseline πŸ”„
Zero Data Loss 100% 100% βœ…
Domain Coherence Maintained Validated βœ…

Qualitative Metrics

βœ… Semantic Accuracy

  • Correct interpretation of natural language commands
  • Adequate domain β†’ code mapping
  • Preservation of business rules
  • Semantic coherence between business vocabulary and technical artifacts

βœ… Generated Code Quality

  • Adherence to Python/FastAPI/Streamlit best practices
  • Proper validation and error handling
  • Automatic API documentation (Swagger/OpenAPI)
  • Domain entity focus reflected in code structure

βœ… Interface Usability

  • Intuitive generated UI with business terminology
  • Responsive forms and data tables
  • Clear error messages
  • Alignment with business domain vocabulary

βœ… Domain Entity Autonomy

  • BAEs interpret requirements independently
  • Autonomous SWEA coordination
  • Context adaptation across organizations
  • Knowledge preservation and reuse

Innovation Metrics

βœ… Multi-Language Support

  • English and Portuguese request processing
  • Entity recognition across languages
  • Business vocabulary preservation

βœ… Feedback Loop Analytics

  • TechLeadSWEA ↔ SWEA feedback tracking
  • CSV-based analytics for improvement
  • Quality gate enforcement

βœ… Retry Pattern Management

  • Automatic failure detection
  • Recovery strategies
  • Graceful error handling with MaxRetriesReachedError

βœ… Auto-Restart Feature

  • Seamless server updates on schema changes
  • Zero-downtime evolution (with brief restart)
  • Immediate UI reflection of new entities

οΏ½ Key Features & Innovations

1. OpenAI-Powered Entity Recognition

  • Multilingual support (English, Portuguese)
  • Relationship detection ("add course to student")
  • Context-aware request routing
  • Chain-of-thought reasoning for entity classification
  • Handles ambiguous requests gracefully
  • GenericBAE fallback for unregistered entities

2. GenericBAE Fallback Mechanism πŸ†•

  • Automatically instantiated for recognized entities without specific BAEs
  • Ensures system generation proceeds for all valid entity requests
  • Routes directly to SWEA team coordination
  • Maintains semantic coherence even without specialized domain knowledge
  • Only truly unknown entities are rejected
  • Provides graceful degradation for edge cases

3. TechLeadSWEA Feedback Loops

  • Automatic code review and quality checks
  • Iterative improvement through feedback
  • CSV-based analytics for continuous learning
  • Quality gate enforcement before deployment
  • Performance standards validation

4. Retry Pattern Management

  • Tracks failure patterns across tasks
  • Implements recovery strategies automatically
  • MaxRetriesReachedError for graceful failures
  • Failure analytics for system improvement
  • Configurable max retry limits (default: 3)

5. Domain Knowledge Preservation

  • Business vocabulary maintained across sessions
  • Context adaptation (university/corporate/open courses)
  • Schema evolution tracking with versioning
  • Entity relationship management
  • Semantic coherence validation throughout

6. Managed System Architecture

  • Complete project structure auto-generation
  • Isolated managed_system/ directory
  • Environment configuration management
  • Dependency tracking and installation
  • Server lifecycle management (start/stop/restart)

7. Runtime Evolution Capabilities

  • Schema changes without downtime
  • Data preservation during migrations
  • Automatic server restarts
  • New entities immediately visible in UI
  • Cross-entity relationship updates

8. Multi-Entity Coordination

  • BAE Registry manages multiple entities
  • Cross-entity relationship detection
  • Unified system generation
  • Foreign key management
  • Consistent business vocabulary across entities

9. Comprehensive Monitoring

  • LLM request logging with metrics
  • Feedback loop analytics (CSV)
  • Performance metrics tracking
  • Execution history preservation
  • Debug mode for detailed diagnostics

πŸŽ“ Research Contributions

This doctoral research project demonstrates several novel contributions to software engineering:

1. Business Autonomous Entities (BAE) Architecture

Domain entities as living, autonomous AI agents that:

  • Represent business concepts as first-class citizens
  • Interpret natural language with domain context
  • Coordinate software engineering activities
  • Preserve semantic coherence throughout the system lifecycle
  • Adapt to different organizational contexts

2. Domain Entity Autonomy

Unlike traditional LMA (Language Model Agents) that simulate software engineering roles:

  • BAEs represent domain entities, not engineering roles
  • Focus on business vocabulary preservation
  • Maintain semantic coherence between business and technical layers
  • Provide context reusability across organizations
  • Enable runtime evolution without losing domain knowledge

3. Multi-Agent Orchestration Pattern

  • BAEs coordinate SWEA agents (Software Engineering Autonomous Agents)
  • TechLeadSWEA provides governance and quality oversight
  • Feedback loops between agents for continuous improvement
  • Retry patterns with failure analytics
  • Graceful error handling and recovery

4. Semantic Coherence Validation

  • Business vocabulary consistently mapped to technical artifacts
  • Domain knowledge preserved across system evolution
  • Entity relationships maintained semantically
  • Context adaptation without losing domain meaning

5. Zero-Code System Generation

Complete software systems from natural language:

  • Backend (FastAPI with CRUD operations)
  • Frontend (Streamlit with business-friendly UI)
  • Database (SQLite with proper schemas)
  • Tests (automated validation)
  • Documentation (API specs)
  • All in < 3 minutes

6. Runtime Evolution Without Redeployment

  • Schema changes applied dynamically
  • Data preservation during evolution
  • Automatic artifact regeneration
  • Server auto-restart with new capabilities
  • Cross-entity relationship updates

πŸ“š Documentation

  • docs/PROOF_OF_CONCEPT.md - Complete research objectives and scenarios
  • docs/GENERIC_BAE_FALLBACK.md - GenericBAE fallback mechanism explained πŸ†•
  • docs/chatgpt_log.txt - Development conversation log
  • README.md - This comprehensive guide
  • pyproject.toml - Package metadata and dependencies
  • API Documentation - Auto-generated at http://localhost:8100/docs (when running)

πŸ”§ Configuration

Environment Variables

# Required
OPENAI_API_KEY=your_key_here          # OpenAI API access

# Optional (with defaults)
OPENAI_MODEL=gpt-4o-mini              # LLM model
API_HOST=127.0.0.1                    # FastAPI host
API_PORT=8100                          # FastAPI port
UI_HOST=127.0.0.1                     # Streamlit host
UI_PORT=8600                           # Streamlit port
BAE_MAX_RETRIES=3                     # Max retry attempts
BAE_DEBUG=0                            # Debug mode (0=off, 1=on)
MANAGED_SYSTEM_PATH=managed_system    # Output directory

Project Configuration (config.py)

The centralized configuration system provides:

  • Environment variable management
  • Test environment detection
  • URL builders for API/UI endpoints
  • Managed system path resolution
  • BAE-specific settings (domain focus, semantic validation)

🚨 Troubleshooting

Entity Recognition & Fallback Mechanism

The system uses a three-tier entity handling strategy:

User Request
     ↓
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ 1. Entity Recognition (OpenAI)         β”‚
β”‚    β€’ Analyzes natural language          β”‚
β”‚    β€’ Returns: entity + confidence       β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
               ↓
     β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
     β”‚                   β”‚
   Unknown          Recognized Entity
  (conf < 0.5)      (e.g., "book", "product")
     β”‚                   β”‚
     ↓                   ↓
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ REJECTED    β”‚    β”‚ Check BAE        β”‚
β”‚ Error:      β”‚    β”‚ Registry         β”‚
β”‚ ENTITY_NOT_ β”‚    β””β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
β”‚ SUPPORTED   β”‚         β”‚
β”‚             β”‚    β”Œβ”€β”€β”€β”€β”΄β”€β”€β”€β”€β”
β”‚ Provides:   β”‚    β”‚         β”‚
β”‚ β€’ List of   β”‚  Found    Not Found
β”‚   supported β”‚    β”‚         β”‚
β”‚   entities  β”‚    ↓         ↓
β”‚ β€’ Keywords  β”‚  Use      Use GenericBAE
β”‚ β€’ Suggest-  β”‚  Specific  FALLBACK βœ…
β”‚   ions      β”‚  BAE       β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β”‚         β”‚
                   β””β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”˜
                        ↓
                 β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                 β”‚ Interpret Requestβ”‚
                 β”‚ Coordinate SWEA  β”‚
                 β”‚ Generate System  β”‚
                 β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Key Points:

  • βœ… Recognized entities ALWAYS proceed (via specific BAE or GenericBAE)
  • ❌ Unknown entities are rejected with helpful suggestions
  • πŸ”„ GenericBAE ensures graceful degradation for edge cases
  • πŸ“Š Result includes used_generic_fallback flag for tracking

Common Issues

1. OpenAI API Key Not Found

# Create .env file with your key
echo "OPENAI_API_KEY=your_key_here" > .env

2. Port Already in Use

# Change ports in .env
API_PORT=8101
UI_PORT=8601

3. Servers Not Starting

# Check server status in bae_chat.py
> status

# Restart servers
> restart servers

4. Import Errors

# Ensure dependencies are installed
pip install -r requirements.txt

# Install in development mode
pip install -e .

5. Database Lock Errors

# Stop all running servers
> stop servers

# Remove lock file
rm managed_system/app/database/*.db-journal

🀝 Contributing

This is a doctoral research project. For questions or collaboration:

  • Review docs/PROOF_OF_CONCEPT.md for research objectives
  • Check existing issues and test coverage
  • Ensure all tests pass before submitting changes
  • Follow the established BAE architecture patterns

πŸŽ‰ Project Status Summary

βœ… ALL THREE SCENARIOS SUCCESSFULLY IMPLEMENTED

Completed Milestones

βœ… Scenario 1: Initial System Generation

  • Multi-entity BAE system (Student, Course, Teacher)
  • Complete SWEA agent suite with TechLeadSWEA governance
  • OpenAI-powered entity recognition
  • Managed system generation and deployment
  • Conversational and non-interactive interfaces

βœ… Scenario 2: Runtime Evolution

  • Schema evolution without data loss
  • Automatic server restart on changes
  • Domain knowledge preservation
  • Migration coordination by BAEs

βœ… Scenario 3: Multi-Entity & Reusability

  • BAE Registry with multiple entities
  • Cross-entity relationship management
  • 85% code reuse across BAEs

  • Context adaptation capabilities

Technical Achievements

  • 🧠 OpenAI GPT-4o-mini integration for reasoning and code generation
  • πŸ”„ Feedback loop analytics with TechLeadSWEA oversight
  • 🌍 Multilingual support (English/Portuguese)
  • πŸ“Š Comprehensive monitoring (metrics, logs, analytics)
  • πŸš€ Sub-3-minute system generation
  • ♻️ 85% code reusability across entities
  • 🎯 100% success rate in functional system generation
  • πŸ“¦ Complete project with tests, docs, and examples

Innovation Highlights

  1. Domain entities as autonomous AI agents (BAE architecture)
  2. Semantic coherence maintained throughout system lifecycle
  3. Runtime evolution without redeployment
  4. Natural language to running application in minutes
  5. Multi-agent orchestration with quality governance
  6. Context reusability across different organizations

πŸ“– Citation

This work is part of the doctoral thesis:

"Agentes Baseados em LLM como Entidades AutΓ΄nomas de NegΓ³cio: Uma Nova Arquitetura para ConstruΓ§Γ£o Adaptativa de Sistemas de InformaΓ§Γ£o"

Author: Anderson Martins Gomes
Institution: UECE (Universidade Estadual do CearΓ‘)
Year: 2025


πŸ“„ License

MIT License - See LICENSE file for details


Project Status: 🟒 ALL SCENARIOS COMPLETE - Production-ready proof of concept demonstrating BAE architecture viability for adaptive system generation through domain entity autonomy.

Last Updated: October 2025

About

No description, website, or topics provided.

Resources

Stars

Watchers

Forks

Releases

No releases published

Packages

No packages published

Contributors 4

  •  
  •  
  •  
  •  

Languages