Sophisticated AI Companion with Vector Database, Emotional Intelligence, and Model Context Protocol Integration
- AI generated code
- Aura could be dangerous despite my attempted safeguards, in ways including but not limited to: PC damage, user mental health and attachment, and emotional agentic activity.
- ASEKE Framework: Adaptive Socio-Emotional Knowledge Ecosystem
- Real-time Emotional State Detection with neurological correlations
- Cognitive Focus Tracking across different mental frameworks
- Adaptive Self-Reflection for continuous improvement
- Thinking Extraction: Transparent AI reasoning with thought analysis and cognitive transparency
- Vector Database Integration with ChromaDB for semantic search
- Persistent Conversation Memory with embedding-based retrieval
- Emotional Pattern Analysis over time
- Cognitive State Tracking and trend analysis
- MemVid Memory: Infinite MP4-based long-term memory stored as QR-code video
- Internal AI-guided memory organization tools that move information from short-term to long-term memory systems to avoid bottlenecks and categorize chats
- Model Context Protocol Client: uses the same MCP config JSON format as Claude Desktop, so you can use ANY tools!
- Model Context Protocol Server for external tool integration
- Standardized AI Agent Communication following MCP specifications
- Tool Ecosystem Compatibility with other MCP-enabled systems
- Bidirectional Data Exchange with external AI agents
- Emotional Trend Analysis with stability metrics
- Cognitive Pattern Recognition and optimization
- Personalized Recommendations based on interaction history
- Data Export in multiple formats (JSON, CSV, etc.)
- User Input → Frontend → FastAPI
- Processing → Vector DB Search → Context Retrieval
- AI Processing → Gemini API → Response Generation
- State Updates → Emotional/Cognitive Analysis → Pattern Storage
- Memory Storage → Vector DB → Persistent Learning
- External Access → MCP Server → Tool Integration
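As a rough illustration of that flow, here is a self-contained, hedged sketch; the stand-in functions below are placeholders for the real ChromaDB search and Gemini call, not Aura's actual code:

```python
import asyncio

# In-memory stand-ins so the sketch runs on its own; the real pipeline
# uses ChromaDB for memory and the Gemini API for generation.
MEMORY: list[dict] = []

async def search_memories(user_id: str, query: str) -> list[dict]:
    # Stand-in for the vector DB search / context retrieval step
    return [m for m in MEMORY if m["user_id"] == user_id][-5:]

async def call_gemini(message: str, context: list[dict]) -> str:
    # Stand-in for the Gemini API response generation step
    return f"(reply to {message!r} using {len(context)} remembered turns)"

async def handle_user_input(user_id: str, message: str) -> str:
    context = await search_memories(user_id, message)   # Vector DB search -> context retrieval
    reply = await call_gemini(message, context)          # AI processing -> response generation
    MEMORY.append({"user_id": user_id, "message": message, "reply": reply})  # persistent learning
    return reply

print(asyncio.run(handle_user_input("user123", "Hello Aura!")))
```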
- Real-time Reasoning Capture: Extract and analyze AI thought processes during conversations
- Thought Summarization: Automatic generation of reasoning summaries for quick understanding
- Cognitive Transparency: Full visibility into how Aura approaches problems and makes decisions
- Reasoning Metrics: Detailed analytics on thinking patterns, processing time, and cognitive load
- Thinking Budget: Configurable reasoning depth (1024-32768 tokens)
- Response Integration: Optional inclusion of reasoning in user responses
- Pattern Analysis: Long-term analysis of reasoning patterns and cognitive development
- Performance Optimization: Thinking efficiency metrics and optimization recommendations
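The thinking features above line up with the thinking options in the google-genai SDK. A minimal sketch, assuming GOOGLE_API_KEY is set and using the model named in the configuration later in this README; this illustrates the SDK call, not Aura's internal wiring:

```python
from google import genai
from google.genai import types

client = genai.Client()  # reads the API key from the environment

response = client.models.generate_content(
    model="gemini-2.5-flash-preview-05-20",
    contents="How should short-term memories be promoted to long-term storage?",
    config=types.GenerateContentConfig(
        thinking_config=types.ThinkingConfig(
            thinking_budget=4096,     # configurable reasoning depth (1024-32768 tokens)
            include_thoughts=True,    # return thought summaries alongside the answer
        )
    ),
)

# Separate the captured reasoning from the user-facing answer
parts = response.candidates[0].content.parts
thoughts = [p.text for p in parts if getattr(p, "thought", False)]
answer = "".join(p.text for p in parts if not getattr(p, "thought", False))
```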
- Basic: Normal, Happy, Sad, Angry, Excited, Fear, Disgust, Surprise
- Complex: Joy, Love, Peace, Creativity, DeepMeditation
- Combined: Hope (Anticipation + Joy), Optimism, Awe, Remorse
- Social: RomanticLove, PlatonicLove, ParentalLove, Friendliness
- Brainwave Patterns: Alpha, Beta, Gamma, Theta, Delta
- Neurotransmitters: Dopamine, Serotonin, Oxytocin, GABA, Norepinephrine
- NTK Layers: Neural Tensor Kernel mapping for emotional states
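To make those correlations concrete, a hypothetical record for one detected state might look like the sketch below; the class and field names are my illustration, not Aura's internal schema, and the example values are arbitrary:

```python
from dataclasses import dataclass

@dataclass
class EmotionalStateRecord:
    """Hypothetical shape of one detected emotional state and its correlates."""
    name: str                # e.g. "Hope" (Anticipation + Joy)
    category: str            # Basic | Complex | Combined | Social
    intensity: float         # 0.0 - 1.0
    brainwave: str           # Alpha | Beta | Gamma | Theta | Delta
    neurotransmitter: str    # Dopamine | Serotonin | Oxytocin | GABA | Norepinephrine

example = EmotionalStateRecord(
    name="Hope", category="Combined", intensity=0.7,
    brainwave="Alpha", neurotransmitter="Dopamine",   # illustrative pairing only
)
```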
- KS (Knowledge Substrate): Shared conversational context
- CE (Cognitive Energy): Mental effort and focus allocation
- IS (Information Structures): Ideas and concept patterns
- KI (Knowledge Integration): Learning and connection processes
- KP (Knowledge Propagation): Information sharing mechanisms
- ESA (Emotional State Algorithms): Emotional influence on processing
- SDA (Sociobiological Drives): Social dynamics and trust factors
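For reference, the seven components can be captured in a small enum like the following sketch; the enum itself is my illustration, while the labels come from the list above:

```python
from enum import Enum

class ASEKEComponent(Enum):
    KS = "Knowledge Substrate"          # shared conversational context
    CE = "Cognitive Energy"             # mental effort and focus allocation
    IS = "Information Structures"       # ideas and concept patterns
    KI = "Knowledge Integration"        # learning and connection processes
    KP = "Knowledge Propagation"        # information sharing mechanisms
    ESA = "Emotional State Algorithms"  # emotional influence on processing
    SDA = "Sociobiological Drives"      # social dynamics and trust factors
```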
- Stability Metrics: Emotional consistency over time
- Dominant Patterns: Most frequent emotional states
- Transition Analysis: Emotional state changes and triggers
- Intensity Tracking: Emotional intensity distribution
- Brainwave Correlation: Neural activity pattern analysis
- Focus Patterns: ASEKE component utilization
- Learning Efficiency: Knowledge integration rates
- Context Switching: Cognitive flexibility metrics
- Attention Allocation: Cognitive energy distribution
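As one concrete (assumed) definition of these metrics, stability can be read as the share of consecutive samples where the emotional state did not change, and the dominant pattern as the most frequent state; Aura's actual formulas may differ:

```python
from collections import Counter

def stability_and_dominant(states: list[str]) -> tuple[float, str]:
    """Stability = fraction of consecutive samples with no state change; dominant = most frequent state."""
    if not states:
        return 1.0, "Normal"
    if len(states) == 1:
        return 1.0, states[0]
    unchanged = sum(a == b for a, b in zip(states, states[1:]))
    return unchanged / (len(states) - 1), Counter(states).most_common(1)[0][0]

# stability_and_dominant(["Happy", "Happy", "Excited", "Happy"]) -> (0.33..., "Happy")
```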
Responses take some time to process depending on the task; if any coder wants to see whether they can speed up the processing, I would be grateful.
- Vector database indexing for fast searches
- Async processing for concurrent requests
- Cost- and memory-efficient local all-MiniLM vector embedding generation
- Autonomous sub-model background focus gating and task processing for state updates and tool use
- Tool learning adapter
- MemVid infinite QR code video long term memory!
- Health check endpoint
- Performance metrics collection
- Error tracking and reporting
- Resource usage monitoring
I am not a coder so hopefully it sets up right if anyone tries it.
- Python 3.12+
- Google API Key (from Google AI Studio)
- At least 4GB RAM (for vector embeddings)
- 2GB+ storage space
1. Clone or fork, then navigate:
```bash
cd /emotion_ai/aura_backend
```
2. Run the setup script (it uses uv with pyproject.toml and creates a .venv with Python 3.12 --seed in the backend):
```bash
./setup.sh
```
3. Configure environment:

I will try to streamline all of this into an OS-agnostic app soon. Aura will pick up the API key from your OS environment if it is configured; it should also work if your key is set as GEMINI_API_KEY.

```bash
# Edit the .env file to use your existing key (sort of unneeded now, I think)
echo "GOOGLE_API_KEY=$GOOGLE_API_KEY" > .env
```

Example .env configuration:

```
# Aura Backend Configuration
# ==========================
# Google API Configuration
GOOGLE_API_KEY=your-google-api-key-here
# Database Configuration
CHROMA_PERSIST_DIRECTORY=./aura_chroma_db
AURA_DATA_DIRECTORY=./aura_data
# Server Configuration
HOST=0.0.0.0
PORT=8000
DEBUG=false
# Logging Configuration
LOG_LEVEL=INFO
# MCP Server Configuration
MCP_SERVER_NAME=aura-companion
MCP_SERVER_VERSION=1.0.0
# Security Configuration
CORS_ORIGINS=["http://localhost:5173", "http://localhost:3000"]
# Features Configuration
ENABLE_EMOTIONAL_ANALYSIS=true
ENABLE_COGNITIVE_TRACKING=true
ENABLE_VECTOR_SEARCH=true
ENABLE_FILE_EXPORTS=true
# AI Response Configuration
# gemini-2.5-flash-preview-05-20
AURA_MODEL=gemini-2.5-flash-preview-05-20
AURA_MAX_OUTPUT_TOKENS=1000000
# Autonomic System Configuration
# gemini-2.0-flash-lite
AURA_AUTONOMIC_MODEL=gemini-2.0-flash-lite
AURA_AUTONOMIC_MAX_OUTPUT_TOKENS=100000
AUTONOMIC_ENABLED=true
AUTONOMIC_TASK_THRESHOLD=medium # low, medium, high
# Rate Limiting Configuration
AUTONOMIC_MAX_CONCURRENT_TASKS=12 # Optimal concurrency for 30 rpm limit
AUTONOMIC_RATE_LIMIT_RPM=25 # Requests per minute (safety margin below 30)
AUTONOMIC_RATE_LIMIT_RPD=1200 # Requests per day (safety margin below 1400)
AUTONOMIC_TIMEOUT_SECONDS=60 # Increased for higher concurrency
# Main Model Rate Limiting (user-configurable based on plan)
MAIN_MODEL_RATE_LIMIT_RPM=10 # Conservative default, increase based on user plan
MAIN_MODEL_RATE_LIMIT_RPD=500 # Daily limit for main model
# Queue Management
AUTONOMIC_QUEUE_MAX_SIZE=100 # Maximum queued tasks
AUTONOMIC_QUEUE_PRIORITY_ENABLED=true # Enable priority-based processing
```
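The autonomic rate-limit settings above combine a concurrency cap with per-minute and per-day budgets. Below is a minimal sketch of how the concurrency and RPM limits could be enforced together; the class and method names are my own illustration, not Aura's implementation:

```python
import asyncio
import time

class AutonomicRateLimiter:
    """Caps concurrent tasks with a semaphore and requests/minute with a sliding window."""

    def __init__(self, max_concurrent: int = 12, rpm: int = 25):
        self._sem = asyncio.Semaphore(max_concurrent)   # AUTONOMIC_MAX_CONCURRENT_TASKS
        self._rpm = rpm                                 # AUTONOMIC_RATE_LIMIT_RPM
        self._starts: list[float] = []                  # request start times in the last minute

    async def __aenter__(self):
        await self._sem.acquire()
        while True:
            now = time.monotonic()
            self._starts = [t for t in self._starts if now - t < 60]
            if len(self._starts) < self._rpm:
                self._starts.append(now)
                return self
            await asyncio.sleep(max(0.1, 60 - (now - self._starts[0])))

    async def __aexit__(self, *exc):
        self._sem.release()

# Usage sketch: async with AutonomicRateLimiter() as _: await run_autonomic_task(...)
```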
Easy Full System Start: This will start both backend and frontend in separate terminals automatically:
```bash
./start_full_system.sh
```

This script will:
- Check all prerequisites (Node.js, npm, uv)
- Set up the backend environment (.venv with Python 3.12)
- Install frontend dependencies if needed
- Start the backend in one terminal (with hot reload)
- Start the frontend in another terminal (with hot reload)
- Verify both services are running
- Display status and URLs
Stop All Services:
```bash
./stop_full_system.sh
```

If you prefer to start services manually:

Backend:
```bash
cd aura_backend
./start.sh
```

Frontend (in a separate terminal):
```bash
npm install   # First time only
npm run dev
```

- Frontend: http://localhost:5173
- Backend API: http://localhost:8000
- API Documentation: http://localhost:8000/docs
- Health Check: `GET /health`
- Process Conversation: `POST /conversation`
- Search Memories: `POST /search`
- Emotional Analysis: `GET /emotional-analysis/{user_id}`
- Export Data: `POST /export/{user_id}`
Visit http://localhost:8000/docs for interactive API documentation.
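For example, a single conversation turn can be sent with curl; the payload fields mirror the frontend integration example later in this README, and the values are placeholders:

```bash
curl -X POST http://localhost:8000/conversation \
  -H "Content-Type: application/json" \
  -d '{"user_id": "user123", "message": "Hello Aura!", "session_id": "session-1"}'
```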
- search_aura_memories: Semantic search through conversation history
- analyze_aura_emotional_patterns: Deep emotional trend analysis
- store_aura_conversation: Add memories to Aura's knowledge base
- get_aura_user_profile: Retrieve user personalization data
- export_aura_user_data: Data export functionality
- query_aura_emotional_states: Information about emotional intelligence system
- query_aura_aseke_framework: ASEKE cognitive architecture details
To connect external MCP clients to Aura:
Edit the directory path to match your clone and add this entry to your Claude Desktop config JSON:

```json
{
"mcpServers": {
"aura-companion": {
"command": "uv",
"args": [
"--directory",
"/home/ty/Repositories/ai_workspace/emotion_ai/aura_backend",
"run",
"aura_server.py"
]
}
}
}
```

```
┌─────────────────────────────┐
│          Frontend           │
│      (React/TypeScript)     │
└──────────────┬──────────────┘
               │ HTTP/WebSocket
┌──────────────▼──────────────┐
│           FastAPI           │
│       (REST API Layer)      │
│                             │
│  Vector Database (ChromaDB) │
│   • Conversation Memory     │
│   • Emotional Patterns      │
│   • Cognitive States        │
│   • Knowledge Substrate     │
│                             │
│  State Manager              │
│   • Emotional Transitions   │
│   • Cognitive Focus Changes │
│   • Automated DB Operations │
│   • Pattern Recognition     │
│                             │
│  File System                │
│   • User Profiles           │
│   • Data Exports            │
│   • Session Storage         │
│   • Backup Management       │
└──────────────┬──────────────┘
               │ MCP Protocol
┌──────────────▼──────────────┐
│         MCP Server          │
│   (External Tool Access)    │
│                             │
│  • Memory Search Tools      │
│  • Emotional Analysis Tools │
│  • Data Export Tools        │
│  • ASEKE Framework Access   │
└─────────────────────────────┘
```
```bash
curl http://localhost:8000/health
```

```bash
# Test thinking extraction capabilities
cd aura_backend
python test_thinking.py

# Interactive thinking demonstration
python thinking_demo.py

# Check thinking system status
curl http://localhost:8000/thinking-status
```

```bash
pytest tests/
./test_setup.py
```

```bash
# Example load test using wrk
wrk -t12 -c400 -d30s http://localhost:8000/health
```

I apologize for the mess; I do not know whether any of the deployment steps below work, but feel free to try them if you are brave or know what you are doing.
```bash
# Build image
docker build -t aura-backend .

# Run container
docker run -p 8000:8000 -v ./aura_data:/app/aura_data aura-backend
```

```bash
# Copy service file
sudo cp aura-backend.service /etc/systemd/system/

# Enable and start
sudo systemctl enable aura-backend
sudo systemctl start aura-backend
```

Update your frontend to use these endpoints:
```typescript
const API_BASE = 'http://localhost:8000';
// Replace localStorage with API calls
const response = await fetch(`${API_BASE}/conversation`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
user_id: userId,
message: userMessage,
session_id: sessionId
})
});
```

Real-time updates and streaming responses will be available via WebSocket connections.
Create custom MCP tools by extending the mcp_server.py:
```python
from typing import Any, Dict

@tool  # decorator provided by mcp_server.py
async def custom_aura_tool(params: CustomParams) -> Dict[str, Any]:
    """Your custom tool implementation"""
    # Query Aura's memory, run analysis, etc., then return a JSON-serializable result
    return {"status": "ok"}
```

Direct vector database access for advanced queries:
```python
from main import vector_db

results = await vector_db.search_conversations(
    query="emotional support",
    user_id="user123",
    n_results=10
)
```

If Claude has started the Aura services and the stop script still does not work, kill the process holding the port:
```bash
fuser -k 8000/tcp
```

- Installation Errors:
```bash
# Ensure Python 3.12+
python3 --version

# Clean installation
rm -rf venv/
./setup.sh
```
- API Key Issues:
```bash
# Check environment
source venv/bin/activate
echo $GOOGLE_API_KEY
```
- Vector DB Issues (warning: this resets the database and you will lose your data):
```bash
# Reset database
rm -rf aura_chroma_db/
./test_setup.py
```
- Memory Issues:
  - Increase system memory allocation
  - Reduce vector embedding batch sizes
  - Use lightweight embedding models
Check logs in:
- Console output during development
- System logs: `journalctl -u aura-backend` (if using systemd)
- Application logs: `./aura_data/logs/`
- All user data stored locally
- No external data transmission (except Google API)
- Vector embeddings are anonymized
- Session data encrypted in transit
- API key authentication
- Rate limiting enabled
- CORS configuration
- Input validation and sanitization
- Real-time WebSocket connections
- Advanced emotion prediction models
- Multi-user collaboration features
- Enhanced MCP tool ecosystem
- Mobile app backend support
- Advanced analytics dashboard
- Integration with external AI models
- Multi-modal interaction (voice, video, text)
- Federated learning across Aura instances
- Advanced personality adaptation
- Enterprise deployment options
- Open-source community ecosystem
My code is MIT, I suppose, but the project also pulls in other software such as google-genai and MemVid, so licensing is a mixed bag. In other words: don't steal my ideas and try to make money without me. lol, but I am super poor.
Contributions welcome! Please read our contributing guidelines and submit pull requests for review.
For issues and support:
- Check troubleshooting section
- Review logs and error messages
- Create detailed issue reports
- Join community discussions
Aura Emotion AI - Powering the future of AI companionship and assistance through advanced emotional intelligence and sophisticated memory systems.



