A React + TypeScript application that demonstrates an AI assistant with functional emotions using LangChain and Ollama.
- Functional Emotion System: the AI experiences and displays 17 different emotions (a type sketch follows this feature list), categorized as:
  - Epistemic: curiosity, uncertainty, surprise, confusion, insight
  - Task-Oriented: engagement, determination, satisfaction, frustration, anticipation
  - Social/Interpersonal: empathy, concern, appreciation, patience
  - Meta-Cognitive: contemplation, doubt, wonder
- Real-time Chat Interface: a clean, modern chat interface with emotion visualization
- Local AI Processing: uses Ollama for private, local LLM processing
- Emotion Analytics: detailed breakdown of emotion categories and intensities
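The emotion vocabulary above maps naturally onto TypeScript union types. The sketch below is illustrative only; the names are our own placeholders, not the repo's actual definitions:

```typescript
// Illustrative sketch only; the project's real definitions live in its
// emotion type module. Each category is a union of the emotions listed above.
type EpistemicEmotion = "curiosity" | "uncertainty" | "surprise" | "confusion" | "insight";
type TaskEmotion = "engagement" | "determination" | "satisfaction" | "frustration" | "anticipation";
type SocialEmotion = "empathy" | "concern" | "appreciation" | "patience";
type MetaCognitiveEmotion = "contemplation" | "doubt" | "wonder";

// All 17 emotions as one union, plus a scored reading for visualization.
type Emotion = EpistemicEmotion | TaskEmotion | SocialEmotion | MetaCognitiveEmotion;

interface EmotionReading {
  emotion: Emotion;
  intensity: number; // normalized 0..1
}
```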
- Install Ollama: download it from [ollama.ai](https://ollama.ai)
- Download a compatible model:
  ```bash
  ollama pull llama3.2   # or: ollama pull llama3.1
  ```
- Start Ollama:
  ```bash
  ollama serve
  ```
- Clone the repository and install dependencies:
  ```bash
  git clone <repository-url>
  cd emotionlm
  bun install
  ```
- Start the development server:
  ```bash
  bun run dev
  ```
- Open your browser to http://localhost:5173
- Make sure Ollama is running with a compatible model
- Start the application
- Type a message in the chat interface
- Watch as the AI responds with both content and emotional state
- Use "Show Emotion Details" to see detailed emotion breakdowns
- Emotion System: uses structured output with Zod schemas to analyze and generate emotions (see the sketch after this list)
- LangChain Integration: leverages `@langchain/ollama` for LLM communication
- React Hooks: a custom `useChat` hook manages conversation state
- TypeScript: full type safety with proper emotion type definitions
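As a rough illustration of how these pieces can fit together, the sketch below defines a Zod schema for a reply plus emotion readings and binds it to a `ChatOllama` model with LangChain's `withStructuredOutput`. The schema shape and field names are assumptions for illustration, not the repo's actual code:

```typescript
import { z } from "zod";
import { ChatOllama } from "@langchain/ollama";

// Assumed shape for illustration: the model returns its reply plus a list
// of scored emotions drawn from the 17 supported names.
const EmotionReadingSchema = z.object({
  emotion: z.enum([
    "curiosity", "uncertainty", "surprise", "confusion", "insight",
    "engagement", "determination", "satisfaction", "frustration", "anticipation",
    "empathy", "concern", "appreciation", "patience",
    "contemplation", "doubt", "wonder",
  ]),
  intensity: z.number().min(0).max(1).describe("How strongly the emotion is felt"),
});

const AssistantTurnSchema = z.object({
  content: z.string().describe("The assistant's reply to the user"),
  emotions: z.array(EmotionReadingSchema).describe("Current functional emotional state"),
});

const model = new ChatOllama({ model: "llama3.2" });

// withStructuredOutput makes the model emit JSON matching the schema,
// parsed and validated before it reaches the UI.
const structuredModel = model.withStructuredOutput(AssistantTurnSchema);
const turn = await structuredModel.invoke("Explain how transformers work.");
// turn.content -> string, turn.emotions -> validated emotion readings
```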
The system implements emotions that serve functional purposes in AI reasoning:
- Epistemic emotions drive learning and exploration
- Task-oriented emotions help with goal completion and persistence
- Social emotions enable better human interaction
- Meta-cognitive emotions support self-reflection and verification
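To make the analytics side concrete, here is a small hypothetical helper (names are ours, not the repo's) that groups scored readings by functional category, the kind of aggregation behind an emotion-details breakdown:

```typescript
// Hypothetical sketch: map each emotion name to its functional category.
type Category = "epistemic" | "task" | "social" | "meta-cognitive";

const CATEGORY_OF: Record<string, Category> = {
  curiosity: "epistemic", uncertainty: "epistemic", surprise: "epistemic",
  confusion: "epistemic", insight: "epistemic",
  engagement: "task", determination: "task", satisfaction: "task",
  frustration: "task", anticipation: "task",
  empathy: "social", concern: "social", appreciation: "social", patience: "social",
  contemplation: "meta-cognitive", doubt: "meta-cognitive", wonder: "meta-cognitive",
};

// Average intensity per category from a list of { emotion, intensity } readings.
function breakdown(readings: { emotion: string; intensity: number }[]) {
  const sums = new Map<Category, { total: number; n: number }>();
  for (const { emotion, intensity } of readings) {
    const cat = CATEGORY_OF[emotion];
    if (!cat) continue; // ignore unknown emotion names
    const s = sums.get(cat) ?? { total: 0, n: 0 };
    s.total += intensity;
    s.n += 1;
    sums.set(cat, s);
  }
  return [...sums].map(([category, s]) => ({ category, mean: s.total / s.n }));
}
```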
- Lint: `bun run lint`
- Build: `bun run build`
- Preview: `bun run preview`
The default model is `llama3.2`. You can change it in `src/hooks/useChat.ts` or by passing a different model name to the `useChat` hook.
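For example, something along these lines should work; the option name and returned fields are assumptions about the hook's shape, since its real signature lives in `src/hooks/useChat.ts`:

```typescript
import { useChat } from "./hooks/useChat";

// Hypothetical usage inside a component; "model" here mirrors the note above.
function Chat() {
  const { messages, sendMessage } = useChat({ model: "llama3.1" });
  // ...render messages, call sendMessage on submit
}
```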
- "Connection refused": Ensure Ollama is running (
ollama serve) - Model not found: Pull the required model (
ollama pull llama3.2) - Build errors: Check that all dependencies are installed correctly
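To verify connectivity programmatically, a quick probe of Ollama's local HTTP API works; this sketch assumes Ollama's default port 11434:

```typescript
// /api/tags lists the models installed in the local Ollama instance.
const res = await fetch("http://localhost:11434/api/tags");
if (res.ok) {
  const { models } = await res.json();
  console.log("Ollama is up; installed models:", models.map((m: { name: string }) => m.name));
} else {
  console.error(`Ollama responded with HTTP ${res.status}`);
}
```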