A controllable animated character with facial expressions, head poses, and speech bubbles. Control via REST API, WebSocket, or MCP (Model Context Protocol).
- Expressive Head: Stylized SVG head with articulated neck and pivot
- Facial Moods: 9 moods with smooth transitions (neutral, smile, frown, laugh, angry, sad, surprise, confusion, thinking)
- Actions & Speech: Wink, talk, and live speech bubbles that sync with mouth motion
- Head Pose Control: Yaw and pitch with guard rails so the face never leaves view
- Idle Animations: Natural blinking, breathing, eye saccades, and subtle head movements
- Character Variants: Switch between different character appearances (Human, Einstein, and more)
- Theming: Multiple built-in themes with customizable colors and gradients
- Multiple Control Methods:
- Interactive Face Control Panel
- RESTful API
- WebSocket (real-time)
- MCP (Model Context Protocol)
The character demonstrating facial expressions, speech bubbles, and head movements.
Ragdoll comes with four built-in themes, each with unique color palettes and visual styles:
Default (warm, human-like) • Robot (metallic, futuristic) • Alien (green, otherworldly) • Monochrome (classic black and white)
Change themes via the UI control panel, REST API, WebSocket, or MCP tools.
```bash
npm install
# or
bun install
```

Option A: Frontend only (for local development)

```bash
npm run dev
```

Option B: Frontend + API server (for full functionality)
```bash
# Terminal 1: Start the API server
cd apps/demo && npm run server

# Terminal 2: Start the frontend (from root)
npm run dev
```

This starts:
- Web interface at http://localhost:5173
- API server at http://localhost:3001
```bash
cd apps/demo && npm run mcp-server
```

Run the application in production mode with a single container:

```bash
docker-compose up --build
```

This starts:
- Combined frontend + backend server at http://localhost:3001
- Serves pre-built static files and API from the same server
- Includes REST API, WebSocket, and all features
Note: Docker runs in production mode (optimized build), not development mode. For hot-reload development, use Option A or B above instead.
ragdoll/
├── packages/
│ └── ragdoll/ # @vokality/ragdoll - core character framework
│ ├── src/
│ │ ├── components/ # RagdollCharacter React component
│ │ ├── controllers/ # CharacterController, ExpressionController, etc.
│ │ ├── models/ # RagdollGeometry, RagdollSkeleton
│ │ ├── themes/ # Theme system (Default, Robot, Alien, Monochrome)
│ │ ├── variants/ # Character variants (Human, Einstein)
│ │ ├── types/ # TypeScript type definitions
│ │ └── animation/ # Easing functions
│ └── tests/ # Test suite
│
├── apps/
│ ├── demo/ # Browser demo with control panel
│ │ └── src/
│ │ ├── ui/ # UI components (Scene, ControlPanel, etc.)
│ │ ├── api/ # Express server with WebSocket
│ │ └── mcp/ # MCP server for browser version
│ │
│ └── emote/ # VS Code extension
│ ├── src/ # Extension host code
│ └── webview/ # Webview React app
│
└── package.json # Workspace root (bun workspaces)
The character is built with:
- RagdollSkeleton: Lightweight root → headPivot → neck chain
- RagdollGeometry: SVG-based cartoon head, hair, and facial features
- HeadPoseController: Smooth, clamped yaw/pitch interpolation
- ExpressionController: Mood blending plus overlay actions (wink/talk)
- IdleController: Natural micro-movements (blink, breathe, saccades)
- CharacterController: Coordinates facial state, head pose, and speech bubbles
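As a rough illustration of the ExpressionController idea (easing between stored facial configurations), here is a minimal sketch. The field names and values below are invented for the example and are not the actual `@vokality/ragdoll` data model:

```typescript
// Illustrative mood blending; field names and values are invented for this sketch.
interface FacialConfig {
  mouthScale: number;
  browRaise: number;
  eyeSquish: number;
}

const MOODS: Record<string, FacialConfig> = {
  neutral: { mouthScale: 1.0, browRaise: 0.0, eyeSquish: 0.0 },
  laugh:   { mouthScale: 1.6, browRaise: 0.4, eyeSquish: 0.5 },
};

// Linearly blend from one stored config toward another; t in [0, 1]
function blendMood(from: FacialConfig, to: FacialConfig, t: number): FacialConfig {
  const lerp = (a: number, b: number) => a + (b - a) * t;
  return {
    mouthScale: lerp(from.mouthScale, to.mouthScale),
    browRaise: lerp(from.browRaise, to.browRaise),
    eyeSquish: lerp(from.eyeSquish, to.eyeSquish),
  };
}

// Halfway through a neutral → laugh transition
const mid = blendMood(MOODS.neutral, MOODS.laugh, 0.5);
// mid.mouthScale ≈ 1.3
```

In the real controller an easing curve would typically replace the raw `t`, so transitions accelerate and settle naturally instead of moving linearly.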
The built-in control panel (right side of screen) provides:
- Mood picker (all 9 moods)
- Wink and talk triggers (with clear button)
- Speech bubble editor with tone (default/whisper/shout)
- Head pose sliders for yaw/pitch
- Pomodoro timer with customizable session and break durations
- Theme selector (top-left: Default, Robot, Alien, Monochrome)
- Variant selector (top-left: Human, Einstein)
- Live connection status
Base URL: http://localhost:3001/api
- `POST /api/facial-state` – Primary endpoint for moods, actions, head pose, and speech bubbles
- `POST /api/joint` – Direct control of the `headPivot` and `neck` joints (advanced)
- `GET /api/state` – Current serialized character state
- `GET /api/moods` – List of supported moods
- `GET /api/joints` – List of available joints (headPivot, neck)
Facial State
```http
POST /api/facial-state
Content-Type: application/json

{
  "mood": { "value": "laugh", "duration": 0.4 },
  "action": { "type": "wink" },
  "headPose": { "yaw": 0.2, "pitch": -0.05, "duration": 0.5 },
  "bubble": { "text": "hi there!", "tone": "whisper" }
}
```

You can send any subset of the payload (e.g., only `bubble` to update speech).
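Inferred from the examples in this README (not an official type export of the package), the payload shape can be written as a TypeScript type with every field optional, since any subset is accepted:

```typescript
// Payload shape inferred from the README examples; not an official export.
type Mood =
  | "neutral" | "smile" | "frown" | "laugh" | "angry"
  | "sad" | "surprise" | "confusion" | "thinking";

interface FacialStatePayload {
  mood?: { value: Mood; duration?: number };       // transition duration in seconds
  action?: { type: "wink" | "talk" };
  headPose?: { yaw?: number; pitch?: number; duration?: number };
  bubble?: { text: string; tone?: "default" | "whisper" | "shout" };
}

// Example: update only the speech bubble
const bubbleOnly: FacialStatePayload = {
  bubble: { text: "hi there!", tone: "whisper" },
};
```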
Joint Control
```http
POST /api/joint
Content-Type: application/json

{
  "joint": "headPivot",
  "angle": { "x": 0, "y": 0.5, "z": 0 }
}
```

State Query
```http
GET /api/state
```

```json
{
  "headPose": { "yaw": 0.1, "pitch": -0.05 },
  "joints": {
    "headPivot": { "x": 0, "y": 0.1, "z": 0 },
    "neck": { "x": -0.05, "y": 0, "z": 0 }
  },
  "mood": "smile",
  "action": "talk",
  "bubble": { "text": "hello!", "tone": "default" },
  "animation": {
    "action": "talk",
    "actionProgress": 0.48,
    "isTalking": true
  }
}
```

```bash
# Set laugh mood
curl -X POST http://localhost:3001/api/facial-state \
  -H "Content-Type: application/json" \
  -d '{"mood": {"value": "laugh", "duration": 0.4}}'

# Wink
curl -X POST http://localhost:3001/api/facial-state \
  -H "Content-Type: application/json" \
  -d '{"action": {"type": "wink"}}'

# Make the head glance left and up
curl -X POST http://localhost:3001/api/facial-state \
  -H "Content-Type: application/json" \
  -d '{"headPose": {"yaw": -0.3, "pitch": 0.1, "duration": 0.6}}'

# Set a speech bubble
curl -X POST http://localhost:3001/api/facial-state \
  -H "Content-Type: application/json" \
  -d '{"bubble": {"text": "LLMs can talk now!", "tone": "shout"}}'

# Get current state
curl http://localhost:3001/api/state
```

Connect to: ws://localhost:3001
```javascript
import { io } from "socket.io-client";

const socket = io("http://localhost:3001");

// Subscribe to state updates (10 FPS)
socket.emit("subscribe-state");
socket.on("state-update", (state) => {
  console.log("Current state:", state);
});

// Broadcast facial updates in real-time
socket.emit("facial-state", {
  mood: { value: "smile" },
  headPose: { yaw: 0.15 },
});

// Listen for changes triggered by others
socket.on("facial-state-broadcast", (payload) => {
  console.log("Remote payload:", payload);
});

// Unsubscribe when done
socket.emit("unsubscribe-state");
```

The MCP server exposes the ragdoll as MCP tools that can be used by AI assistants.
- `setMood` – Transition to a named mood
- `triggerAction` – Wink or start talking
- `clearAction` – Stop the current action
- `setHeadPose` – Adjust yaw/pitch in degrees
- `setSpeechBubble` – Provide or clear bubble text
Add to your MCP client configuration (e.g., Claude Desktop or Cursor):
```json
{
  "mcpServers": {
    "ragdoll": {
      "command": "bun",
      "args": ["run", "mcp-server"],
      "cwd": "/path/to/ragdoll/apps/demo"
    }
  }
}
```

Once configured, you can control the ragdoll through natural language:
User: Give them a big laugh
AI: [Uses setMood tool with mood="laugh"]
User: Have them wink and say hi!
AI: [Uses triggerAction tool (wink) then setSpeechBubble tool]
User: Reset back to neutral quietly
AI: [Uses setMood tool (neutral) and clearAction tool]
The head-only rig exposes two joints:
- `headPivot` – Horizontal swivel (yaw)
- `neck` – Vertical nod (pitch)
neutral • smile • frown • laugh • angry • sad • surprise • confusion • thinking
- Head pivot (yaw) and neck (pitch) use spring interpolation for a smooth robotic glance.
- Pose changes clamp to ±35° yaw and ±20° pitch so the face never leaves frame.
- Mood transitions ease between stored facial configurations (mouth scale/offset, eyebrows, eye squish).
- Actions (wink/talk) layer on top of the base mood for additive motion.
- Talking drives procedural mouth squash/stretch synchronized with speech bubbles.
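A minimal sketch of the clamping step described above — the limits match the ±35° yaw / ±20° pitch figures, but the code is illustrative and not the library's actual HeadPoseController:

```typescript
// Illustrative head-pose clamp (limits from the README: ±35° yaw, ±20° pitch, in radians).
const MAX_YAW = (35 * Math.PI) / 180;
const MAX_PITCH = (20 * Math.PI) / 180;

const clamp = (v: number, limit: number): number =>
  Math.max(-limit, Math.min(limit, v));

function clampHeadPose(yaw: number, pitch: number) {
  return { yaw: clamp(yaw, MAX_YAW), pitch: clamp(pitch, MAX_PITCH) };
}

// A request far outside the limits still keeps the face in frame
const pose = clampHeadPose(2.0, -1.0);
// pose.yaw ≈ 0.611 rad (35°), pose.pitch ≈ -0.349 rad (-20°)
```

Clamping the *target* before interpolating means the spring always settles inside the safe range, so no per-frame bounds check is needed during the animation itself.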
- Blinking: Natural random blinks with smooth eyelid motion
- Breathing: Subtle chest expansion cycle
- Saccades: Quick eye micro-movements for lifelike gaze
- Head Micro-movements: Organic noise-based subtle head drift
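One common way to implement the blink behavior above is to schedule blinks at randomized intervals and ease the eyelid over a short duration. This is a sketch, not the library's IdleController; the interval and duration values are invented:

```typescript
// Illustrative blink scheduler; interval and duration values are invented.
class BlinkScheduler {
  private nextBlinkIn = this.randomInterval();
  private blinkElapsed = Infinity;        // time since the current blink began
  private readonly blinkDuration = 0.15;  // seconds for a full close + open

  private randomInterval(): number {
    return 2 + Math.random() * 4;         // blink every 2–6 seconds
  }

  // Returns eyelid closure in [0, 1]; call once per animation frame
  update(dt: number): number {
    this.nextBlinkIn -= dt;
    this.blinkElapsed += dt;
    if (this.nextBlinkIn <= 0) {
      this.blinkElapsed = 0;              // start a new blink
      this.nextBlinkIn = this.randomInterval();
    }
    const t = this.blinkElapsed / this.blinkDuration;
    if (t >= 1) return 0;                 // eyes open between blinks
    return Math.sin(Math.PI * t);         // smooth close, then open
  }
}
```

The half-sine gives a symmetric close/open curve; breathing and head drift follow the same pattern, driven by slow sine waves or smooth noise instead of discrete events.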
This is a monorepo using Bun workspaces:
- `packages/ragdoll` – Core character framework (@vokality/ragdoll)
- `apps/demo` – Browser demo application with API, WebSocket, and MCP server
- `apps/emote` – VS Code extension
Clear separation of concerns with type-safe interfaces between packages.
- React 19 with TypeScript and React Compiler
- SVG for 2D character rendering
- Framer Motion for animations
- Express 5 for REST API
- Socket.io for WebSocket
- MCP SDK for Model Context Protocol
- Vite 7 for build tooling
- Bun for server-side scripts
```bash
# Development (from root)
npm run dev

# Production build (from root - builds all packages and apps)
npm run build

# Preview production build (from apps/demo)
cd apps/demo && npm run preview

# Type check (from root)
npm run typecheck

# Lint (from root)
npm run lint

# Format (from root)
npm run format
```

```python
import requests

# Laugh, wink, and add a speech bubble
requests.post('http://localhost:3001/api/facial-state', json={
    'mood': {'value': 'laugh', 'duration': 0.4},
    'action': {'type': 'wink'},
    'bubble': {'text': 'Python says hi!', 'tone': 'default'}
})

# Get state
state = requests.get('http://localhost:3001/api/state').json()
print(f"Mood: {state['mood']}")
print(f"Bubble: {state['bubble']}")
```

```javascript
await fetch("http://localhost:3001/api/facial-state", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    mood: { value: "smile", duration: 0.3 },
    headPose: { yaw: 0.25, duration: 0.4 },
    bubble: { text: "JS was here", tone: "whisper" },
  }),
});

const state = await fetch("http://localhost:3001/api/state").then((r) =>
  r.json(),
);
console.log(state.headPose, state.bubble);
```

Make sure port 3001 is available:
```bash
lsof -i :3001
```

- Check browser console for errors
- Try refreshing the page
- Verify MCP configuration path is correct (should point to the `apps/demo` directory)
- Check that `cd apps/demo && npm run mcp-server` works standalone
- Restart your MCP client
MIT




