Vokality/ragdoll

Ragdoll - Animated Character Controller

A controllable animated character with facial expressions, head poses, and speech bubbles. Control via REST API, WebSocket, or MCP (Model Context Protocol).

Features

  • Expressive Head: Stylized SVG head with articulated neck and pivot
  • Facial Moods: 9 moods with smooth transitions (neutral, smile, frown, laugh, angry, sad, surprise, confusion, thinking)
  • Actions & Speech: Wink, talk, and live speech bubbles that sync with mouth motion
  • Head Pose Control: Yaw and pitch with guard rails so the face never leaves view
  • Idle Animations: Natural blinking, breathing, eye saccades, and subtle head movements
  • Character Variants: Switch between different character appearances (Human, Einstein, and more)
  • Theming: Multiple built-in themes with customizable colors and gradients
  • Multiple Control Methods:
    • Interactive Face Control Panel
    • RESTful API
    • WebSocket (real-time)
    • MCP (Model Context Protocol)

Example

Ragdoll Character in Action

The character demonstrating facial expressions, speech bubbles, and head movements.

Themes

Ragdoll comes with four built-in themes, each with unique color palettes and visual styles:

Default (warm, human-like) • Robot (metallic, futuristic) • Alien (green, otherworldly) • Monochrome (classic black and white)

Change themes via the UI control panel, REST API, WebSocket, or MCP tools.

Quick Start

1. Install Dependencies

npm install
# or
bun install

2. Run the Application

Option A: Frontend only (for local development)

npm run dev

Option B: Frontend + API server (for full functionality)

# Terminal 1: Start the API server
cd apps/demo && npm run server

# Terminal 2: Start the frontend (from root)
npm run dev

This starts:

  • Web interface at http://localhost:5173
  • API server at http://localhost:3001

3. (Optional) Run MCP Server

cd apps/demo && npm run mcp-server

4. Docker (Production)

Run the application in production mode with a single container:

docker-compose up --build

This starts a combined frontend + backend server at http://localhost:3001, which:

  • Serves pre-built static files and the API from the same origin
  • Includes the REST API, WebSocket, and all other features

Note: Docker runs in production mode (optimized build), not development mode. For hot-reload development, use Option A or B above instead.

Architecture

Monorepo Structure

ragdoll/
├── packages/
│   └── ragdoll/              # @vokality/ragdoll - core character framework
│       ├── src/
│       │   ├── components/   # RagdollCharacter React component
│       │   ├── controllers/  # CharacterController, ExpressionController, etc.
│       │   ├── models/       # RagdollGeometry, RagdollSkeleton
│       │   ├── themes/       # Theme system (Default, Robot, Alien, Monochrome)
│       │   ├── variants/     # Character variants (Human, Einstein)
│       │   ├── types/        # TypeScript type definitions
│       │   └── animation/    # Easing functions
│       └── tests/            # Test suite
│
├── apps/
│   ├── demo/                 # Browser demo with control panel
│   │   └── src/
│   │       ├── ui/           # UI components (Scene, ControlPanel, etc.)
│   │       ├── api/          # Express server with WebSocket
│   │       └── mcp/          # MCP server for browser version
│   │
│   └── emote/                # VS Code extension
│       ├── src/              # Extension host code
│       └── webview/          # Webview React app
│
└── package.json              # Workspace root (bun workspaces)

Character System

The character is built with:

  • RagdollSkeleton: Lightweight root → headPivot → neck chain
  • RagdollGeometry: SVG-based cartoon head, hair, and facial features
  • HeadPoseController: Smooth, clamped yaw/pitch interpolation
  • ExpressionController: Mood blending plus overlay actions (wink/talk)
  • IdleController: Natural micro-movements (blink, breathe, saccades)
  • CharacterController: Coordinates facial state, head pose, and speech bubbles
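Their interaction can be pictured as a per-frame update loop (purely illustrative; these class and method names are hypothetical, not the package's API):

```python
class CharacterController:
    """Illustrative composition: each frame, idle motion, expression
    blending, and head-pose smoothing feed into one render state."""

    def __init__(self, expression, head_pose, idle):
        self.expression = expression
        self.head_pose = head_pose
        self.idle = idle

    def update(self, dt):
        state = {}
        state.update(self.idle.update(dt))        # blink/breathe/saccade offsets
        state.update(self.expression.update(dt))  # mood + action overlays
        state.update(self.head_pose.update(dt))   # clamped yaw/pitch
        return state
```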

Control Methods

1. UI Control Panel

The built-in control panel (on the right side of the screen) provides:

  • Mood picker (all 9 moods)
  • Wink and talk triggers (with clear button)
  • Speech bubble editor with tone (default/whisper/shout)
  • Head pose sliders for yaw/pitch
  • Pomodoro timer with customizable session and break durations
  • Theme selector (top-left: Default, Robot, Alien, Monochrome)
  • Variant selector (top-left: Human, Einstein)
  • Live connection status

2. REST API

Base URL: http://localhost:3001/api

Endpoints

  • POST /api/facial-state – Primary endpoint for moods, actions, head pose, and speech bubbles
  • POST /api/joint – Direct control of the headPivot and neck joints (advanced)
  • GET /api/state – Current serialized character state
  • GET /api/moods – List of supported moods
  • GET /api/joints – List of available joints (headPivot, neck)

Facial State

POST /api/facial-state
Content-Type: application/json
{
  "mood": { "value": "laugh", "duration": 0.4 },
  "action": { "type": "wink" },
  "headPose": { "yaw": 0.2, "pitch": -0.05, "duration": 0.5 },
  "bubble": { "text": "hi there!", "tone": "whisper" }
}

You can send any subset of the payload (e.g., only bubble to update speech).
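As a sketch (standard library only; the endpoint and field names match the examples above, but the helper itself is hypothetical), a partial update can be built and sent like this:

```python
import json
import urllib.request

API = "http://localhost:3001/api/facial-state"

def partial_payload(mood=None, action=None, head_pose=None, bubble=None):
    """Build a facial-state payload containing only the fields that were passed."""
    fields = {"mood": mood, "action": action, "headPose": head_pose, "bubble": bubble}
    return {k: v for k, v in fields.items() if v is not None}

def send(payload):
    """POST the payload to the API server (must be running; see Quick Start)."""
    req = urllib.request.Request(
        API,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)

# Update only the speech bubble; mood, action, and head pose stay untouched.
payload = partial_payload(bubble={"text": "just the bubble", "tone": "whisper"})
# send(payload)  # uncomment with the API server running
```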

Joint Control

POST /api/joint
Content-Type: application/json
{
  "joint": "headPivot",
  "angle": { "x": 0, "y": 0.5, "z": 0 }
}

State Query

GET /api/state

{
  "headPose": { "yaw": 0.1, "pitch": -0.05 },
  "joints": {
    "headPivot": { "x": 0, "y": 0.1, "z": 0 },
    "neck": { "x": -0.05, "y": 0, "z": 0 }
  },
  "mood": "smile",
  "action": "talk",
  "bubble": { "text": "hello!", "tone": "default" },
  "animation": {
    "action": "talk",
    "actionProgress": 0.48,
    "isTalking": true
  }
}

Examples

# Set laugh mood
curl -X POST http://localhost:3001/api/facial-state \
  -H "Content-Type: application/json" \
  -d '{"mood": {"value": "laugh", "duration": 0.4}}'

# Wink
curl -X POST http://localhost:3001/api/facial-state \
  -H "Content-Type: application/json" \
  -d '{"action": {"type": "wink"}}'

# Make the head glance left and up
curl -X POST http://localhost:3001/api/facial-state \
  -H "Content-Type: application/json" \
  -d '{"headPose": {"yaw": -0.3, "pitch": 0.1, "duration": 0.6}}'

# Set a speech bubble
curl -X POST http://localhost:3001/api/facial-state \
  -H "Content-Type: application/json" \
  -d '{"bubble": {"text": "LLMs can talk now!", "tone": "shout"}}'

# Get current state
curl http://localhost:3001/api/state

3. WebSocket

Connect to: ws://localhost:3001

import { io } from "socket.io-client";

const socket = io("http://localhost:3001");

// Subscribe to state updates (10 FPS)
socket.emit("subscribe-state");

socket.on("state-update", (state) => {
  console.log("Current state:", state);
});

// Broadcast facial updates in real-time
socket.emit("facial-state", {
  mood: { value: "smile" },
  headPose: { yaw: 0.15 },
});

// Listen for changes triggered by others
socket.on("facial-state-broadcast", (payload) => {
  console.log("Remote payload:", payload);
});

// Unsubscribe when done
socket.emit("unsubscribe-state");

4. MCP (Model Context Protocol)

The MCP server exposes the ragdoll as MCP tools that can be used by AI assistants.

Available Tools

  • setMood – Transition to a named mood
  • triggerAction – Wink or start talking
  • clearAction – Stop the current action
  • setHeadPose – Adjust yaw/pitch in degrees
  • setSpeechBubble – Provide or clear bubble text

MCP Configuration

Add to your MCP client configuration (e.g., Claude Desktop or Cursor):

{
  "mcpServers": {
    "ragdoll": {
      "command": "bun",
      "args": ["run", "mcp-server"],
      "cwd": "/path/to/ragdoll/apps/demo"
    }
  }
}
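If you don't have bun installed, the server can also be launched with npm (as in the Quick Start step above):

```json
{
  "mcpServers": {
    "ragdoll": {
      "command": "npm",
      "args": ["run", "mcp-server"],
      "cwd": "/path/to/ragdoll/apps/demo"
    }
  }
}
```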

Using MCP Tools

Once configured, you can control the ragdoll through natural language:

User: Give them a big laugh
AI: [Uses setMood tool with mood="laugh"]

User: Have them wink and say hi!
AI: [Uses triggerAction tool (wink) then setSpeechBubble tool]

User: Reset back to neutral quietly
AI: [Uses setMood tool (neutral) and clearAction tool]

Available Joints

The head-only rig exposes two joints:

  • headPivot – Horizontal swivel (yaw)
  • neck – Vertical nod (pitch)

Available Moods

  • neutral
  • smile
  • frown
  • laugh
  • angry
  • sad
  • surprise
  • confusion
  • thinking

Animation System

Head Pose

  • Head pivot (yaw) and neck (pitch) use spring interpolation for smooth, natural glances.
  • Pose changes are clamped to ±35° yaw and ±20° pitch so the face never leaves the frame.
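The clamp can be sketched as follows (an illustrative helper, not the package's code; treating the wire format as radians is an assumption based on the small values in the REST payload examples):

```python
import math

# Limits from the Animation System section: ±35° yaw, ±20° pitch.
YAW_LIMIT = math.radians(35)
PITCH_LIMIT = math.radians(20)

def clamp_pose(yaw_deg, pitch_deg):
    """Clamp a requested pose to the rig's limits (degrees in, radians out)."""
    yaw = max(-YAW_LIMIT, min(YAW_LIMIT, math.radians(yaw_deg)))
    pitch = max(-PITCH_LIMIT, min(PITCH_LIMIT, math.radians(pitch_deg)))
    return {"yaw": round(yaw, 4), "pitch": round(pitch, 4)}

# A 90° request is clamped down to the 35° yaw limit.
pose = clamp_pose(90, 5)
```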

Expressions & Actions

  • Mood transitions ease between stored facial configurations (mouth scale/offset, eyebrows, eye squish).
  • Actions (wink/talk) layer on top of the base mood for additive motion.
  • Talking drives procedural mouth squash/stretch synchronized with speech bubbles.
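A minimal sketch of that blending (the parameter names are illustrative, not the package's actual facial configuration):

```python
def ease_in_out(t):
    """Smoothstep easing: goes 0 to 1 with zero slope at both ends."""
    return t * t * (3 - 2 * t)

def blend_configs(a, b, t):
    """Interpolate two facial configurations (dicts of floats) at progress t."""
    w = ease_in_out(max(0.0, min(1.0, t)))
    return {k: a[k] + (b[k] - a[k]) * w for k in a}

neutral = {"mouthScale": 1.0, "browLift": 0.0, "eyeSquish": 0.0}
laugh   = {"mouthScale": 1.6, "browLift": 0.3, "eyeSquish": 0.5}
halfway = blend_configs(neutral, laugh, 0.5)  # midpoint of the transition
```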

Idle Animations

  • Blinking: Natural random blinks with smooth eyelid motion
  • Breathing: Subtle chest expansion cycle
  • Saccades: Quick eye micro-movements for lifelike gaze
  • Head Micro-movements: Organic noise-based subtle head drift
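The random blink timing, for instance, can be approximated by sampling jittered intervals (a sketch; the interval values are illustrative, not taken from the package):

```python
import random

def blink_schedule(duration_s, rng, mean_interval=4.0, jitter=2.0):
    """Return blink timestamps within duration_s: roughly one blink
    every mean_interval seconds, jittered by ±jitter so the rhythm
    never feels mechanical."""
    t, times = 0.0, []
    while True:
        t += mean_interval + rng.uniform(-jitter, jitter)
        if t >= duration_s:
            return times
        times.append(t)

times = blink_schedule(30, random.Random(42))
```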

Development

Project Structure

This is a monorepo using Bun workspaces:

  • packages/ragdoll - Core character framework (@vokality/ragdoll)
  • apps/demo - Browser demo application with API, WebSocket, and MCP server
  • apps/emote - VS Code extension

The packages are cleanly separated, with type-safe interfaces between them.

Tech Stack

  • React 19 with TypeScript and React Compiler
  • SVG for 2D character rendering
  • Framer Motion for animations
  • Express 5 for REST API
  • Socket.io for WebSocket
  • MCP SDK for Model Context Protocol
  • Vite 7 for build tooling
  • Bun for server-side scripts

Building

# Development (from root)
npm run dev

# Production build (from root - builds all packages and apps)
npm run build

# Preview production build (from apps/demo)
cd apps/demo && npm run preview

# Type check (from root)
npm run typecheck

# Lint (from root)
npm run lint

# Format (from root)
npm run format

API Examples

Python

import requests

# Laugh, wink, and add a speech bubble
requests.post('http://localhost:3001/api/facial-state', json={
    'mood': {'value': 'laugh', 'duration': 0.4},
    'action': {'type': 'wink'},
    'bubble': {'text': 'Python says hi!', 'tone': 'default'}
})

# Get state
state = requests.get('http://localhost:3001/api/state').json()
print(f"Mood: {state['mood']}")
print(f"Bubble: {state['bubble']}")

JavaScript/Node.js

await fetch("http://localhost:3001/api/facial-state", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    mood: { value: "smile", duration: 0.3 },
    headPose: { yaw: 0.25, duration: 0.4 },
    bubble: { text: "JS was here", tone: "whisper" },
  }),
});

const state = await fetch("http://localhost:3001/api/state").then((r) =>
  r.json(),
);
console.log(state.headPose, state.bubble);

Troubleshooting

API server not starting

Make sure port 3001 is available:

lsof -i :3001

Character not visible

  1. Check browser console for errors
  2. Try refreshing the page

MCP server not connecting

  1. Verify MCP configuration path is correct (should point to apps/demo directory)
  2. Check that cd apps/demo && npm run mcp-server works standalone
  3. Restart your MCP client

License

MIT
