Voice AI Customer Service Platform
Next-Generation Customer Support: Intelligent AI agents that seamlessly collaborate with human representatives to deliver exceptional customer experiences at scale.
- Real-time dashboard with live metrics
- Switch tracking and resolution analytics
- Performance monitoring and SLA tracking
- Complete audit trail of all interactions
- Socket.io powered real-time updates
| Layer | Technology |
|---|---|
| Backend | Node.js, Express, TypeScript |
| Frontend | React 18, Vite, TypeScript |
| Database | PostgreSQL + pgvector (vector embeddings) |
| Cache | Redis (sessions + real-time state) |
| ORM | Prisma (type-safe database access) |
| Real-time | Socket.io (WebSocket communication) |
| AI Services | Retell AI (voice), Google Gemini (chat/copilot) |
| Telephony | Telnyx (phone network integration) |
| Embeddings | OpenAI (RAG knowledge base) |
```
┌─────────────────────────────────────────────────────────────────────┐
│                           CUSTOMER LAYER                            │
│                     (Voice Calls + Text Chat)                       │
└─────────────────────────────────────────────────────────────────────┘
                                │
                ┌───────────────┴───────────────┐
                │                               │
                ▼                               ▼
      ┌──────────────────┐            ┌──────────────────┐
      │   VOICE CHANNEL  │            │   TEXT CHANNEL   │
      │                  │            │                  │
      │  Telnyx Phone    │            │  Chat Widget     │
      │  Retell AI STT   │            │  Gemini LLM      │
      │  Retell AI TTS   │            │  Context Memory  │
      └────────┬─────────┘            └────────┬─────────┘
               │                               │
               └──────────────┬────────────────┘
                              │
                              ▼
                 ┌──────────────────────────┐
                 │      BACKEND CORE        │
                 │   (Node.js + Express)    │
                 │                          │
                 │  • Session Manager       │
                 │  • Switch Controller     │
                 │  • Copilot Engine        │
                 │  • RAG Knowledge Base    │
                 │  • Analytics Engine      │
                 │  • Webhook Handlers      │
                 └─────────┬────────────────┘
                           │
             ┌─────────────┼─────────────┐
             │             │             │
             ▼             ▼             ▼
       ┌──────────┐  ┌──────────┐  ┌──────────┐
       │PostgreSQL│  │  Redis   │  │Socket.io │
       │+pgvector │  │ Sessions │  │Real-time │
       └──────────┘  └──────────┘  └─────┬────┘
                                         │
                                         ▼
                         ┌──────────────────────────┐
                         │    AGENT DASHBOARD       │
                         │     (React SPA)          │
                         │                          │
                         │  • Live Transcript View  │
                         │  • AI Copilot Sidebar    │
                         │  • Queue Management      │
                         │  • Control Panel         │
                         │  • Analytics Dashboard   │
                         └──────────────────────────┘
```
The approach to seamless handoffs:
```
┌─────────────────────────────────────────────────────────────┐
│                      CONFERENCE ROOM                        │
│                                                             │
│   ┌──────────┐    ┌──────────┐    ┌──────────┐              │
│   │ CUSTOMER │    │ AI AGENT │    │  HUMAN   │              │
│   │          │    │          │    │   REP    │              │
│   │  Always  │    │  Muted/  │    │  Muted/  │              │
│   │  Active  │    │ Unmuted  │    │ Unmuted  │              │
│   └──────────┘    └──────────┘    └──────────┘              │
│                                                             │
│   SWITCH = Mute one participant, Unmute another             │
│   RESULT = Zero call drops, full context preserved          │
└─────────────────────────────────────────────────────────────┘
```
Benefits:
- No call reconnection required
- No context loss during handoff
- Sub-second switching time
- Customer doesn't hear any interruption
- Scalable to multiple agents per call
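The switch mechanic above can be sketched as a tiny state transition. This is an illustration only: the names (`ConferenceState`, `switchAgent`) are hypothetical, and a real implementation would mute/unmute conference legs through the telephony API rather than just updating a record.

```typescript
// Illustrative sketch of the conference-room switch described above.
// ConferenceState and switchAgent are hypothetical names, not the real API.
type Participant = "AI_AGENT" | "HUMAN_REP";

interface ConferenceState {
  customerActive: true;   // the customer leg never drops
  unmuted: Participant;   // exactly one agent speaks at a time
  switchCount: number;
}

function switchAgent(state: ConferenceState, to: Participant): ConferenceState {
  if (state.unmuted === to) return state; // already live, nothing to do
  // Because the switch is just a state change on an existing conference
  // (mute one leg, unmute another), it completes in sub-second time and
  // never requires reconnecting the customer.
  return { ...state, unmuted: to, switchCount: state.switchCount + 1 };
}

const start: ConferenceState = { customerActive: true, unmuted: "AI_AGENT", switchCount: 0 };
const afterEscalation = switchAgent(start, "HUMAN_REP");
```

Handing the call back to the AI later is the same operation in reverse, which is why multi-switch flows cost nothing extra.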
- Node.js 18+ and npm 9+
- Docker Desktop (optional - for PostgreSQL + Redis)
- API Keys (see Environment Variables section)
```bash
# 1. Clone the repository
git clone <repository-url>
cd Senpilot-Customer-Service-App

# 2. Install dependencies
npm install

# 3. Set up environment variables
cp .env.example .env
# Edit .env with your API keys (see below)

# 4. Start the development servers
npm run dev
```

The app will start with:
- Frontend: http://localhost:5173
- Backend: http://localhost:3001
- Storage: In-memory (works without Docker)
For persistent data storage and full analytics:
```bash
# Start PostgreSQL + Redis containers
docker-compose up -d

# Initialize the database
npm run db:generate
cd packages/database
export DATABASE_URL="postgresql://postgres:postgres@localhost:5433/customer_service?schema=public"
npx prisma migrate dev --name init
npx tsx src/seed.ts
cd ../..

# Start the app
npm run dev
```

| URL | Description |
|---|---|
| http://localhost:5173 | Customer Demo (Chat + Voice) |
| http://localhost:5173/agent | Agent Dashboard |
| http://localhost:3001/health | Backend Health Check |
The platform includes 3 pre-built demo scenarios to showcase different use cases:
| Scenario | Description |
|---|---|
| 💰 High Bill Dispute | Customer frustrated about unexpectedly high bill |
| 🚨 Report Gas Leak | Emergency situation requiring immediate escalation |
| 🏠 Setup New Service | New customer requesting service activation |
Click any scenario button in the Chat or Voice interface to start a pre-configured conversation.
Create a .env file in the project root. Copy from .env.example and fill in your values:
```bash
# ═══════════════════════════════════════════════════════════
# MINIMAL SETUP (Works without Docker)
# ═══════════════════════════════════════════════════════════

# Server Configuration
PORT=3001
NODE_ENV=development
FRONTEND_URL=http://localhost:5173

# Database (Required - but app falls back to in-memory if unavailable)
DATABASE_URL="postgresql://postgres:postgres@localhost:5433/customer_service?schema=public"
REDIS_URL="redis://localhost:6379"

# ═══════════════════════════════════════════════════════════
# AI SERVICES (Add these for full functionality)
# ═══════════════════════════════════════════════════════════

# Google Gemini - Powers text chat + AI copilot
# Get key at: https://makersuite.google.com/
GEMINI_API_KEY=your_gemini_api_key

# Retell AI - Powers voice calls
# Get key at: https://retellai.com
RETELL_API_KEY=your_retell_api_key
RETELL_AGENT_ID=your_retell_agent_id

# ═══════════════════════════════════════════════════════════
# OPTIONAL SERVICES (Enhanced features)
# ═══════════════════════════════════════════════════════════

# Telnyx - Phone number integration
TELNYX_API_KEY=your_telnyx_api_key
TELNYX_CONNECTION_ID=your_connection_id
TELNYX_PHONE_NUMBER=+1234567890

# OpenAI - For RAG embeddings
OPENAI_API_KEY=your_openai_api_key

# Webhooks - For production deployments
WEBHOOK_BASE_URL=https://your-domain.com
```

🎙️ Retell AI (Voice Agent)
- Sign up at retellai.com
- Create a new agent in the dashboard
- Configure the agent:
  - Model: `gpt-4o-mini` or `gpt-4`
  - Voice: Select from 11labs voices
  - System prompt: Use utility customer service context
- Copy your API key and Agent ID to `.env`
Utility Voice Agent Setup:
```bash
# Use our automated setup script
curl -X POST http://localhost:3001/api/voice/agent/create-llm

# This creates an LLM with:
# - Utility-specialized system prompt
# - Emergency gas leak detection
# - Billing/outage/payment knowledge
# - Natural conversation flow
```

💬 Google Gemini (Text Chat + Copilot)
- Get API key from Google AI Studio
- Add to `.env`: `GEMINI_API_KEY=your_key`
- The platform automatically uses Gemini for:
  - Text chat responses (same personality as voice)
  - Agent copilot suggestions
  - Sentiment analysis
  - Context-aware recommendations
No additional setup required - it works out of the box!
Telnyx (Optional - Phone Integration)
- Sign up at telnyx.com
- Purchase a phone number
- Create a TeXML application
- Set webhook URL: `https://your-domain/webhooks/telnyx`
- Assign phone number to application
- Add credentials to `.env`
Note: Phone integration is optional. Voice calls also work via browser WebRTC.
```
customer-service-app/
├── apps/
│   ├── backend/                      # Node.js API Server
│   │   └── src/
│   │       ├── controllers/          # HTTP endpoints & webhooks
│   │       │   ├── chatController.ts        # Text chat API
│   │       │   ├── voiceController.ts       # Voice call management
│   │       │   ├── switchController.ts      # AI↔Human switching
│   │       │   ├── retellController.ts      # Retell webhooks
│   │       │   └── analyticsController.ts   # Metrics & diagnostics
│   │       ├── services/
│   │       │   ├── chat/             # Chat message processing
│   │       │   ├── voice/            # Voice call handling
│   │       │   ├── ai/               # Gemini LLM integration
│   │       │   ├── copilot/          # AI copilot engine
│   │       │   ├── state/            # Redis session management
│   │       │   └── analytics/        # Metrics aggregation
│   │       ├── sockets/
│   │       │   └── agentGateway.ts   # Socket.io real-time events
│   │       └── server.ts             # Express + Socket.io server
│   │
│   └── web-client/                   # React Frontend
│       └── src/
│           ├── components/
│           │   ├── agent-dashboard/  # Agent UI components
│           │   │   ├── QueuePanel.tsx       # Incoming requests queue
│           │   │   ├── LiveTranscript.tsx   # Real-time conversation
│           │   │   ├── SidebarCopilot.tsx   # AI suggestions panel
│           │   │   ├── ChatReplyInput.tsx   # Agent message input
│           │   │   └── ControlPanel.tsx     # Switch/mute controls
│           │   ├── customer-widget/  # Customer-facing UI
│           │   │   ├── ChatWindow.tsx       # Text chat interface
│           │   │   └── CallButton.tsx       # Voice call button
│           │   └── shared/           # Reusable components
│           ├── hooks/
│           │   ├── useCallState.ts   # Call state + Socket.io
│           │   ├── useAgentQueue.ts  # Queue management
│           │   └── useChatSocket.ts  # Chat real-time sync
│           └── pages/
│               ├── AgentPortal.tsx   # Main agent dashboard
│               └── CustomerDemo.tsx  # Customer demo page
│
├── packages/
│   ├── database/                     # Prisma ORM
│   │   ├── prisma/
│   │   │   └── schema.prisma         # Database models
│   │   └── src/
│   │       ├── index.ts              # Prisma client
│   │       └── seed.ts               # Test data seeder
│   │
│   └── shared-types/                 # TypeScript Interfaces
│       └── src/
│           └── index.ts              # Shared types across apps
│
├── docker-compose.yml                # PostgreSQL + Redis
├── package.json                      # Monorepo workspace config
└── .env                              # Environment variables
```
The command center for human representatives:
- **Queue Panel** (Left): Live incoming requests with alerts
- **Transcript View** (Center): Real-time conversation display
- **Copilot Panel** (Right): AI suggestions and knowledge search
- **Control Panel** (Bottom): Switch to/from AI, mute, hold, end
- **Metrics Footer**: Active calls, resolution times, performance
Dual-channel customer interface:
- Text Chat: Instant messaging with AI/human agents
- Voice Call: Browser-based WebRTC voice calls
- Seamless Mode Switching: Toggle between chat and voice
- Status Indicators: AI vs Human agent, connection status
| Endpoint | Method | Description |
|---|---|---|
| /health | GET | Health check with service status |
| /api/chat | POST | Send customer chat message |
| /api/chat/respond | POST | Human agent response |
| /api/chat/switch | POST | Switch between AI and human |
| /api/voice/web-call | POST | Create browser-based voice call |
| /api/voice/agent | GET | Get voice agent configuration |
| /api/switch | POST | AI↔Human handoff for voice |
| /api/analytics/dashboard | GET | Live dashboard metrics |
| /api/analytics/switches | GET | Switch analytics by timeframe |
| /api/copilot/search | POST | Search knowledge base |
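A hedged sketch of calling one of these endpoints from a client. The body fields (`sessionId`, `message`) are assumptions for illustration; the actual request schema lives in the backend controllers (e.g. `chatController.ts`).

```typescript
// Hypothetical request builder for POST /api/chat. The payload shape here
// is an assumption; check chatController.ts for the real schema.
interface ChatRequest {
  sessionId: string;
  message: string;
}

function buildChatRequest(baseUrl: string, body: ChatRequest) {
  return {
    url: `${baseUrl}/api/chat`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(body),
    },
  };
}

// Usage (Node 18+ has fetch built in):
//   const { url, init } = buildChatRequest("http://localhost:3001",
//     { sessionId: "s1", message: "Why is my bill so high?" });
//   const reply = await fetch(url, init).then((r) => r.json());
```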
Client → Server:

- `agent:join` - Agent joins their room
- `call:join` - Subscribe to call updates
- `call:request_switch` - Request AI↔Human switch
- `chat:send_message` - Agent sends chat message
- `queue:subscribe` - Subscribe to queue updates
- `metrics:subscribe` - Subscribe to live metrics

Server → Client:

- `transcript:update` - New message in conversation
- `copilot:suggestion` - AI suggestion for agent
- `call:state_update` - Call mode changed
- `queue:add` - New request in queue
- `queue:update` - Queue item updated
- `metrics:update` - Dashboard metrics refresh
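One way to keep these event names consistent between backend and frontend is a shared typed event map. The sketch below is illustrative: the payload fields are assumptions (the real shapes would live in `packages/shared-types`), and the tiny router stands in for the Socket.io wiring.

```typescript
// Illustrative typed map for a few of the server→client events above.
// Payload fields are assumptions, not the platform's actual schema.
interface ServerToClientEvents {
  "transcript:update": { callId: string; speaker: "AI" | "HUMAN" | "CUSTOMER"; text: string };
  "copilot:suggestion": { callId: string; suggestion: string };
  "call:state_update": { callId: string; mode: "AI" | "HUMAN" };
}

// Minimal dispatcher standing in for Socket.io, to show how typed
// handlers attach to the named events.
class EventRouter {
  private handlers = new Map<string, ((p: any) => void)[]>();

  on<E extends keyof ServerToClientEvents>(
    event: E,
    h: (p: ServerToClientEvents[E]) => void
  ): void {
    const list = this.handlers.get(event) ?? [];
    list.push(h);
    this.handlers.set(event, list);
  }

  dispatch<E extends keyof ServerToClientEvents>(
    event: E,
    payload: ServerToClientEvents[E]
  ): void {
    for (const h of this.handlers.get(event) ?? []) h(payload);
  }
}
```

With a real Socket.io setup, the same interface can be passed as the generic parameter to `Socket` so mistyped event names fail at compile time.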
The platform tracks comprehensive analytics:
```jsonc
{
  "overview": {
    "totalCalls": 1547,
    "activeCalls": 12,
    "avgDuration": 245,
    "totalSwitches": 289
  },
  "today": {
    "calls": 87,
    "switches": 23,
    "avgDuration": 198
  },
  "modeDistribution": {
    "aiResolved": 1094,      // 70.7% AI resolution
    "humanResolved": 312,    // 20.2% human only
    "mixed": 141             // 9.1% both
  },
  "switchMetrics": {
    "avgSwitchTime": 1.2,    // Seconds
    "topReasons": {
      "CUSTOMER_REQUEST": 152,
      "COMPLEXITY": 89,
      "ESCALATION": 48
    }
  }
}
```

- Average handle time (AHT)
- First response time (FRT)
- Resolution rate by channel
- Agent utilization
- Customer satisfaction proxy metrics
- Emergency detection accuracy
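The resolution percentages in the sample payload can be derived directly from the `modeDistribution` counts. A small sketch (the function name is illustrative, not part of the analytics API):

```typescript
// Illustrative helper: derive resolution-rate percentages from the
// modeDistribution counts in the dashboard payload shown earlier.
interface ModeDistribution {
  aiResolved: number;
  humanResolved: number;
  mixed: number;
}

function resolutionRates(d: ModeDistribution) {
  const total = d.aiResolved + d.humanResolved + d.mixed; // 1547 in the sample
  // Round to one decimal place for display.
  const pct = (n: number) => Math.round((n / total) * 1000) / 10;
  return { ai: pct(d.aiResolved), human: pct(d.humanResolved), mixed: pct(d.mixed) };
}

const rates = resolutionRates({ aiResolved: 1094, humanResolved: 312, mixed: 141 });
// rates.ai = 70.7, rates.human = 20.2, rates.mixed = 9.1
```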
| Scenario | Channel | Steps |
|---|---|---|
| Happy Path | Voice | Customer inquiry → AI resolves → Call ends |
| Escalation | Voice | Customer requests human → Switch → Human resolves |
| Emergency | Voice | Gas leak mentioned → Auto-escalate → Emergency team |
| Text Chat | Chat | Customer asks question → AI responds → Follow-up |
| Multi-switch | Both | AI → Human → AI → Human (stress test) |
```bash
# Backend API tests
cd apps/backend
npm test

# Frontend component tests
cd apps/web-client
npm test

# E2E tests (full flow)
npm run test:e2e
```

Our specialized domain with pre-built knowledge:
- Billing inquiries: Explain charges, rate tiers, high bills
- Payment support: Set up payment plans, financial hardship
- Outage reporting: Status updates, estimated restoration
- Service changes: Start/stop/transfer service
- Emergency response: Gas leak detection and escalation
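The emergency-response path hinges on spotting phrases like "gas leak" as early as possible. In the platform this detection runs through the voice LLM's prompt; the keyword check below is only a sketch of the escalation trigger, with an illustrative phrase list.

```typescript
// Naive illustration of emergency phrase detection. The actual platform
// detects emergencies via the LLM system prompt; this keyword scan only
// sketches the idea of an auto-escalation trigger.
const EMERGENCY_PHRASES = ["gas leak", "smell gas", "carbon monoxide", "gas smell"];

function isEmergency(utterance: string): boolean {
  const text = utterance.toLowerCase();
  return EMERGENCY_PHRASES.some((phrase) => text.includes(phrase));
}
```

When the trigger fires, the conference-switch mechanic lets the platform bring in the emergency team without dropping the customer's call.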
ROI: A 70% AI resolution rate translates to an estimated ~$3M in annual savings for a 100-agent call center.
- Order tracking and status updates
- Returns and refund processing
- Product recommendations
- VIP customer prioritization
- Inventory and shipping inquiries
- Appointment scheduling and reminders
- Insurance verification
- Prescription refills
- General health information (non-diagnosis)
- HIPAA-compliant audit trails
- Account balance and transaction inquiries
- Fraud detection and reporting
- Loan/mortgage application support
- Investment guidance escalation
- Compliance-ready conversation logs
- All API calls encrypted with TLS 1.3
- Database encryption at rest
- Redis session data encrypted
- PII data masked in logs
- Complete conversation transcripts stored
- Switch events logged with timestamps
- Agent actions tracked
- GDPR data deletion support
- Configurable data retention policies
- Agent authentication (planned)
- Role-based access control (planned)
- API key rotation support
- Rate limiting on public endpoints
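Rate limiting on public endpoints can be as simple as a token bucket per client key. A dependency-free sketch (illustrative, not the platform's actual middleware; production setups typically back this with Redis so limits survive restarts and are shared across instances):

```typescript
// Minimal token-bucket rate limiter, one bucket per client.
// Illustrative only: not the platform's middleware.
class TokenBucket {
  private tokens: number;
  private last: number;

  constructor(
    private capacity: number,      // max burst size
    private refillPerSec: number,  // sustained requests per second
    now: number = Date.now()
  ) {
    this.tokens = capacity;
    this.last = now;
  }

  // Returns true if the request is allowed, false if rate-limited.
  allow(now: number = Date.now()): boolean {
    const elapsedSec = (now - this.last) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSec);
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

An Express handler would keep a `Map<string, TokenBucket>` keyed by IP or API key and return 429 when `allow()` is false.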
Redis connection refused
The app automatically falls back to in-memory storage. You'll see:

```
Redis unavailable - using in-memory storage
(Start Redis with: docker-compose up -d)
```
For persistent sessions, start Docker:

```bash
docker-compose up -d
```

Database connection failed
If you see Prisma errors about database connection:
- Option A: Start Docker for full database support:

  ```bash
  docker-compose up -d
  npm run db:generate
  ```

- Option B: Continue without database (analytics will show mock data)
Voice calls not working
Voice calls require Retell AI configuration:
- Sign up at retellai.com
- Create a voice agent
- Add to `.env`:

  ```bash
  RETELL_API_KEY=your_key
  RETELL_AGENT_ID=your_agent_id
  ```

- Restart the server
Text chat shows basic responses
For AI-powered responses, add your Gemini API key:
- Get key from Google AI Studio
- Add to `.env`: `GEMINI_API_KEY=your_key`
- Restart the server
Port already in use
Kill existing processes:
```bash
# Kill backend (port 3001)
lsof -ti:3001 | xargs kill -9

# Kill frontend (port 5173)
lsof -ti:5173 | xargs kill -9

# Restart
npm run dev
```

This project is licensed under the MIT License - see the LICENSE file for details.
- Retell AI - Voice AI platform
- Google Gemini - LLM for chat & copilot
- Telnyx - Telephony infrastructure
- OpenAI - Embeddings for RAG
- Prisma - Next-gen ORM