A high-performance, scalable chat application built with modern technologies, supporting 10,000+ concurrent users.
```mermaid
graph TB
    subgraph "Frontend Clients"
        C1[Client 1]
        C2[Client 2]
        C3[Client n...]
    end
    subgraph "Load Balancer"
        LB[NGINX]
    end
    subgraph "WebSocket Servers"
        WS1[WebSocket Server 1]
        WS2[WebSocket Server 2]
        WS3[WebSocket Server n...]
    end
    subgraph "Message Queue"
        K[Apache Kafka]
    end
    subgraph "Pub/Sub Layer"
        R1[Redis Pub/Sub]
    end
    subgraph "Database"
        DB[PlanetScale]
    end
    C1 & C2 & C3 --> LB
    LB --> WS1 & WS2 & WS3
    WS1 & WS2 & WS3 <--> R1
    WS1 & WS2 & WS3 <--> K
    K --> DB
```
```mermaid
sequenceDiagram
    participant Client
    participant LoadBalancer
    participant WebSocketServer1
    participant WebSocketServer2
    participant RedisPubSub
    participant Kafka
    participant PlanetScale
    Client->>LoadBalancer: Connect via WebSocket
    LoadBalancer->>WebSocketServer1: Route to available server
    Client->>WebSocketServer1: Send message
    WebSocketServer1->>RedisPubSub: Publish message
    RedisPubSub->>WebSocketServer2: Broadcast to all servers
    WebSocketServer2->>Client: Deliver to other clients
    WebSocketServer1->>Kafka: Queue message for persistence
    Kafka->>PlanetScale: Store message in database
```
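The fan-out step above (one server publishes, every server receives, each delivers to its own clients) can be sketched with a small in-memory broker standing in for Redis Pub/Sub. The `Broker` and `ChatServer` names are illustrative, not the project's actual API:

```typescript
// Minimal in-memory stand-in for Redis Pub/Sub: each ChatServer
// subscribes to a channel; publishing fans the message out to every
// server, which then delivers it to its locally connected clients.
type Handler = (message: string) => void;

class Broker {
  private subscribers = new Map<string, Handler[]>();

  subscribe(channel: string, handler: Handler): void {
    const list = this.subscribers.get(channel) ?? [];
    list.push(handler);
    this.subscribers.set(channel, list);
  }

  publish(channel: string, message: string): void {
    for (const handler of this.subscribers.get(channel) ?? []) {
      handler(message);
    }
  }
}

class ChatServer {
  readonly delivered: string[] = []; // messages pushed to this server's clients

  constructor(readonly name: string, broker: Broker, channel: string) {
    broker.subscribe(channel, (msg) => this.delivered.push(msg));
  }
}

const broker = new Broker();
const ws1 = new ChatServer("ws1", broker, "room:42");
const ws2 = new ChatServer("ws2", broker, "room:42");

// A client on ws1 sends a message; both servers receive it via the broker,
// so clients connected to ws2 also get the message.
broker.publish("room:42", "hello from a client on ws1");
```

In production the broker is Redis, so the fan-out crosses process and machine boundaries rather than staying in one process.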
- Real-time Communication: Low-latency messaging using WebSocket protocol
- High Scalability: Supports 10,000+ concurrent users
- Message Persistence: Reliable message storage using PlanetScale
- Load Balancing: Efficient request distribution with NGINX
- Microservices Architecture: Built with Turborepo for better modularity
- Message Queueing: Kafka integration for reliable message processing
- Pub/Sub System: Redis for real-time message broadcasting
- High Availability: 99.9% uptime during peak traffic
- Next.js for server-side rendering
- TypeScript for type safety
- WebSocket client implementation
- Node.js microservices
- WebSocket server implementation
- Turborepo for monorepo management
- Redis Pub/Sub for message broadcasting
- Apache Kafka for message queuing
- PlanetScale for scalable database
- NGINX for load balancing
```
├── apps/
│   ├── web/                 # Next.js frontend
│   └── websocket-server/    # Node.js WebSocket server
├── packages/
│   ├── shared/              # Shared utilities
│   ├── database/            # Database schemas
│   └── config/              # Configuration
└── turbo.json               # Turborepo configuration
```
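For a layout like this, the `turbo.json` might look roughly as follows; the task names and output globs are assumptions for illustration, not the repository's actual configuration:

```json
{
  "$schema": "https://turbo.build/schema.json",
  "pipeline": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": [".next/**", "dist/**"]
    },
    "dev": {
      "cache": false,
      "persistent": true
    }
  }
}
```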
- Clone the repository

```bash
git clone https://github.com/yourusername/scalable-chat-app.git
cd scalable-chat-app
```

- Install dependencies

```bash
npm install
```

- Set up environment variables

```bash
cp .env.example .env
```

- Start development servers

```bash
npm run dev
```
- Node.js 18+
- Redis
- Apache Kafka
- PlanetScale account
- Start Redis server

```bash
redis-server
```

- Start Kafka

```bash
# Start Zookeeper
bin/zookeeper-server-start.sh config/zookeeper.properties

# Start Kafka
bin/kafka-server-start.sh config/server.properties
```

- Run the application

```bash
turbo dev
```
- Multiple WebSocket servers handle client connections
- Redis Pub/Sub ensures message delivery across all servers
- NGINX load balancer distributes client connections
- PlanetScale handles database scaling automatically
- Kafka manages high-throughput message persistence
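The NGINX piece of this can be sketched as an upstream of WebSocket servers with the `Upgrade` headers WebSockets require; the server names and ports here are hypothetical:

```nginx
# Hypothetical NGINX config: sticky (ip_hash) load balancing across
# WebSocket servers, with the headers needed for the WebSocket upgrade.
upstream websocket_servers {
    ip_hash;                      # keep a client pinned to one server
    server ws1.internal:8080;
    server ws2.internal:8080;
}

server {
    listen 80;

    location /ws {
        proxy_pass http://websocket_servers;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 3600s;  # keep long-lived connections open
    }
}
```

`ip_hash` keeps reconnecting clients on the same backend; with Redis Pub/Sub fanning messages across servers, strict stickiness is a latency optimization rather than a correctness requirement.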
- Connection pooling and query optimization for database efficiency
- Message batching for bulk operations
- Caching frequently accessed data in Redis
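The message-batching idea can be sketched as a small buffer that flushes once a size threshold is reached (a real server would also flush on a timer); the `MessageBatcher` name and threshold are illustrative:

```typescript
// Collects individual messages and flushes them in batches once maxSize
// is reached. In a real server the flush callback would perform one bulk
// INSERT instead of N single-row writes.
class MessageBatcher<T> {
  private buffer: T[] = [];

  constructor(
    private readonly maxSize: number,
    private readonly flush: (batch: T[]) => void,
  ) {}

  add(message: T): void {
    this.buffer.push(message);
    if (this.buffer.length >= this.maxSize) this.flushNow();
  }

  flushNow(): void {
    if (this.buffer.length === 0) return;
    const batch = this.buffer;
    this.buffer = [];
    this.flush(batch);
  }
}

const batches: string[][] = [];
const batcher = new MessageBatcher<string>(3, (b) => batches.push(b));

["a", "b", "c", "d"].forEach((m) => batcher.add(m));
batcher.flushNow(); // flush the remaining partial batch
```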
- Concurrent Users: 10,000+
- Message Latency: <100ms
- Uptime: 99.9%
- Message Throughput: 1000+ messages/second
- WebSocket connection authentication
- Rate limiting
- Input validation
- SQL injection prevention
- XSS protection
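Rate limiting per connection is commonly done with a token bucket; a minimal sketch follows, with the `TokenBucket` name and limits chosen for illustration rather than taken from the project:

```typescript
// Token-bucket rate limiter: each connection gets `capacity` tokens that
// refill at `refillPerSecond`; a message is allowed only if a token is
// available, so short bursts pass but sustained flooding is rejected.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private readonly capacity: number,
    private readonly refillPerSecond: number,
    now: number = Date.now(),
  ) {
    this.tokens = capacity;
    this.lastRefill = now;
  }

  allow(now: number = Date.now()): boolean {
    const elapsedSec = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(
      this.capacity,
      this.tokens + elapsedSec * this.refillPerSecond,
    );
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// Burst of 2 allowed, then rejected; after 2 seconds a token has refilled.
const bucket = new TokenBucket(2, 1, 0);
const results = [bucket.allow(0), bucket.allow(0), bucket.allow(0), bucket.allow(2000)];
```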
- Implement message encryption
- Add file sharing capabilities
- Enhance monitoring and alerting
- Add support for voice/video calls
- Implement message search functionality
Contributions are welcome! Please read our Contributing Guide for details on our code of conduct and the process for submitting pull requests.
This project is licensed under the MIT License - see the LICENSE file for details.
Mohd Jami Khan
- LinkedIn: Mohd Jami Khan
- Portfolio: mohdjami.me
- Email: [email protected]
Made with ❤️ by Mohd Jami Khan