A high-performance, Dockerized rate limiting microservice built with Node.js, TypeScript, Redis, Lua scripting, and Redis Cluster, implementing multiple real-world algorithms used in API Gateways.
## 📑 Table of Contents
- ⭐ Features
- 🛠 Tech Stack
- 🧩 Architecture Overview
- 🧠 Rate Limiting Algorithms (Full Details)
- 🔥 API Endpoints
- 📦 Example Response
- 🐳 Setup & Installation
- 🧪 Testing & Load Scenarios
- 📬 Postman Collection
- 🧭 Roadmap
- 📄 License
- 🤝 Contributing
- ⭐ Show Your Support
## ⭐ Features
✔ Implements four industry-standard algorithms:
- Fixed Window
- Sliding Window
- Token Bucket
- Leaky Bucket
✔ Redis Cluster support for high scalability
✔ Atomic operations using Lua
✔ `/metrics` endpoint aggregates: `allowedRequests`, `blockedRequests`, `tokensRemaining`, `resetTime`, `totalRequests`
✔ Dockerized microservice
✔ Basic Auth (for testing)
✔ Modular folder structure
Planned / upcoming:
- Sliding Window Log algorithm
- Load balancing layer
- API Keys & RBAC (possible with Redis alone)
- Middleware-level caching
- Gateway features:
- rate limiting
- authentication
- load balancing
- circuit breaking
- Prometheus/Grafana dashboards
- Distributed tracing (OpenTelemetry)
## 🛠 Tech Stack
Node.js • TypeScript • Express.js • Redis / Redis Cluster • Lua • Zod • Docker & Docker Compose
## 🧩 Architecture Overview
Client → API Gateway (future) → Rate Limiter Service → Redis / Redis Cluster
- Each algorithm uses:
- Dedicated Redis keys
- Atomic Lua scripts
- Isolated logic for dashboard comparison
## 🧠 Rate Limiting Algorithms (Full Details)
### 1️⃣ Fixed Window Algorithm
The Fixed Window algorithm allows a fixed number of requests within each fixed time window.
- Example: 10 requests per 60 seconds
- Requests exceeding limit → blocked
- Counter resets when next window starts
- Boundary burst problem: 10 requests at the 59th second + 10 at the 1st second of the next window → 20 requests in ~2 seconds → possible overload
- Reason: the counter tracks only the current window, not the trailing 60 seconds
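The counting logic above can be sketched in-memory in TypeScript (a minimal sketch only — the actual service keeps this state in Redis and updates it atomically via Lua; the names and signatures here are illustrative):

```typescript
// In-memory Fixed Window sketch: one counter per key per window.
type WindowState = { windowStart: number; count: number };

const counters = new Map<string, WindowState>();

function fixedWindowAllow(
  key: string,
  limit: number,
  windowMs: number,
  now: number = Date.now()
): boolean {
  // Windows are aligned to multiples of windowMs.
  const windowStart = Math.floor(now / windowMs) * windowMs;
  const state = counters.get(key);
  if (!state || state.windowStart !== windowStart) {
    // A new window has started: the counter resets.
    counters.set(key, { windowStart, count: 1 });
    return true;
  }
  if (state.count < limit) {
    state.count += 1;
    return true;
  }
  return false; // limit reached for this window
}
```

With a limit of 10 requests per 60 seconds, the 11th call inside one window is blocked, and the counter resets as soon as the next window starts.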
### 2️⃣ Token Bucket Algorithm
Maintains a bucket of tokens; each incoming request must consume a token to proceed.
- Bucket has fixed capacity
- Tokens refill at a fixed rate
- Each request consumes 1 token
- If tokens exist → request allowed, else blocked
- Supports bursts up to bucket capacity
- Smooth traffic control
- Capacity = 10 tokens, Refill = 1 token/sec
- 10 requests → allowed
- 11th → blocked
- After 1 sec → 1 token refills → allowed
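The refill-and-consume behaviour can be sketched in-memory in TypeScript (illustrative only — the service itself stores bucket state in Redis and refills it atomically via Lua):

```typescript
// In-memory Token Bucket sketch: tokens refill continuously over time.
interface Bucket {
  tokens: number;     // current token count (may be fractional)
  lastRefill: number; // timestamp (ms) of the last refill
}

const buckets = new Map<string, Bucket>();

function tokenBucketAllow(
  key: string,
  capacity: number,
  refillPerSec: number,
  now: number = Date.now()
): boolean {
  const b = buckets.get(key) ?? { tokens: capacity, lastRefill: now };
  // Refill proportionally to elapsed time, capped at capacity.
  const elapsedSec = (now - b.lastRefill) / 1000;
  b.tokens = Math.min(capacity, b.tokens + elapsedSec * refillPerSec);
  b.lastRefill = now;
  buckets.set(key, b);
  if (b.tokens >= 1) {
    b.tokens -= 1; // each request consumes exactly one token
    return true;
  }
  return false;
}
```

With capacity = 10 and refill = 1 token/sec, a burst of 10 passes, the 11th is blocked, and one second later a single refilled token lets one more request through.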
### 3️⃣ Leaky Bucket Algorithm
Ensures a constant output rate.
- Requests enter a queue (bucket)
- Processed at fixed leak rate
- If bucket full → request rejected
- Smooth & uniform traffic
- Prevents burst attacks
- Protects server load
- No bursts allowed
- Example: Leak rate = 5 req/sec → 100 requests arrive → only 5 processed/sec, rest queued/rejected
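The fill-and-leak behaviour can be sketched in-memory in TypeScript (illustrative only — the real service keeps bucket state in Redis via Lua). This variant tracks the bucket's fill level rather than holding an actual queue:

```typescript
// In-memory Leaky Bucket sketch: the bucket drains at a constant rate,
// and arrivals that would overflow it are rejected.
interface Leaky {
  level: number;    // current bucket fill level
  lastLeak: number; // timestamp (ms) of the last drain
}

const leakyBuckets = new Map<string, Leaky>();

function leakyBucketAllow(
  key: string,
  capacity: number,
  leakPerSec: number,
  now: number = Date.now()
): boolean {
  const b = leakyBuckets.get(key) ?? { level: 0, lastLeak: now };
  // Drain the bucket for the elapsed time since the last check.
  const elapsedSec = (now - b.lastLeak) / 1000;
  b.level = Math.max(0, b.level - elapsedSec * leakPerSec);
  b.lastLeak = now;
  leakyBuckets.set(key, b);
  if (b.level + 1 <= capacity) {
    b.level += 1; // accept the request into the bucket
    return true;
  }
  return false; // bucket full → reject
}
```

With capacity = 5 and a leak rate of 5 req/sec, a burst of 6 simultaneous requests gets 5 accepted and 1 rejected; 200 ms later one slot has leaked out and frees capacity again.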
### 4️⃣ Sliding Window Algorithm
Improves on Fixed Window by tracking requests over the last N seconds rather than in fixed blocks.
- Fairer distribution
- Prevents burst issues at window edges
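One common way to implement this is the sliding-window *counter* variant (assumed here, since the Sliding Window *Log* algorithm is listed separately on the roadmap): the count over the trailing window is estimated by weighting the previous fixed window's count by how much of it still overlaps. A minimal in-memory TypeScript sketch (the service keeps these counts in Redis via Lua):

```typescript
// In-memory sliding-window counter sketch.
interface SlidingState {
  start: number;     // start timestamp (ms) of the current fixed window
  count: number;     // requests in the current window
  prevCount: number; // requests in the immediately preceding window
}

const slidingStates = new Map<string, SlidingState>();

function slidingWindowAllow(
  key: string,
  limit: number,
  windowMs: number,
  now: number = Date.now()
): boolean {
  const start = Math.floor(now / windowMs) * windowMs;
  let s = slidingStates.get(key);
  if (!s || s.start !== start) {
    // Roll over: the old "current" count becomes the previous count,
    // but only if it belongs to the directly preceding window.
    const prevCount = s && s.start === start - windowMs ? s.count : 0;
    s = { start, count: 0, prevCount };
    slidingStates.set(key, s);
  }
  // Fraction of the previous window still inside the trailing window.
  const overlap = 1 - (now - start) / windowMs;
  const estimated = s.prevCount * overlap + s.count;
  if (estimated < limit) {
    s.count += 1;
    return true;
  }
  return false;
}
```

Unlike Fixed Window, a burst just before the boundary still counts (with decaying weight) just after it, so the 20-requests-in-2-seconds edge case is throttled.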
## 🔥 API Endpoints

| Method | Endpoint | Description |
|---|---|---|
| POST | `/api/limiter/test` | Fixed Window |
| POST | `/api/limiter/sliding` | Sliding Window |
| POST | `/api/limiter/tokenbucket` | Token Bucket |
| POST | `/api/limiter/leakybucket` | Leaky Bucket |
| POST | `/api/limiter/all` | Run all algorithms together |
| GET | `/api/limiter/metrics` | Get aggregated metrics |
## 📦 Example Response

```json
{
  "activeKeys": 1,
  "response": {
    "allowed": true,
    "remaining": 9,
    "resetTime": 1703174400
  },
  "blockedRequests": 27,
  "allowedRequests": 67,
  "totalRequests": 94
}
```

## 🐳 Setup & Installation

```bash
# Clone the repository
git clone https://github.com/Anshikakalpana/rate-limiter
cd rate-limiter

# Start with Docker Compose
docker compose up --build
```

This starts:
- Redis Cluster
- Node.js server
- Lua scripts (loaded automatically)
## 🧪 Testing & Load Scenarios
Test burst traffic, compare allowed vs. blocked requests, and compare algorithms via the `/metrics` endpoint.
Suggested tools: Postman Runner, k6, Artillery, JMeter.
## 📬 Postman Collection
A Postman collection is provided to test all endpoints.
## 🧭 Roadmap
- Sliding Window Log algorithm
- API Keys & RBAC (Redis-only)
- Middleware caching
- Load balancing
- Circuit breaking
- Prometheus/Grafana dashboards
- Distributed tracing (OpenTelemetry)
## 📄 License
MIT License — free for personal & commercial use.
## 🤝 Contributing
Contributions, issues, and feature requests are welcome!
Feel free to check the issues page.
## ⭐ Show Your Support
Give a ⭐️ if this project helped you!
Made with ❤️ by Anshika Kalpana