This implementation provides rate limiting designed exclusively for production environments. It requires Redis and fails fast if misconfigured, ensuring no security gaps in production deployments.
- Redis Required: Fails to start without Redis configuration
- No Development Fallbacks: Eliminates potential security issues from development code
- Fail-Safe: Blocks requests when rate limiting service is unavailable
- IP Validation: Strict IP detection with security-first error handling
```typescript
// The REDIS_URL environment variable is REQUIRED
if (!process.env.REDIS_URL) {
  throw new Error('REDIS_URL environment variable is required for rate limiting');
}
```

| Endpoint | Rate Limit | Duration | Block Duration | Use Case |
|---|---|---|---|---|
| General APIs | 100 requests | 15 minutes | 15 minutes | Search, validation |
| Team Registration | 3 registrations | 1 hour | 1 hour | Prevent spam registrations |
| File Uploads | 5 uploads | 1 hour | 1 hour | Protect server resources |
| College Creation | 10 colleges | 1 hour | 30 minutes | Moderate content creation |
| Validation Requests | 50 requests | 10 minutes | 10 minutes | Form validation |
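The limits in the table above can be collected into a single config object. This is a sketch: the key names (`general`, `registration`, and so on) are illustrative, while the `points`/`duration`/`blockDuration` field names follow the convention used in the custom-limit example later in this document.

```typescript
// Sketch: per-endpoint rate limit settings mirroring the table above.
// All durations are in seconds; the endpoint key names are illustrative.
interface RateLimitRule {
  points: number;        // requests allowed within the window
  duration: number;      // window length (seconds)
  blockDuration: number; // how long to block after the limit is hit (seconds)
}

const rateLimiterConfig: Record<string, RateLimitRule> = {
  general:      { points: 100, duration: 15 * 60, blockDuration: 15 * 60 },
  registration: { points: 3,   duration: 60 * 60, blockDuration: 60 * 60 },
  upload:       { points: 5,   duration: 60 * 60, blockDuration: 60 * 60 },
  college:      { points: 10,  duration: 60 * 60, blockDuration: 30 * 60 },
  validation:   { points: 50,  duration: 10 * 60, blockDuration: 10 * 60 },
};
```

Keeping every limit in one object makes the table and the code easy to diff when limits change.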
For local development, no additional setup is required: in-memory storage is used automatically.
- Sign up for Redis Cloud (free tier available)
- Create a database and get your connection string
- Add to your environment variables:
```bash
REDIS_URL=redis://default:password@host:port
NODE_ENV=production
```

- Sign up for Upstash (well suited for serverless)
- Create a Redis database
- Get your connection string:
```bash
REDIS_URL=redis://default:password@host:port
NODE_ENV=production
```

```bash
# Install Redis locally
docker run -d -p 6379:6379 redis:alpine

# Or use a package manager
# macOS: brew install redis
# Ubuntu: sudo apt install redis-server

# Environment variable
REDIS_URL=redis://localhost:6379
```

The system uses a multi-layer approach to detect the real client IP, with production-grade validation:
```typescript
function getClientIP(request: NextRequest): string {
  // 1. Try the Next.js built-in IP (if available)
  // 2. x-forwarded-for header (proxy/load balancer)
  // 3. x-real-ip header (nginx)
  // 4. cf-connecting-ip header (Cloudflare)
  // 5. FAIL if no valid IP is found (security-first approach)
}
```

- No Fallbacks: If the IP cannot be determined, the request is rejected
- Strict Validation: Filters out localhost and invalid IPs
- IPv6 Support: Handles both IPv4 and IPv6 addresses
- Fail-Safe: Blocks requests when IP detection fails
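The detection chain above can be sketched as follows. To keep the sketch framework-agnostic it takes a plain header map rather than a `NextRequest` (the real version would read `request.headers`), and the validation regexes are deliberately simple illustrations, not exhaustive IP validators.

```typescript
// Sketch of the multi-layer IP detection described above.
// The regexes are illustrative; they do not validate IPv4 octet ranges.
const IPV4 = /^(\d{1,3}\.){3}\d{1,3}$/;
const IPV6 = /^[0-9a-fA-F:]+$/;

function isValidClientIP(ip: string): boolean {
  // Strict validation: localhost is never accepted as a client IP
  if (ip === '127.0.0.1' || ip === '::1' || ip === 'localhost') return false;
  return IPV4.test(ip) || (ip.includes(':') && IPV6.test(ip));
}

function getClientIP(headers: Record<string, string | undefined>): string {
  const candidates = [
    headers['x-forwarded-for']?.split(',')[0]?.trim(), // proxy / load balancer
    headers['x-real-ip'],                              // nginx
    headers['cf-connecting-ip'],                       // Cloudflare
  ];
  for (const ip of candidates) {
    if (ip && isValidClientIP(ip)) return ip;
  }
  // Fail-safe: no valid IP means the request must be rejected
  throw new Error('Unable to determine client IP');
}
```

Note that `x-forwarded-for` may contain a comma-separated chain; only the first (left-most) entry is the original client.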
Eliminated Security Risks:
- ❌ No in-memory storage fallback
- ❌ No development session IDs
- ❌ No localhost IP sharing
- ❌ No automatic environment detection
- ❌ No development warning suppression
Security-First Design:
- ✅ Redis Required: The application fails to start without Redis
- ✅ Strict IP Validation: Rejects requests with undetectable IPs
- ✅ Fail-Safe Approach: Blocks requests when the service is unavailable
- ✅ No Development Code: Zero development-specific logic in production
Request proceeds normally with headers:

```
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 95
X-RateLimit-Reset: 2025-08-11T07:00:00.000Z
```
```json
{
  "status": "error",
  "message": "Too many requests. Please try again later.",
  "error": "Rate limit exceeded",
  "details": {
    "limit": 100,
    "remaining": 0,
    "resetTime": "2025-08-11T07:00:00.000Z",
    "retryAfter": 900
  }
}
```

HTTP status: `429 Too Many Requests`
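The 429 payload above can be assembled with a small helper. This is a sketch: the `RateLimitState` shape and function name are assumptions, but the field names match the example response, with `retryAfter` expressed in seconds.

```typescript
// Sketch: build the 429 response body shown above from the limiter state.
interface RateLimitState {
  limit: number;
  remaining: number;
  resetTime: Date;
}

function buildRateLimitError(state: RateLimitState) {
  // Seconds until the window resets, never negative
  const retryAfter = Math.max(
    0,
    Math.ceil((state.resetTime.getTime() - Date.now()) / 1000),
  );
  return {
    status: 'error',
    message: 'Too many requests. Please try again later.',
    error: 'Rate limit exceeded',
    details: {
      limit: state.limit,
      remaining: state.remaining,
      resetTime: state.resetTime.toISOString(),
      retryAfter,
    },
  };
}
```

Sending the same value in a standard `Retry-After` header alongside the JSON body helps well-behaved clients back off automatically.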
```
✅ Redis connected for rate limiting                 # Production
⚠️ Using in-memory rate limiting (development mode)  # Development
```
All rate limit errors are logged for monitoring:
```typescript
console.error('Rate limit check error:', error);
```

```bash
# Multiple requests to test in-memory limiting
for i in {1..105}; do
  curl -w "Status: %{http_code}\n" http://localhost:3000/api/colleges
done
```

- Set up a Redis connection
- Set `NODE_ENV=production`
- Deploy to a serverless platform
- Test across multiple regions/instances
```bash
# Team registration (3/hour limit)
curl -X POST http://localhost:3000/api/teamRegistration

# College creation (10/hour limit)
curl -X POST http://localhost:3000/api/colleges

# Validation (50/10min limit)
curl -X POST http://localhost:3000/api/validateStep
```

- Add the Redis URL to environment variables
- Set `NODE_ENV=production`
- Deploy; rate limiting scales automatically
- Configure Redis connection
- Set environment variables
- Functions share rate limit state via Redis
Compatible with all serverless platforms when Redis is configured.
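Wiring the limiter into a route handler might look like the following sketch. The types are deliberately framework-agnostic stand-ins (a real version would use the Next.js request/response types), and the convention that `consume` rejects when the limit is exceeded is an assumption in the style of common limiter libraries.

```typescript
// Sketch: wrap a handler with rate limiting. On a limiter rejection the
// wrapper returns 429; otherwise the wrapped handler runs normally.
interface Limiter {
  consume(key: string): Promise<{ remainingPoints: number }>;
}
type Handler = () => Promise<{ status: number; body: unknown }>;

function withRateLimit(limiter: Limiter, keyOf: () => string, handler: Handler): Handler {
  return async () => {
    try {
      await limiter.consume(keyOf());
    } catch {
      return {
        status: 429,
        body: { status: 'error', error: 'Rate limit exceeded' },
      };
    }
    return handler();
  };
}
```

Because the key function is injected, the same wrapper works for IP-based and user-based limiting.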
Modify `rateLimiterConfig` in `/lib/rateLimiter.ts`:
```typescript
const rateLimiterConfig = {
  custom: {
    points: 50,          // Requests allowed
    duration: 3600,      // Time window (seconds)
    blockDuration: 1800, // Block duration (seconds)
  },
};
```

```typescript
const WHITELISTED_IPS = ['192.168.1.1', '10.0.0.1'];

function getClientIP(request: NextRequest): string {
  const ip = // ... existing logic
  if (WHITELISTED_IPS.includes(ip)) {
    return `whitelisted-${ip}`;
  }
  return ip;
}
```

```typescript
// Replace IP-based with user-based limiting
const userId = await getUserId(request);
const rateLimitKey = userId || clientIP;
await limiter.consume(rateLimitKey);
```

- Redis Connection: Always use connection pooling in production
- Key Expiration: Redis automatically handles key cleanup
- Memory Usage: Redis uses minimal memory for rate limit counters
- Failover: If Redis is down, requests will be allowed (fail-open)
- Scaling: Redis-based solution scales to millions of requests
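One way to realise the failover note above (letting requests through when Redis itself is unreachable, while still blocking genuine limit rejections) is a thin wrapper around `consume`. The `RateLimiterLike` interface and the rejection convention, where a limit rejection is a plain object rather than an `Error`, are assumptions in the style of common limiter libraries, not part of this codebase.

```typescript
// Sketch: fail-open on infrastructure errors, fail-closed on limit hits.
// `consume` resolves when within the limit and rejects otherwise.
interface RateLimiterLike {
  consume(key: string): Promise<void>;
}

async function checkRateLimit(limiter: RateLimiterLike, key: string): Promise<boolean> {
  try {
    await limiter.consume(key);
    return true; // within the limit
  } catch (err) {
    if (err instanceof Error) {
      // Infrastructure failure (e.g. Redis unreachable): fail open
      return true;
    }
    // Limiter rejection (not an Error): the request is over the limit
    return false;
  }
}
```

Distinguishing outage from rejection at this boundary keeps the fail-open policy in one place instead of scattered across handlers.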
- Memory: ~1KB per unique IP in Redis
- Latency: ~1-2ms additional per request
- Network: Minimal Redis round-trip per request
- Scaling: Linear scaling with Redis cluster
With Redis configured, this implementation is production-ready and enforces rate limits consistently across distributed serverless environments.