
Rate Limiter Benchmark

A comprehensive benchmark suite for comparing rate-limiter-flexible options using Valkey and Redis clients.

View Benchmark Results

CSV Summary Data - Raw benchmark metrics

Project Overview

This project benchmarks rate limiting performance using Valkey and Redis-OSS with the rate-limiter-flexible package. The benchmark provides an objective comparison between different rate limiter implementations to help developers choose the most performant solution for their applications.

Disclosure: This project is developed and maintained by a valkey-glide maintainer.
valkey-glide is available on npm; for usage with rate-limiter-flexible, refer to that package's documentation.

Architecture

  • Server: Fastify-based API server with rate limiting middleware

    • Main server (src/server/index.ts)
    • Configuration (src/server/config/index.ts)
    • API routes (src/server/routes/index.ts)
    • Rate limiter factory (src/server/lib/rateLimiterFactory.ts)
    • Client management (src/server/lib/clientFactory.ts)
  • Valkey and Redis used as backends for rate limiting

    • Valkey: latest release at the time of the benchmark (v8.1.0)
    • Redis: latest OSS release that was not also published as Valkey (v7.0.0)
  • Rate Limiters: Using rate-limiter-flexible with different backends:

    • Valkey Glide – Modern TypeScript-native client, built with a focus on stability, reliability, performance, and scalability. Designed specifically to provide superior fault tolerance and user experience.
    • IOValkey – Client based on the ioredis API, enhanced with Valkey performance.
    • Redis IORedis – Popular Redis client for Node.js
  • Benchmark Layer:

    • Autocannon for HTTP load testing with resource monitoring (src/benchmark/autocannon.ts)
    • Environment variable configuration for benchmark parameters
    • CPU/memory resource tracking (src/benchmark/monitor.ts)
  • Infrastructure:

    • Docker containers for both standalone and cluster configurations
    • Docker Compose files for easy deployment
    • Environment variables controlling cluster mode
    • Dedicated benchmark network for consistent results
  • Scripts:

    • Benchmark orchestration: scripts/run-all.sh for full test suite
    • Individual benchmark runner: scripts/run-benchmark.sh
    • Report generation: scripts/generate_report.py creates HTML reports and CSV summaries
    • Network troubleshooting: scripts/fix-network.sh
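To illustrate the concept the rate limiter factory abstracts over, here is a minimal fixed-window limiter in plain TypeScript. This is only a sketch: the actual project delegates limiting to rate-limiter-flexible backed by Valkey/Redis, and every name below is hypothetical.

```typescript
// Minimal fixed-window rate limiter, illustrating the concept the
// factory abstracts. The real project uses rate-limiter-flexible
// with a Valkey/Redis backend; all names here are hypothetical.
type WindowState = { windowStart: number; count: number };

class FixedWindowLimiter {
  private state = new Map<string, WindowState>();

  constructor(
    private points: number,      // allowed requests per window
    private durationMs: number,  // window length in milliseconds
  ) {}

  // Returns true if the request is allowed, false if rate limited.
  consume(key: string, now: number = Date.now()): boolean {
    const s = this.state.get(key);
    if (!s || now - s.windowStart >= this.durationMs) {
      this.state.set(key, { windowStart: now, count: 1 });
      return true;
    }
    if (s.count < this.points) {
      s.count += 1;
      return true;
    }
    return false;
  }
}

const limiter = new FixedWindowLimiter(2, 1000); // 2 requests per second
console.log(limiter.consume("client-a", 0));     // true
console.log(limiter.consume("client-a", 100));   // true
console.log(limiter.consume("client-a", 200));   // false (limit hit)
console.log(limiter.consume("client-a", 1200));  // true (new window)
```

A distributed deployment keeps this counter in Valkey/Redis instead of process memory, which is exactly what the benchmarked clients provide.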

Getting Started

  1. Install Node.js dependencies:

    npm install
  2. Install Python dependencies (for reporting): Ensure you have Python installed. It's recommended to use a virtual environment.

    pip install -r requirements.txt
  3. Run Benchmarks: Use the main script to run all tests and generate the report automatically:

    ./scripts/run-all.sh

    Follow the prompts to choose between:

    • Quick Benchmark (light workload)
    • Full Benchmark (light workload and heavy workload)

Benchmark Options

The run-all.sh script provides a comprehensive benchmark suite, but you can also customize individual runs using environment variables:

# Example: Run a 60-second benchmark with 50 connections using the light workload against valkey-glide
DURATION=60 CONNECTIONS=50 REQUEST_TYPE=light RATE_LIMITER_TYPE=valkey-glide ./scripts/run-benchmark.sh

Available environment variables:

  • DURATION: Test duration in seconds (default: 30)
  • CONNECTIONS: Number of concurrent connections (default: 10)
  • REQUEST_TYPE: Workload type (default: "light", options: "light" or "heavy")
  • RATE_LIMITER_TYPE: Implementation to test (default: "unknown")
  • OUTPUT_FILE: Path to save benchmark results (optional)
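The variables above could be resolved along these lines (an illustrative sketch using the documented names and defaults; `loadConfig` and `BenchmarkConfig` are hypothetical):

```typescript
// Hypothetical sketch of resolving benchmark configuration from the
// environment variables documented above, with their stated defaults.
interface BenchmarkConfig {
  duration: number;
  connections: number;
  requestType: "light" | "heavy";
  rateLimiterType: string;
  outputFile?: string;
}

function loadConfig(env: Record<string, string | undefined>): BenchmarkConfig {
  const requestType: "light" | "heavy" =
    env.REQUEST_TYPE === "heavy" ? "heavy" : "light";
  return {
    duration: Number(env.DURATION ?? 30),
    connections: Number(env.CONNECTIONS ?? 10),
    requestType,
    rateLimiterType: env.RATE_LIMITER_TYPE ?? "unknown",
    outputFile: env.OUTPUT_FILE,
  };
}

console.log(loadConfig({})); // all defaults: 30s, 10 connections, "light"
```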

Client Implementations

The benchmark tests the following clients:

  1. Valkey Glide - Modern TypeScript-native client, built with a focus on stability, reliability, performance, and scalability. Designed specifically to provide superior fault tolerance and user experience.
  2. IOValkey - Client based on the ioredis API with Valkey performance
  3. Redis IORedis - Standard Redis client for Node.js

Each client is tested in both standalone and cluster configurations.

Testing Scenarios

The benchmark suite covers multiple testing scenarios:

  1. Workload Types:

    • Light workload: Minimal API processing
    • Heavy workload: Compute-intensive API responses (configurable complexity level)
  2. Run Durations:

    • Short (30s) for quick comparisons
    • Medium (120s) for sustained performance analysis
    • Extended (150s) for 50-100 connection tests with heavy workloads
    • Long (180-210s) for high concurrency tests (500-1000 connections)
  3. Concurrency Levels:

    • 50 connections: Base testing level for all configurations
    • 100 connections: Medium load testing for all configurations
    • 500 connections: High load testing for standalone mode
    • 1000 connections: Extreme load testing for cluster mode
  4. Deployment Variations:

    • Standalone: Single Redis/Valkey instance
    • Cluster: 6-node configuration (3 primaries, 3 replicas)
  5. Client Implementations:

    • Valkey Glide (both standalone and cluster modes)
    • IOValkey (both standalone and cluster modes)
    • Redis IORedis (both standalone and cluster modes)
  6. Test Iterations:

    • Each configuration runs 3 times to ensure statistical significance
    • Includes 10-second warmup period before each test
    • 5-second cooldown between test configurations
    • 10-second cooldown between different client implementations

Metrics Collected

The benchmark collects the following performance metrics:

  • Throughput: Requests per second
  • Latency: Average, median (p50), p97.5, and p99 response times
  • Rate Limiting: Percentage of requests that hit rate limits
  • System Resources: CPU and memory usage during benchmarks
  • Error Rates: Percentage of failed requests
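The percentile figures (p50, p97.5, p99) are order statistics over the recorded latencies. As an illustration (not the project's actual code), a nearest-rank computation looks like this:

```typescript
// Nearest-rank percentile over recorded latency samples: the smallest
// value with at least p% of samples at or below it. Illustrative only.
function percentile(samples: number[], p: number): number {
  if (samples.length === 0) throw new Error("no samples");
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

const latencies = [2, 2, 3, 3, 4, 5, 9, 12, 40, 100];
console.log(percentile(latencies, 50)); // 4
console.log(percentile(latencies, 99)); // 100
```

Note how the tail percentiles are dominated by a few slow requests, which is why p99 is reported alongside the average.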

Benchmark Methodology and Results Processing

To account for system variability and reduce the impact of outliers, each benchmark configuration is run multiple times (typically three iterations). This minimizes the influence of transient system behavior on the reported figures.

The data processing methodology follows these steps:

  1. Multiple Iterations: Each benchmark configuration (client/workload/concurrency combination) is executed three consecutive times with identical parameters.
  2. Median Selection: For each performance metric, the median value from all runs is selected for the final report, providing a more stable representation than a single run or a mean value that could be skewed by outliers.
  3. Consistency Verification: Standard deviation is calculated across runs to ensure test stability. High variance may indicate unstable test conditions and is flagged in the report.
  4. Comparative Analysis: Percentage differences between implementations are calculated to highlight relative performance characteristics.
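Steps 2 and 3 amount to simple aggregation across the runs; a sketch of that aggregation (function names are hypothetical):

```typescript
// Median across runs for reporting, plus standard deviation as a
// stability check. Illustrative; not the project's actual code.
function median(values: number[]): number {
  const s = [...values].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

function stddev(values: number[]): number {
  const mean = values.reduce((a, b) => a + b, 0) / values.length;
  const variance =
    values.reduce((acc, v) => acc + (v - mean) ** 2, 0) / values.length;
  return Math.sqrt(variance);
}

// Three runs of the same configuration (made-up throughput numbers):
const reqPerSec = [6_050_000, 6_064_837, 6_100_000];
console.log(median(reqPerSec)); // 6064837 — the value that gets reported
// Coefficient of variation as a rough stability flag:
console.log(stddev(reqPerSec) / median(reqPerSec) < 0.05); // true
```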

This methodology ensures the benchmark results are:

  • Reproducible: Multiple runs increase confidence in the measurements
  • Representative: Median values avoid skew from outliers
  • Comparable: Consistent methodology across all client implementations

Results Structure

Benchmark results are organized by timestamp in the results/ directory:

results/
├── YYYYMMDD_HHMMSS/            # Timestamp-based directory for each run
│   ├── benchmark.log           # Full log output from the benchmark
│   ├── README.md               # Run-specific details
│   ├── {implementation}_{workload}_{connections}c_{duration}s_run{N}.json      # Raw data
│   └── {implementation}_{workload}_{connections}c_{duration}s_run{N}.json.log  # Logs
└── latest -> YYYYMMDD_HHMMSS/  # Symlink to most recent run

Example result file: valkey-glide_light_100c_30s_run1.json
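The filename convention above can be parsed mechanically, for example when aggregating runs. A sketch (the regex and `parseResultName` are illustrative, not the project's code):

```typescript
// Parse the documented result filename convention:
// {implementation}_{workload}_{connections}c_{duration}s_run{N}.json
const PATTERN = /^(.+)_(light|heavy)_(\d+)c_(\d+)s_run(\d+)\.json$/;

function parseResultName(name: string) {
  const m = PATTERN.exec(name);
  if (!m) return null;
  return {
    implementation: m[1],
    workload: m[2],
    connections: Number(m[3]),
    duration: Number(m[4]),
    run: Number(m[5]),
  };
}

console.log(parseResultName("valkey-glide_light_100c_30s_run1.json"));
// { implementation: "valkey-glide", workload: "light",
//   connections: 100, duration: 30, run: 1 }
```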

Current Project Structure

ratelimit_bench/
├── docker/                            # Docker volume mounts
│   └── app/                           # Application code for Docker
├── results/                           # Benchmark results
│   ├── YYYYMMDD_HHMMSS/               # Timestamp-based directories
│   └── latest -> YYYYMMDD_HHMMSS/     # Symlink to latest run
├── scripts/
│   ├── fix-network.sh                 # Docker network troubleshooting
│   ├── generate_report.py             # Python report generator
│   ├── run-all.sh                     # Main benchmark orchestration
│   └── run-benchmark.sh               # Individual benchmark runner
├── src/
│   ├── benchmark/                     # Benchmark code
│   │   ├── autocannon.ts              # HTTP benchmarking using autocannon
│   │   ├── index.ts                   # Benchmark entry point
│   │   └── monitor.ts                 # Resource monitoring utilities
│   └── server/                        # Server implementation
│       ├── config/                    # Server configuration
│       ├── lib/                       # Core libraries and utilities
│       ├── middleware/                # Server middleware including rate limiting
│       └── routes/                    # API route definitions
├── docker-compose.yml                 # Base Docker Compose configuration
├── docker-compose-redis-cluster.yml   # Redis cluster configuration
├── docker-compose-valkey-cluster.yml  # Valkey cluster configuration
├── Dockerfile.loadtest                # Dockerfile for benchmark runner
├── Dockerfile.server                  # Dockerfile for API server
├── redis.conf                         # Redis configuration
├── valkey.conf                        # Valkey configuration
├── package.json                       # Node.js dependencies and scripts
├── tsconfig.json                      # TypeScript configuration
└── requirements.txt                   # Python dependencies for reporting

Troubleshooting

  • Docker Network Issues: If containers have trouble communicating, try running:

    ./scripts/fix-network.sh
  • Permissions Issues: Ensure scripts are executable:

    chmod +x ./scripts/*.sh ./scripts/*.py
  • Container Cleanup: To remove all containers and start fresh:

    docker-compose down -v
    docker-compose -f docker-compose-redis-cluster.yml down -v
    docker-compose -f docker-compose-valkey-cluster.yml down -v

Contributing

Contributions are welcome! Please follow the existing code style and ensure tests pass before submitting pull requests.

License

This project is open source and available under the MIT License.

Benchmark Results

The benchmark results below compare the performance of different rate limiter implementations across various scenarios. Data is collected from extensive testing under controlled conditions to ensure fair comparison.

Interactive Results Report

For the best experience, open the interactive HTML report generated by scripts/generate_report.py.

Key Findings (Generated on: April 16, 2025)

  • Valkey Glide consistently outperforms other clients in both standalone and cluster configurations
  • Performance differences become more pronounced under higher concurrency scenarios (500-1000 connections)
  • All clients demonstrate stable performance across multiple test runs, validating reproducibility of results
  • At high concurrency (1000 connections), Valkey Glide maintains significantly lower latency compared to IORedis

Cluster Mode Results

| Client | Mode | RequestType | Concurrency | Duration | ReqPerSec | Latency_Avg | Latency_P50 | Latency_P99 | RateLimitHits | CPUUsage |
|--------------|---------|-------|------|-----|-----------|--------|-------|----------|-----------|-------|
| valkey-glide | cluster | heavy | 50   | 150 | 6,064,837 | 2.04   | 2.00  | 3.00     | 3,173,724 | 53.30 |
| iovalkey     | cluster | heavy | 50   | 150 | 5,240,067 | 2.12   | 2.00  | 3.00     | 2,742,213 | 45.59 |
| ioredis      | cluster | heavy | 50   | 150 | 4,484,765 | 2.84   | 3.00  | 4.00     | 2,346,830 | 38.16 |
| valkey-glide | cluster | heavy | 1000 | 210 | 3,332,332 | 84.91  | 71.00 | 519.00   | 2,441,648 | 35.79 |
| iovalkey     | cluster | heavy | 1000 | 210 | 3,168,085 | 90.02  | 82.00 | 241.00   | 2,321,125 | 34.00 |
| ioredis      | cluster | heavy | 1000 | 210 | 1,246,590 | 143.87 | 97.00 | 1,640.00 | 913,144   | 18.71 |

Standalone Mode Results

| Client | Mode | RequestType | Concurrency | Duration | ReqPerSec | Latency_Avg | Latency_P50 | Latency_P99 | RateLimitHits | CPUUsage |
|--------------|------------|-------|-----|-----|-----------|-------|-------|--------|-----------|-------|
| valkey-glide | standalone | heavy | 50  | 150 | 6,164,561 | 2.01  | 2.00  | 3.00   | 3,225,896 | 53.89 |
| iovalkey     | standalone | heavy | 50  | 150 | 5,558,435 | 2.05  | 2.00  | 3.00   | 2,908,731 | 49.84 |
| ioredis      | standalone | heavy | 50  | 150 | 4,680,253 | 2.33  | 2.00  | 4.00   | 2,449,174 | 41.43 |
| valkey-glide | standalone | heavy | 500 | 210 | 3,656,168 | 38.60 | 33.00 | 113.00 | 2,678,727 | 35.99 |
| iovalkey     | standalone | heavy | 500 | 210 | 1,613,720 | 62.11 | 46.00 | 784.00 | 1,182,155 | 19.15 |
| ioredis      | standalone | heavy | 500 | 210 | 1,608,894 | 66.28 | 48.00 | 794.00 | 1,178,439 | 19.85 |

Performance Analysis

  1. Throughput Comparison:

    • In cluster mode with heavy workload (50 connections), valkey-glide achieves 35% higher throughput than ioredis
    • At high concurrency (1000 connections), valkey-glide maintains a 167% throughput advantage over ioredis
  2. Latency Comparison:

    • valkey-glide consistently maintains lower latency at all concurrency levels
    • P99 latency for valkey-glide at high concurrency (1000 conn) is 68% lower than ioredis in cluster mode
  3. Scalability:

    • valkey-glide shows superior handling of increased concurrency with significantly better latency and throughput preservation
    • All clients show performance degradation at extreme concurrency, but valkey-glide degrades more gracefully
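The percentage figures above can be reproduced directly from the cluster-mode table:

```typescript
// Reproducing the comparison figures from the cluster-mode results.
const pctHigher = (a: number, b: number) => Math.round((a / b - 1) * 100);
const pctLower = (a: number, b: number) => Math.round((1 - a / b) * 100);

// Heavy workload, 50 connections: valkey-glide vs ioredis throughput.
console.log(pctHigher(6_064_837, 4_484_765)); // 35 (% higher)
// Heavy workload, 1000 connections: throughput advantage.
console.log(pctHigher(3_332_332, 1_246_590)); // 167 (% higher)
// Heavy workload, 1000 connections: P99 latency reduction.
console.log(pctLower(519, 1640)); // 68 (% lower)
```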

Raw Data Access

All benchmark data is available as raw JSON result files (with per-run logs) and a CSV summary; see the results/ directory.

Last updated: April 16, 2025