add benchmark suite #867 (base: main)

Changes from all commits:
New file (benchmark crate `Cargo.toml`, `@@ -0,0 +1,14 @@`):

```toml
[package]
name = "workers-rs-benchmark"
version = "0.1.0"
edition = "2021"
license = "MIT OR Apache-2.0"

[lib]
crate-type = ["cdylib", "rlib"]

[dependencies]
worker.workspace = true
serde.workspace = true
serde_json.workspace = true
futures-util.workspace = true
```
New file (benchmark `README.md`, `@@ -0,0 +1,59 @@`):

# workers-rs Benchmark Suite

Performance benchmark for workers-rs that measures streaming and parallel sub-request performance.

## How to run

First, make sure to clone workers-rs with all submodules.

Then, from the root of workers-rs, run:

```bash
npm run build
```

to build the local `worker-build`.

Then run the benchmark:

```bash
cd benchmark
npm install
npm run bench
```

## What it does

- Streams 1MB of data from the `/stream` endpoint in 8KB chunks
- Makes 10 parallel sub-requests to `/stream` from the `/benchmark` endpoint
- All requests are internal (no network I/O) to isolate workers-rs performance
- Runs 20 iterations with 3 warmup requests

## Output

The benchmark provides:

- Per-iteration timing for Node.js end-to-end and Worker internal execution
- Summary statistics: average, min, and max times
- Data transfer statistics (10MB per iteration = 10 parallel 1MB streams)
- Average throughput in Mbps
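The summary statistics described above follow the usual formulas. As an illustrative sketch (the sample durations below are made up, not output from a real run):

```javascript
// Sketch of the summary-statistics math: average/min/max over per-iteration
// worker durations, plus throughput in Mbps. Sample values are hypothetical.
const workerDurations = [120, 110, 130, 115, 125]; // ms per iteration
const totalBytes = 10 * 1024 * 1024; // 10MB moved per iteration

const avg = workerDurations.reduce((a, b) => a + b, 0) / workerDurations.length;
const min = Math.min(...workerDurations);
const max = Math.max(...workerDurations);

// Throughput in Mbps: bits transferred divided by average seconds per iteration.
const throughputMbps = (totalBytes * 8) / (avg / 1000) / (1024 * 1024);

console.log({ avg, min, max, throughputMbps: throughputMbps.toFixed(2) });
```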
## Configuration

Adjust runner parameters in `run.mjs`:
- `iterations` - number of benchmark runs (default: 20)
- warmup count (default: 3)

Adjust the workload in `src/lib.rs`:
- number of parallel requests (default: 10)
- data size per request (default: 1MB)
- chunk size for streaming (default: 8KB)

## Rust Toolchain

`rust-toolchain.toml` in the root of workers-rs sets the Rust toolchain. Changing it can be used to benchmark against different toolchain versions.

## Compatibility Date

The current compatibility date is set to `2025-11-01` in the `wrangler.toml`. `FinalizationRegistry` was enabled as of `2025-05-05`, so it is included.
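For context on the `FinalizationRegistry` mention above: it is a standard JavaScript API that runs a callback after a registered object has been garbage-collected, which wasm-bindgen output can rely on to free Wasm-side resources. A minimal illustration of the API shape (note that callback timing is up to the GC, so no cleanup output is guaranteed):

```javascript
// Minimal FinalizationRegistry usage: the callback receives the held value
// some time after the registered object is garbage-collected.
const registry = new FinalizationRegistry((heldValue) => {
  console.log(`cleaned up: ${heldValue}`);
});

let resource = { id: 42 };
registry.register(resource, 'resource-42');

// Dropping the last reference makes the object eligible for collection;
// the callback may fire later, or not at all before the process exits.
resource = null;
```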
New file (benchmark `package.json`, `@@ -0,0 +1,14 @@`):

```json
{
  "name": "workers-rs-benchmark",
  "version": "0.1.0",
  "type": "module",
  "description": "Performance benchmark suite for workers-rs",
  "private": true,
  "scripts": {
    "build": "WASM_BINDGEN_BIN=../wasm-bindgen/target/debug/wasm-bindgen ../target/debug/worker-build --release",
    "bench": "npm run build && node run.js"
  },
  "dependencies": {
    "miniflare": "^4.20250923.0"
  }
}
```

**Member** (on the `miniflare` dependency): it seems this is extremely old, isn't it? Can you also enable Dependabot on this path?
New file (`run.mjs`, the benchmark runner, `@@ -0,0 +1,128 @@`):

```javascript
#!/usr/bin/env node

/**
 * Benchmark runner for workers-rs
 *
 * This script runs performance benchmarks against the worker server.
 * It measures the time taken to complete a benchmark that makes 10 parallel
 * sub-requests, each streaming 1MB of data internally.
 */

import { Miniflare } from 'miniflare';

async function runBenchmark() {
  console.log('🚀 Starting workers-rs benchmark suite\n');

  // Initialize Miniflare instance with the compiled worker
  console.log('📦 Initializing Miniflare...');
  const mf = new Miniflare({
    workers: [
      {
        name: 'benchmark',
        scriptPath: './build/index.js',
        compatibilityDate: '2025-01-06',
        modules: true,
        modulesRules: [
          { type: 'CompiledWasm', include: ['**/*.wasm'], fallthrough: true }
        ],
        outboundService: 'benchmark',
      }
    ]
  });

  const mfUrl = await mf.ready;
  console.log(`✅ Miniflare ready at ${mfUrl}\n`);

  // Run warmup requests
  console.log('🔥 Running warmup requests...');
  for (let i = 0; i < 3; i++) {
    await mf.dispatchFetch(`${mfUrl}benchmark`);
  }
  console.log('✅ Warmup complete\n');

  // Run benchmark iterations
  const iterations = 20;
  const results = [];

  console.log(`📊 Running ${iterations} benchmark iterations...\n`);

  for (let i = 0; i < iterations; i++) {
    const iterStart = Date.now();
    const response = await mf.dispatchFetch(`${mfUrl}benchmark`);
    const iterEnd = Date.now();

    const nodeJsDuration = iterEnd - iterStart;
    const result = await response.json();

    if (!result.success) {
      console.error(`❌ Iteration ${i + 1} failed:`, result.errors);
      await mf.dispose();
      process.exit(1);
    }

    results.push({
      iteration: i + 1,
      nodeJsDuration,
      workerDuration: result.duration_ms,
      totalBytes: result.total_bytes,
      numRequests: result.num_requests,
    });

    console.log(`  Iteration ${i + 1}:`);
    console.log(`    Node.js end-to-end time: ${nodeJsDuration}ms`);
    console.log(`    Worker internal time: ${result.duration_ms}ms`);
    console.log(`    Data transferred: ${(result.total_bytes / (1024 * 1024)).toFixed(2)}MB`);
    console.log(`    Sub-requests: ${result.num_requests}`);
    console.log();
  }

  // Calculate statistics
  const nodeJsDurations = results.map(r => r.nodeJsDuration);
  const workerDurations = results.map(r => r.workerDuration);

  const avgNodeJs = nodeJsDurations.reduce((a, b) => a + b, 0) / iterations;
  const avgWorker = workerDurations.reduce((a, b) => a + b, 0) / iterations;

  const minNodeJs = Math.min(...nodeJsDurations);
  const maxNodeJs = Math.max(...nodeJsDurations);
  const minWorker = Math.min(...workerDurations);
  const maxWorker = Math.max(...workerDurations);

  // Print summary
  console.log('━'.repeat(60));
  console.log('📈 BENCHMARK SUMMARY');
  console.log('━'.repeat(60));
  console.log();
  console.log('Node.js End-to-End Time:');
  console.log(`  Average: ${avgNodeJs.toFixed(2)}ms`);
  console.log(`  Min: ${minNodeJs.toFixed(2)}ms`);
  console.log(`  Max: ${maxNodeJs.toFixed(2)}ms`);
  console.log();
  console.log('Worker Internal Time:');
  console.log(`  Average: ${avgWorker.toFixed(2)}ms`);
  console.log(`  Min: ${minWorker.toFixed(2)}ms`);
  console.log(`  Max: ${maxWorker.toFixed(2)}ms`);
  console.log();
  console.log('Benchmark Configuration:');
  console.log(`  Parallel sub-requests: 10`);
  console.log(`  Data per sub-request: 1MB`);
  console.log(`  Total data per iteration: 10MB`);
  console.log(`  Iterations: ${iterations}`);
  console.log();
  console.log('━'.repeat(60));

  // Calculate throughput
  const totalBytes = results[0].totalBytes;
  const throughputMbps = (totalBytes * 8 / (avgWorker / 1000)) / (1024 * 1024);
  console.log(`🚀 Average throughput: ${throughputMbps.toFixed(2)} Mbps`);
  console.log('━'.repeat(60));

  // Cleanup
  await mf.dispose();
  console.log('\n✅ Benchmark complete!');
}

runBenchmark().catch((error) => {
  console.error('❌ Benchmark failed:', error);
  process.exit(1);
});
```
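One aside on the runner's timing: `Date.now()` has millisecond resolution and is wall-clock based. If finer-grained end-to-end timing were wanted on the Node side, the monotonic, sub-millisecond `performance.now()` could be substituted. A sketch under that assumption, not part of the PR:

```javascript
// performance.now() is monotonic with sub-millisecond resolution, unlike
// Date.now(), which returns wall-clock time truncated to whole milliseconds.
// (performance is a global in Node.js 16+.)
const start = performance.now();
for (let i = 0; i < 1_000_000; i++) {} // stand-in for the work being timed
const elapsedMs = performance.now() - start;

console.log(`elapsed: ${elapsedMs.toFixed(3)}ms`);
```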
New file (`src/lib.rs`, the benchmark worker, `@@ -0,0 +1,96 @@`):

```rust
use worker::*;

#[event(fetch)]
async fn main(req: Request, _env: Env, _ctx: Context) -> Result<Response> {
    let url = req.url()?;
    let path = url.path();

    match path {
        "/stream" => handle_stream().await,
        "/benchmark" => handle_benchmark(&url).await,
        _ => Response::error("Not Found", 404),
    }
}

/// Streams 1MB of data in chunks
async fn handle_stream() -> Result<Response> {
    use futures_util::stream;

    // Create 1MB of data (1024 * 1024 bytes)
    let chunk_size = 8192; // 8KB chunks
    let num_chunks = (1024 * 1024) / chunk_size; // 128 chunks
    let chunk = vec![b'x'; chunk_size];

    // Create a stream that yields the data
    let data_stream =
        stream::iter((0..num_chunks).map(move |_| Ok::<Vec<u8>, worker::Error>(chunk.clone())));

    Response::from_stream(data_stream)
}

/// Main benchmark handler that makes 10 parallel sub-requests
async fn handle_benchmark(url: &Url) -> Result<Response> {
    // Get the base URL from the request
    let base_url = format!(
        "{}://{}",
        url.scheme(),
        url.host_str().unwrap_or("localhost")
    );
    let stream_url = format!("{}/stream", base_url);

    // Create 10 parallel sub-requests
    let mut tasks = Vec::new();

    for i in 0..10 {
        let stream_url = stream_url.clone();

        // Create a task for each sub-request
        let task = async move {
            // Make the sub-request to the streaming endpoint
            let mut response = Fetch::Url(stream_url.parse().unwrap())
                .send()
                .await
                .map_err(|e| format!("Fetch error on request {}: {:?}", i, e))?;

            // Consume the stream to ensure all data is read
            let body = response
                .bytes()
                .await
                .map_err(|e| format!("Body read error on request {}: {:?}", i, e))?;

            let total_bytes = body.len() as u64;

            Ok::<u64, String>(total_bytes)
        };

        tasks.push(task);
    }

    // Execute all tasks in parallel
    let start = Date::now().as_millis();
    let results = futures_util::future::join_all(tasks).await;
    let end = Date::now().as_millis();
    let duration_ms = end - start;

    // Check for errors and sum up total bytes
    let mut total_bytes = 0u64;
    let mut errors = Vec::new();

    for (i, result) in results.iter().enumerate() {
        match result {
            Ok(bytes) => total_bytes += bytes,
            Err(e) => errors.push(format!("Request {}: {}", i, e)),
        }
    }

    // Return summary as JSON
    let summary = serde_json::json!({
        "success": errors.is_empty(),
        "duration_ms": duration_ms,
        "total_bytes": total_bytes,
        "num_requests": 10,
        "errors": errors,
    });

    Response::from_json(&summary)
}
```

**Member** (on `let start = Date::now().as_millis();`): this doesn't seem like a good path. Can we use `rdtsc`?
New file (benchmark `wrangler.toml`, `@@ -0,0 +1,6 @@`):

```toml
name = "workers-rs-benchmark"
main = "build/worker/shim.mjs"
compatibility_date = "2025-09-09"

[build]
command = "cargo install -q worker-build && worker-build --release"
```
Change to the root `package.json` scripts (`@@ -23,6 +23,7 @@`):

```diff
   },
   "scripts": {
     "build": "cd wasm-bindgen && cargo build -p wasm-bindgen-cli --bin wasm-bindgen && cd .. && cargo build -p worker-build",
+    "bench": "cd benchmark && npm run build && npm run bench",
     "test": "cd test && NO_MINIFY=1 WASM_BINDGEN_BIN=../wasm-bindgen/target/debug/wasm-bindgen ../target/debug/worker-build --dev && NODE_OPTIONS='--experimental-vm-modules' npx vitest run",
     "test-http": "cd test && NO_MINIFY=1 WASM_BINDGEN_BIN=../wasm-bindgen/target/debug/wasm-bindgen ../target/debug/worker-build --release --features http && NODE_OPTIONS='--experimental-vm-modules' npx vitest run",
     "test-mem": "cd test && npx wrangler dev --enable-containers=false",
```

**Member** (on the added `bench` script): left a suggested change (the suggestion body is not shown here).