# Single Node Benchmark Workflow

A system benchmarking workflow for the Parallel Works ACTIVATE platform. It runs CPU, memory, and disk I/O benchmarks on a single node and displays the results in an interactive Plotly-based web visualization.

## Quick Start

### Run Locally

```bash
pip install -r requirements.txt
python scripts/local_runner.py --duration 10
```

### Deploy to ACTIVATE

1. Push this repository to your ACTIVATE account
2. Select the workflow from your workflows list
3. Configure benchmark options and target cluster
4. Run and view interactive results via the tunnel session

## Workflow Inputs

| Input | Type | Default | Description |
|-------|------|---------|-------------|
| cluster | compute-clusters | - | Target cluster for benchmark execution |
| duration | number | 10 | Duration of each benchmark in seconds (5-60) |
| run_cpu | boolean | true | Run CPU benchmark |
| run_memory | boolean | true | Run memory benchmark |
| run_disk | boolean | true | Run disk I/O benchmark |

## Benchmarks

| Benchmark | Metric | Description |
|-----------|--------|-------------|
| CPU | ops/sec | Prime number calculations measuring single-thread performance |
| Memory | MB/s | Sequential memory read/write throughput |
| Disk I/O | MB/s | Sequential file read/write using a 256MB test file |

### CPU Benchmark

Calculates prime numbers using trial division. Measures operations per second and tracks the number of primes found during the test duration.

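The measurement loop can be sketched in a few lines. This is an illustrative sketch, not the workflow's actual implementation; the function names and result keys below are assumptions:

```python
import time


def is_prime(n: int) -> bool:
    """Trial division up to sqrt(n)."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True


def cpu_benchmark(duration_s: float = 10.0) -> dict:
    """Run primality checks until the deadline; each check is one 'op'."""
    ops = primes = 0
    n = 2
    deadline = time.perf_counter() + duration_s
    while time.perf_counter() < deadline:
        if is_prime(n):
            primes += 1
        ops += 1
        n += 1
    return {"ops_per_sec": ops / duration_s, "primes_found": primes}
```

Because the loop is single-threaded pure Python, the ops/sec figure reflects single-core performance, which matches the metric in the table above.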
### Memory Benchmark

Allocates a 100MB buffer and performs sequential write and read passes, measuring throughput in MB/s for each.

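A minimal sketch of this measurement, assuming chunked sequential access over a pre-allocated buffer (the chunk size and result keys are illustrative choices, not the workflow's):

```python
import time


def memory_benchmark(size_mb: int = 100, chunk_mb: int = 1) -> dict:
    """Sequential write then read over a pre-allocated buffer."""
    size = size_mb * 1024 * 1024
    chunk = bytes(chunk_mb * 1024 * 1024)
    buf = bytearray(size)

    # Write pass: copy the chunk into each successive slice of the buffer.
    start = time.perf_counter()
    for off in range(0, size, len(chunk)):
        buf[off:off + len(chunk)] = chunk
    write_mb_s = size_mb / (time.perf_counter() - start)

    # Read pass: slice each region back out (bytearray slicing copies).
    start = time.perf_counter()
    total = 0
    for off in range(0, size, len(chunk)):
        total += len(buf[off:off + len(chunk)])
    read_mb_s = size_mb / (time.perf_counter() - start)

    assert total == size  # sanity check: every byte was touched
    return {"write_mb_s": round(write_mb_s, 1), "read_mb_s": round(read_mb_s, 1)}
```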
### Disk I/O Benchmark

Uses `dd` to write a 256MB test file and then read it back. Measures sequential throughput in MB/s. The test file is automatically cleaned up after the benchmark.

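The workflow itself shells out to `dd`; the pure-Python equivalent below measures the same thing (sequential write with an fsync, then sequential read, then cleanup) in a portable way. File name and result keys are illustrative:

```python
import os
import tempfile
import time


def disk_benchmark(size_mb: int = 256, chunk_mb: int = 4) -> dict:
    """Sequential file write then read, reported in MB/s."""
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    path = os.path.join(tempfile.gettempdir(), "bench_testfile.bin")
    try:
        # Write pass: flush and fsync so buffered data actually hits disk.
        start = time.perf_counter()
        with open(path, "wb") as f:
            for _ in range(size_mb // chunk_mb):
                f.write(chunk)
            f.flush()
            os.fsync(f.fileno())
        write_mb_s = size_mb / (time.perf_counter() - start)

        # Read pass: stream the file back chunk by chunk.
        start = time.perf_counter()
        with open(path, "rb") as f:
            while f.read(len(chunk)):
                pass
        read_mb_s = size_mb / (time.perf_counter() - start)
    finally:
        if os.path.exists(path):
            os.remove(path)  # always clean up the test file
    return {"write_mb_s": round(write_mb_s, 1), "read_mb_s": round(read_mb_s, 1)}
```

Note that the read pass may be served from the OS page cache rather than the physical disk, so read figures can look optimistic; `dd` has the same caveat unless caches are dropped first.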
## Results

Benchmark results are saved as:
- `benchmark_results.json` - Raw results in JSON format
- `benchmark_results.html` - Interactive Plotly visualization

The HTML visualization includes:
- Bar charts for each benchmark type
- System information table
- Hover tooltips with detailed metrics

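The JSON schema is not specified in this README; the sketch below shows one plausible shape (every key and value here is a hypothetical placeholder, not the actual output of `scripts/run_benchmarks.py`) and how a plotting step might round-trip it:

```python
import json

# Hypothetical shape of benchmark_results.json; the real keys are defined
# by scripts/run_benchmarks.py and may differ.
results = {
    "system": {"hostname": "node-01", "cpu_count": 8},
    "cpu": {"ops_per_sec": 12500.0, "primes_found": 4821},
    "memory": {"write_mb_s": 9800.0, "read_mb_s": 11200.0},
    "disk": {"write_mb_s": 450.0, "read_mb_s": 520.0},
}

with open("benchmark_results.json", "w") as f:
    json.dump(results, f, indent=2)

# A plot generator can then load the file and build one bar per metric.
with open("benchmark_results.json") as f:
    loaded = json.load(f)
```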
## Development

### Setup

```bash
# Install development dependencies
pip install -r requirements-dev.txt
```

### Run Tests

```bash
# Run all tests
pytest

# Run with coverage
pytest --cov=scripts --cov-report=html

# Skip slow integration tests
pytest -m "not slow"
```

### Local Runner Options

```bash
# Quick test with short duration
python scripts/local_runner.py --duration 3

# Run only CPU benchmark
python scripts/local_runner.py --cpu-only --duration 5

# Run and serve results in browser
python scripts/local_runner.py --duration 10 --serve

# Specify custom output directory
python scripts/local_runner.py --output-dir ./my_results --serve --port 9000
```

## Project Structure

```
activate-benchmark/
├── workflow.yaml            # ACTIVATE workflow definition
├── scripts/
│   ├── __init__.py          # Package marker
│   ├── run_benchmarks.sh    # Benchmark execution script
│   ├── run_benchmarks.py    # Python module for benchmarks (testable)
│   ├── generate_plot.py     # Interactive plot generation (Plotly)
│   ├── serve_results.py     # Simple HTTP server for results
│   └── local_runner.py      # CLI tool to run workflow locally
├── tests/
│   ├── __init__.py
│   ├── conftest.py          # Pytest fixtures
│   ├── test_benchmarks.py   # Unit tests for benchmark functions
│   ├── test_plot.py         # Unit tests for plot generation
│   └── test_integration.py  # Integration tests for full workflow
├── results/                 # Output directory (created at runtime)
├── requirements.txt         # Runtime dependencies
├── requirements-dev.txt     # Development/test dependencies
├── pytest.ini               # Pytest configuration
└── README.md                # This file
```

## Dependencies

- Python 3.8+
- plotly (Python package for visualization)
- Standard system tools: `dd`, `python3`

## Troubleshooting

### Disk benchmark shows 0 MB/s

The disk benchmark needs write access to the temp directory and enough free space for the 256MB test file. Check both before rerunning.

### Memory benchmark is slow

The memory benchmark allocates a 100MB buffer. On systems with limited RAM this may cause swapping, which deflates the measured throughput.

### Cannot access results via browser

Ensure the ACTIVATE tunnel session is properly configured and that your browser allows popups from the ACTIVATE domain.