Commit 60f8e3a
Initial commit of benchmark workflow.

18 files changed, 1638 additions, 0 deletions

.gitignore

Lines changed: 26 additions & 0 deletions
```
# Virtual environment
venv

# Benchmark results
results

# Python
__pycache__
*.py[cod]
*$py.class
*.so
.Python
build/
dist/
*.egg-info/

# Pytest
.pytest_cache
.coverage
htmlcov/

# IDE
.vscode/
.idea/
*.swp
*.swo
```

LICENSE

Lines changed: 21 additions & 0 deletions
```
MIT License

Copyright (c) 2026 Parallel Works, Inc

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```

README.md

Lines changed: 143 additions & 0 deletions
# Single Node Benchmark Workflow

System benchmarking workflow for the Parallel Works ACTIVATE platform. Runs CPU, memory, and disk I/O benchmarks on a single node and displays results through an interactive Plotly-based web visualization.

## Quick Start

### Run Locally

```bash
pip install -r requirements.txt
python scripts/local_runner.py --duration 10
```

### Deploy to ACTIVATE

1. Push this repository to your ACTIVATE account
2. Select the workflow from your workflows list
3. Configure benchmark options and target cluster
4. Run and view interactive results via the tunnel session

## Workflow Inputs

| Input | Type | Default | Description |
|-------|------|---------|-------------|
| cluster | compute-clusters | - | Target cluster for benchmark execution |
| duration | number | 10 | Duration of each benchmark in seconds (5-60) |
| run_cpu | boolean | true | Run the CPU benchmark |
| run_memory | boolean | true | Run the memory benchmark |
| run_disk | boolean | true | Run the disk I/O benchmark |

## Benchmarks

| Benchmark | Metric | Description |
|-----------|--------|-------------|
| CPU | ops/sec | Prime-number calculations measuring single-thread performance |
| Memory | MB/s | Sequential memory read/write throughput |
| Disk I/O | MB/s | Sequential file read/write using a 256MB test file |

### CPU Benchmark

Calculates prime numbers using trial division. Measures operations per second and tracks the number of primes found during the test duration.

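As a rough illustration, a timed trial-division loop of the kind described might look like this (a sketch; the function names and exact counting are assumptions, not the repository's actual code):

```python
import time

def is_prime(n: int) -> bool:
    """Trial division: test divisors up to sqrt(n)."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def cpu_benchmark(duration: float = 10.0) -> dict:
    """Count primality tests (ops) and primes found within `duration` seconds."""
    start = time.monotonic()
    ops = primes = 0
    n = 2
    while time.monotonic() - start < duration:
        if is_prime(n):
            primes += 1
        ops += 1
        n += 1
    elapsed = time.monotonic() - start
    return {"ops_per_sec": ops / elapsed, "primes_found": primes}
```

Because each iteration is pure integer arithmetic on one core, ops/sec here tracks single-thread performance, as the table above notes.
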
### Memory Benchmark

Allocates a 100MB buffer and performs sequential write and read operations. Measures throughput in MB/s for both operations.

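A minimal sketch of this measurement, assuming chunked sequential passes over a preallocated `bytearray` (the repository's implementation may differ in chunking and timing details):

```python
import time

def memory_benchmark(size_mb: int = 100, chunk_mb: int = 1) -> dict:
    """Sequential write then read over a preallocated buffer, reporting MB/s."""
    chunk = b"\x00" * (chunk_mb * 1024 * 1024)
    buf = bytearray(size_mb * 1024 * 1024)

    start = time.perf_counter()
    for off in range(0, len(buf), len(chunk)):
        buf[off:off + len(chunk)] = chunk  # sequential write
    write_mbps = size_mb / max(time.perf_counter() - start, 1e-9)

    start = time.perf_counter()
    total = 0
    for off in range(0, len(buf), len(chunk)):
        total += len(bytes(buf[off:off + len(chunk)]))  # sequential read (copy out)
    read_mbps = size_mb / max(time.perf_counter() - start, 1e-9)

    assert total == len(buf)  # sanity: every byte was touched
    return {"write_mbps": write_mbps, "read_mbps": read_mbps}
```
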
### Disk I/O Benchmark

Uses `dd` to write a 256MB test file and then read it back. Measures sequential throughput in MB/s. The test file is automatically cleaned up after the benchmark.

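The same idea can be sketched in portable stdlib Python (the actual script shells out to `dd`; this stand-in only mirrors its sequential write-then-read pattern):

```python
import os
import tempfile
import time

def disk_benchmark(size_mb: int = 256, block_kb: int = 1024) -> dict:
    """Sequential write/read of a temp file in MB/s (stdlib stand-in for the dd-based test)."""
    block = b"\x00" * (block_kb * 1024)
    blocks = size_mb * 1024 // block_kb
    fd, path = tempfile.mkstemp(prefix="bench_io_")
    os.close(fd)
    try:
        start = time.perf_counter()
        with open(path, "wb") as f:
            for _ in range(blocks):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())  # force data to disk, like dd's conv=fsync
        write_mbps = size_mb / max(time.perf_counter() - start, 1e-9)

        start = time.perf_counter()
        with open(path, "rb") as f:
            while f.read(len(block)):
                pass
        read_mbps = size_mb / max(time.perf_counter() - start, 1e-9)
    finally:
        os.remove(path)  # clean up the test file
    return {"write_mbps": write_mbps, "read_mbps": read_mbps}
```

Note that the read pass may be served from the page cache rather than the physical disk, which is one reason tools like `dd` (with cache-dropping or direct I/O) are preferred for the real benchmark.
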
## Results

Benchmark results are saved as:
- `benchmark_results.json` - Raw results in JSON format
- `benchmark_results.html` - Interactive Plotly visualization

The HTML visualization includes:
- Bar charts for each benchmark type
- System information table
- Hover tooltips with detailed metrics

## Development

### Setup

```bash
# Install development dependencies
pip install -r requirements-dev.txt
```

### Run Tests

```bash
# Run all tests
pytest

# Run with coverage
pytest --cov=scripts --cov-report=html

# Skip slow integration tests
pytest -m "not slow"
```

### Local Runner Options

```bash
# Quick test with short duration
python scripts/local_runner.py --duration 3

# Run only CPU benchmark
python scripts/local_runner.py --cpu-only --duration 5

# Run and serve results in browser
python scripts/local_runner.py --duration 10 --serve

# Specify custom output directory
python scripts/local_runner.py --output-dir ./my_results --serve --port 9000
```

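An `argparse` definition covering the flags shown above might look like this (a sketch inferred from the examples; defaults such as the port are assumptions, not taken from `local_runner.py`):

```python
import argparse

def parse_args(argv=None):
    """CLI flags mirroring the usage examples above."""
    p = argparse.ArgumentParser(description="Run benchmarks locally")
    p.add_argument("--duration", type=int, default=10, help="seconds per benchmark")
    p.add_argument("--cpu-only", action="store_true", help="run only the CPU benchmark")
    p.add_argument("--serve", action="store_true", help="serve results over HTTP")
    p.add_argument("--port", type=int, default=8000, help="port for --serve (default assumed)")
    p.add_argument("--output-dir", default="results", help="where to write results")
    return p.parse_args(argv)
```
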
## Project Structure

```
activate-benchmark/
├── workflow.yaml            # ACTIVATE workflow definition
├── scripts/
│   ├── __init__.py          # Package marker
│   ├── run_benchmarks.sh    # Benchmark execution script
│   ├── run_benchmarks.py    # Python module for benchmarks (testable)
│   ├── generate_plot.py     # Interactive plot generation (Plotly)
│   ├── serve_results.py     # Simple HTTP server for results
│   └── local_runner.py      # CLI tool to run workflow locally
├── tests/
│   ├── __init__.py
│   ├── conftest.py          # Pytest fixtures
│   ├── test_benchmarks.py   # Unit tests for benchmark functions
│   ├── test_plot.py         # Unit tests for plot generation
│   └── test_integration.py  # Integration tests for full workflow
├── results/                 # Output directory (created at runtime)
├── requirements.txt         # Runtime dependencies
├── requirements-dev.txt     # Development/test dependencies
├── pytest.ini               # Pytest configuration
└── README.md                # This file
```

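The tree describes `serve_results.py` as a simple HTTP server; the core of such a script can be done with the stdlib alone (a sketch of the idea, not the repository's actual code):

```python
import functools
import http.server

def make_server(directory: str = "results", port: int = 8000):
    """Build an HTTP server rooted at `directory` (port 0 lets the OS choose)."""
    handler = functools.partial(http.server.SimpleHTTPRequestHandler, directory=directory)
    return http.server.ThreadingHTTPServer(("", port), handler)

if __name__ == "__main__":
    server = make_server()
    print(f"Serving results on port {server.server_address[1]}")
    server.serve_forever()  # Ctrl-C to stop
```
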
## Dependencies

- Python 3.8+
- plotly (Python package for visualization)
- Standard system tools: `dd`, `python3`

## Troubleshooting

### Disk benchmark shows 0 MB/s

The disk benchmark requires write access to the temp directory. Ensure sufficient disk space is available.

### Memory benchmark is slow

The memory benchmark allocates 100MB of memory. On systems with limited RAM, this may cause swapping.

### Cannot access results via browser

Ensure the ACTIVATE tunnel session is properly configured and that your browser allows popups from the ACTIVATE domain.

pytest.ini

Lines changed: 9 additions & 0 deletions
```ini
[pytest]
testpaths = tests
python_files = test_*.py
python_classes = Test*
python_functions = test_*
addopts = -v --tb=short
markers =
    slow: marks tests as slow (deselect with '-m "not slow"')
    integration: marks tests as integration tests
```
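For reference, a test using these markers might be written like this (a hypothetical example, not a test from the repository):

```python
import pytest

@pytest.mark.slow
@pytest.mark.integration
def test_full_workflow():
    """Deselected by `pytest -m "not slow"`."""
    assert True  # placeholder for a long-running end-to-end check

def test_prime_helper():
    """A fast unit test that always runs."""
    assert True
```
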

requirements-dev.txt

Lines changed: 4 additions & 0 deletions
```
-r requirements.txt
pytest>=7.4.0
pytest-cov>=4.1.0
pytest-timeout>=2.2.0
```

requirements.txt

Lines changed: 1 addition & 0 deletions
```
plotly>=5.18.0
```

scripts/__init__.py

Lines changed: 5 additions & 0 deletions
```python
# Single Node Benchmark Scripts Package
"""
This package contains scripts for running system benchmarks
and generating interactive visualizations.
"""
```
