Commit 58768ba

🚀 Add comprehensive benchmark integration system
- Add benchmark automation script (`run_benchmarks.py`)
- Add benchmark comparison script (`compare_benchmarks.py`)
- Update CI workflow with benchmark jobs and documentation deployment
- Add benchmark artifacts to `.gitignore`
- Add comprehensive scripts documentation
- Enable automated benchmark result updates in documentation
- Support performance regression detection and reporting
1 parent 4e84308 commit 58768ba

File tree

5 files changed: +934 −1 lines changed


.github/workflows/ci.yml

Lines changed: 107 additions & 1 deletion

```diff
@@ -139,4 +139,110 @@ jobs:
           print('Successfully created resistor:', resistor.name)
           manager.close()
           print('Integration test passed!')
-          "
+          "
+
+  benchmark:
+    runs-on: ubuntu-latest
+    needs: [test, build]
+    if: github.ref == 'refs/heads/main'  # Only run on main branch
+
+    steps:
+      - uses: actions/checkout@v4
+        with:
+          fetch-depth: 0  # Fetch full history for benchmark comparison
+
+      - name: Set up Python
+        uses: actions/setup-python@v4
+        with:
+          python-version: '3.11'
+
+      - name: Install system dependencies
+        run: |
+          sudo apt-get update
+          sudo apt-get install -y build-essential cmake git curl
+
+          # Install XMake
+          curl -fsSL https://xmake.io/shget.text | bash
+          echo "$HOME/.local/bin" >> $GITHUB_PATH
+
+      - name: Install Python dependencies
+        run: |
+          python -m pip install --upgrade pip
+          pip install -r requirements.txt
+
+      - name: Install Google Benchmark
+        run: |
+          git clone https://github.com/google/benchmark.git
+          cd benchmark
+          cmake -E make_directory "build"
+          cmake -E chdir "build" cmake -DBENCHMARK_DOWNLOAD_DEPENDENCIES=on -DCMAKE_BUILD_TYPE=Release ../
+          cmake --build "build" --config Release --target install
+          cd ..
+          rm -rf benchmark
+
+      - name: Run Benchmarks
+        run: |
+          python scripts/run_benchmarks.py --output-dir benchmark_results
+
+      - name: Upload benchmark results
+        uses: actions/upload-artifact@v3
+        with:
+          name: benchmark-results
+          path: benchmark_results/
+
+      - name: Compare with previous benchmarks
+        run: |
+          # Download previous benchmark results if available
+          if [ -f "benchmark_results/benchmark_summary.json" ]; then
+            echo "Benchmark results generated successfully"
+            cat benchmark_results/benchmark_report.md >> $GITHUB_STEP_SUMMARY
+          fi
+
+      - name: Update documentation
+        if: github.event_name == 'push' && github.ref == 'refs/heads/main'
+        run: |
+          # Configure git for automated commits
+          git config --local user.email "[email protected]"
+          git config --local user.name "GitHub Action"
+
+          # Check if documentation was updated
+          if [ -f "docs/benchmarks/performance_results.md" ]; then
+            git add docs/benchmarks/performance_results.md
+
+            # Only commit if there are changes
+            if ! git diff --staged --quiet; then
+              git commit -m "📊 Update benchmark results [skip ci]"
+              git push
+            fi
+          fi
+
+  documentation:
+    runs-on: ubuntu-latest
+    needs: [test, build]
+    if: github.ref == 'refs/heads/main'
+
+    steps:
+      - uses: actions/checkout@v4
+
+      - name: Set up Python
+        uses: actions/setup-python@v4
+        with:
+          python-version: '3.11'
+
+      - name: Install dependencies
+        run: |
+          sudo apt-get update
+          sudo apt-get install -y doxygen graphviz
+          python -m pip install --upgrade pip
+          pip install -r requirements.txt
+
+      - name: Generate API documentation
+        run: |
+          doxygen Doxyfile
+
+      - name: Deploy to GitHub Pages
+        uses: peaceiris/actions-gh-pages@v3
+        if: github.ref == 'refs/heads/main'
+        with:
+          github_token: ${{ secrets.GITHUB_TOKEN }}
+          publish_dir: ./docs/html
```
.gitignore

Lines changed: 9 additions & 0 deletions

```diff
@@ -308,3 +308,12 @@ massif.out*
 *.gz
 *.bz2
 *.xz
+
+# Benchmark artifacts
+benchmark_results/
+*.benchmark
+*.json.bak
+benchmark_report_*.md
+benchmark_summary_*.json
+benchmarks/*.json
+benchmarks/results/
```

scripts/README.md

Lines changed: 206 additions & 0 deletions (new file)

# ZLayout Scripts

This directory contains automation scripts for the ZLayout project, including benchmark automation and documentation generation.

## Benchmark Automation

### `run_benchmarks.py`

A comprehensive benchmark automation script that runs the C++ benchmarks and generates results in multiple formats for documentation integration.

#### Usage

```bash
# Run all benchmarks with default settings
python scripts/run_benchmarks.py

# Specify custom build and output directories
python scripts/run_benchmarks.py --build-dir build --output-dir my_results

# Run and update documentation
python scripts/run_benchmarks.py --update-docs
```

#### Features

- **Automatic Build**: Configures and builds benchmark executables using XMake
- **Multi-format Output**: Generates JSON, markdown, and summary reports
- **System Information**: Captures hardware and software configuration
- **Documentation Integration**: Automatically updates benchmark results in the documentation
- **Error Handling**: Robust error handling and reporting

#### Output Files

- `benchmark_summary.json`: Complete benchmark results with system information
- `benchmark_report.md`: Human-readable markdown report
- `geometry_results.json`: Raw geometry benchmark results
- `quadtree_results.json`: Raw spatial index benchmark results
- `benchmark_report_YYYYMMDD_HHMMSS.md`: Timestamped reports
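The helpers below sketch how such a script might build the invocation for a Google Benchmark executable and name its timestamped report. The flag names are standard Google Benchmark options; the function names (`benchmark_command`, `timestamped_report`) are hypothetical, not the script's actual API.

```python
from datetime import datetime
from pathlib import Path

def benchmark_command(executable, out_file):
    """Build the command line for one benchmark run, emitting JSON results."""
    return [
        executable,
        "--benchmark_format=json",       # machine-readable console output
        f"--benchmark_out={out_file}",   # raw JSON written to this file
        "--benchmark_out_format=json",
    ]

def timestamped_report(output_dir):
    """Name a markdown report like benchmark_report_YYYYMMDD_HHMMSS.md."""
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    return Path(output_dir) / f"benchmark_report_{stamp}.md"
```

The command list can then be passed to `subprocess.run` for each benchmark executable.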
### `compare_benchmarks.py`

A benchmark comparison script that compares results between runs and generates performance regression reports.

#### Usage

```bash
# Compare two benchmark result files
python scripts/compare_benchmarks.py baseline.json current.json

# Set a custom regression tolerance (default 5%)
python scripts/compare_benchmarks.py baseline.json current.json --tolerance 0.10

# Save the comparison report to a file
python scripts/compare_benchmarks.py baseline.json current.json --output comparison_report.md

# Exit with an error if regressions are detected (useful for CI)
python scripts/compare_benchmarks.py baseline.json current.json --fail-on-regression
```

#### Features

- **Regression Detection**: Automatically identifies performance regressions
- **Improvement Tracking**: Highlights performance improvements
- **Tolerance Settings**: Configurable tolerance for regression detection
- **Detailed Reports**: Comprehensive markdown reports with tables and summaries
- **CI Integration**: Exit codes for automated pipeline integration

#### Report Sections

- **Summary**: Overview of the benchmark comparison results
- **Performance Regressions**: Benchmarks that have slowed down
- **Performance Improvements**: Benchmarks that have sped up
- **New Benchmarks**: Benchmarks added since the baseline
- **Removed Benchmarks**: Benchmarks removed since the baseline
- **Overall Assessment**: Summary of the performance impact
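The core of tolerance-based regression detection can be sketched as follows. The input dicts mirror Google Benchmark's JSON layout (a `"benchmarks"` list whose entries carry `"name"` and `"cpu_time"`); the function name `find_regressions` is illustrative, not necessarily what the script defines.

```python
def find_regressions(baseline, current, tolerance=0.05):
    """Return {name: relative_slowdown} for benchmarks slower than tolerance."""
    base = {b["name"]: b["cpu_time"] for b in baseline["benchmarks"]}
    cur = {b["name"]: b["cpu_time"] for b in current["benchmarks"]}
    regressions = {}
    for name in base.keys() & cur.keys():   # only benchmarks present in both runs
        change = (cur[name] - base[name]) / base[name]
        if change > tolerance:              # slower than the allowed tolerance
            regressions[name] = change
    return regressions
```

Benchmarks present in only one file would be reported separately (the "New" and "Removed" sections above), and a nonzero exit code when `regressions` is non-empty gives CI its `--fail-on-regression` behavior.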
## CI Integration

The benchmark automation is integrated into the GitHub Actions workflow:

1. **Benchmark Job**: Runs on every push to the main branch
2. **Automatic Documentation Updates**: Updates benchmark results in the documentation
3. **Performance Monitoring**: Tracks performance changes over time
4. **Artifact Storage**: Stores benchmark results as CI artifacts

### Workflow Configuration

The CI workflow includes:

```yaml
benchmark:
  runs-on: ubuntu-latest
  needs: [test, build]
  if: github.ref == 'refs/heads/main'

  steps:
    - name: Run Benchmarks
      run: python scripts/run_benchmarks.py --output-dir benchmark_results

    - name: Update documentation
      run: |
        git add docs/benchmarks/performance_results.md
        git commit -m "📊 Update benchmark results [skip ci]"
        git push
```

## Requirements

### System Requirements

- **Python 3.8+**: Required for running the automation scripts
- **XMake 2.6+**: Build system for the C++ benchmarks
- **Google Benchmark**: C++ benchmarking framework
- **Git**: Version control (for CI integration)

### Python Dependencies

```bash
# Install from requirements.txt
pip install -r requirements.txt
```

The modules the scripts use directly (`json`, `argparse`, `pathlib`, `typing`, `datetime`) are part of the Python standard library and need no separate installation.

### Building Requirements

```bash
# Install system dependencies (Ubuntu/Debian)
sudo apt-get install build-essential cmake git curl

# Install Google Benchmark
git clone https://github.com/google/benchmark.git
cd benchmark
cmake -E make_directory "build"
cmake -E chdir "build" cmake -DBENCHMARK_DOWNLOAD_DEPENDENCIES=on -DCMAKE_BUILD_TYPE=Release ../
cmake --build "build" --config Release --target install
```

## Best Practices

### Running Benchmarks

1. **Consistent Environment**: Run benchmarks on the same hardware configuration
2. **Minimal Background Load**: Close unnecessary applications while benchmarking
3. **Multiple Runs**: Use benchmark repetitions for statistical significance
4. **Thermal Management**: Allow the system to cool between intensive benchmark runs
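Repetitions (point 3) can be requested directly through Google Benchmark's standard command-line flags; a small hypothetical helper makes the idea concrete:

```python
def repeated_run_args(executable, repetitions=10):
    """Command line that repeats each benchmark and reports only the
    aggregates (mean/median/stddev), smoothing out background-load noise."""
    return [
        executable,
        f"--benchmark_repetitions={repetitions}",
        "--benchmark_report_aggregates_only=true",
    ]
```

Ten repetitions with aggregate-only reporting is a reasonable default; raise the count when results look noisy.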
148+
149+
### Performance Monitoring
150+
151+
1. **Regular Benchmarking**: Run benchmarks on every major change
152+
2. **Baseline Establishment**: Maintain stable baseline results for comparison
153+
3. **Regression Investigation**: Investigate any performance regressions immediately
154+
4. **Performance Budgets**: Set performance budgets for different operations
155+
156+
### Documentation Updates
157+
158+
1. **Automated Updates**: Use CI to automatically update benchmark results
159+
2. **Historical Tracking**: Keep timestamped benchmark reports for trend analysis
160+
3. **Performance Explanations**: Document reasons for significant performance changes
161+
4. **Benchmark Descriptions**: Maintain clear descriptions of what each benchmark measures
162+
163+
## Troubleshooting
164+
165+
### Common Issues
166+
167+
**Build Failures**:
168+
- Ensure XMake is installed and in PATH
169+
- Check system dependencies (build-essential, cmake)
170+
- Verify C++ compiler is available
171+
172+
**Benchmark Failures**:
173+
- Check Google Benchmark installation
174+
- Verify benchmark executables are built correctly
175+
- Ensure sufficient system resources
176+
177+
**Documentation Updates**:
178+
- Check file permissions for documentation directory
179+
- Verify git configuration for automated commits
180+
- Ensure CI has proper repository access
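For the "installed and on PATH" class of build failure, a script can check its prerequisites up front using the standard-library `shutil.which`; the helper name here is hypothetical:

```python
import shutil

def missing_tools(required=("xmake", "cmake", "git")):
    """Return the required executables that are not found on PATH."""
    return [tool for tool in required if shutil.which(tool) is None]
```

Failing fast with a clear list of missing tools gives a much better error message than a mid-build compiler or XMake failure.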
### Getting Help

For issues with benchmark automation:

1. Check the GitHub Actions logs for detailed error messages
2. Run the benchmarks locally to isolate issues
3. Review the system requirements and dependencies
4. Submit issues to the project repository

## Contributing

When adding new benchmarks:

1. **Add C++ Benchmark**: Create the benchmark in the `benchmarks/` directory
2. **Update Build Configuration**: Add the benchmark target to `xmake.lua`
3. **Test Locally**: Run `python scripts/run_benchmarks.py` to verify
4. **Documentation**: Update this README with any new features
5. **CI Testing**: Ensure the benchmarks run correctly in the CI environment

For script improvements:

1. **Maintain Compatibility**: Ensure changes work with the existing CI setup
2. **Add Tests**: Include unit tests for new functionality
3. **Update Documentation**: Keep this README current with changes
4. **Performance Considerations**: Optimize scripts for CI execution time
