# ZLayout Scripts

This directory contains automation scripts for the ZLayout project, including benchmark automation and documentation generation.

## Benchmark Automation

### `run_benchmarks.py`

Comprehensive benchmark automation script that builds and runs the C++ benchmarks, then generates results in multiple formats for documentation integration.

#### Usage

```bash
# Run all benchmarks with default settings
python scripts/run_benchmarks.py

# Specify custom build and output directories
python scripts/run_benchmarks.py --build-dir build --output-dir my_results

# Run and update documentation
python scripts/run_benchmarks.py --update-docs
```

#### Features

- **Automatic Build**: Configures and builds benchmark executables using XMake
- **Multi-format Output**: Generates JSON, Markdown, and summary reports
- **System Information**: Captures hardware and software configuration
- **Documentation Integration**: Automatically updates benchmark results in the documentation
- **Error Handling**: Robust error handling and reporting

#### Output Files

- `benchmark_summary.json`: Complete benchmark results with system information
- `benchmark_report.md`: Human-readable Markdown report
- `geometry_results.json`: Raw geometry benchmark results
- `quadtree_results.json`: Raw spatial index benchmark results
- `benchmark_report_YYYYMMDD_HHMMSS.md`: Timestamped reports

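For tooling that consumes these files, the sketch below shows one plausible shape for `benchmark_summary.json`. The field names here are illustrative assumptions; the authoritative structure is whatever `run_benchmarks.py` actually emits.

```python
import json

# Hypothetical structure for benchmark_summary.json -- the real field names
# are defined by run_benchmarks.py; treat this purely as an illustration.
summary = {
    "system": {"os": "Linux", "cpu": "x86_64", "python": "3.11"},
    "timestamp": "20240101_120000",
    "benchmarks": [
        {"name": "BM_PolygonArea", "real_time_ns": 1520.0, "iterations": 450000},
        {"name": "BM_QuadTreeInsert", "real_time_ns": 310.0, "iterations": 2200000},
    ],
}

# Round-trip through JSON to confirm the structure serializes cleanly.
parsed = json.loads(json.dumps(summary, indent=2))
print(len(parsed["benchmarks"]))  # prints: 2
```
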
### `compare_benchmarks.py`

Compares the results of two benchmark runs and generates a performance regression report.

#### Usage

```bash
# Compare two benchmark result files
python scripts/compare_benchmarks.py baseline.json current.json

# Set a custom regression tolerance (default: 5%)
python scripts/compare_benchmarks.py baseline.json current.json --tolerance 0.10

# Save the comparison report to a file
python scripts/compare_benchmarks.py baseline.json current.json --output comparison_report.md

# Exit with an error if regressions are detected (useful for CI)
python scripts/compare_benchmarks.py baseline.json current.json --fail-on-regression
```

#### Features

- **Regression Detection**: Automatically identifies performance regressions
- **Improvement Tracking**: Highlights performance improvements
- **Tolerance Settings**: Configurable threshold for regression detection
- **Detailed Reports**: Comprehensive Markdown reports with tables and summaries
- **CI Integration**: Exit codes for automated pipeline integration

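At its core, regression detection is a relative-change test against the tolerance. A minimal sketch of the idea, assuming timings in nanoseconds keyed by benchmark name (the real script's data layout may differ):

```python
def find_regressions(baseline: dict, current: dict, tolerance: float = 0.05):
    """Return benchmarks whose time grew by more than `tolerance` (fractional).

    `baseline` and `current` map benchmark names to timings in nanoseconds
    (larger is slower). This mirrors the idea behind compare_benchmarks.py,
    not its exact implementation.
    """
    regressions = {}
    for name, base_time in baseline.items():
        if name not in current or base_time <= 0:
            continue  # new/removed benchmarks are reported separately
        change = (current[name] - base_time) / base_time
        if change > tolerance:
            regressions[name] = change
    return regressions

print(find_regressions({"BM_Area": 100.0, "BM_Insert": 50.0},
                       {"BM_Area": 112.0, "BM_Insert": 51.0}))
# prints: {'BM_Area': 0.12}
```

`BM_Area` slowed by 12% (above the 5% tolerance) and is flagged; `BM_Insert` slowed by only 2% and is not.
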
#### Report Sections

- **Summary**: Overview of the comparison results
- **Performance Regressions**: Benchmarks that have slowed down
- **Performance Improvements**: Benchmarks that have sped up
- **New Benchmarks**: Benchmarks added since the baseline
- **Removed Benchmarks**: Benchmarks removed since the baseline
- **Overall Assessment**: Summary of the performance impact

## CI Integration

The benchmark automation is integrated into the GitHub Actions workflow:

1. **Benchmark Job**: Runs on every push to the main branch
2. **Automatic Documentation Updates**: Updates benchmark results in the documentation
3. **Performance Monitoring**: Tracks performance changes over time
4. **Artifact Storage**: Stores benchmark results as CI artifacts

### Workflow Configuration

The CI workflow includes:

```yaml
benchmark:
  runs-on: ubuntu-latest
  needs: [test, build]
  if: github.ref == 'refs/heads/main'

  steps:
    - name: Run Benchmarks
      run: python scripts/run_benchmarks.py --output-dir benchmark_results

    - name: Update documentation
      run: |
        git add docs/benchmarks/performance_results.md
        git commit -m "📊 Update benchmark results [skip ci]"
        git push
```

## Requirements

### System Requirements

- **Python 3.8+**: Required to run the automation scripts
- **XMake 2.6+**: Build system for the C++ benchmarks
- **Google Benchmark**: C++ benchmarking framework
- **Git**: Version control (for CI integration)

### Python Dependencies

The scripts rely only on the Python standard library (`json`, `argparse`, `pathlib`, `typing`, and `datetime` are all built in and must not be installed with pip). Any optional extras are listed in `requirements.txt`:

```bash
pip install -r requirements.txt
```

### Build Requirements

```bash
# Install system dependencies (Ubuntu/Debian)
sudo apt-get install build-essential cmake git curl

# Install Google Benchmark
git clone https://github.com/google/benchmark.git
cd benchmark
cmake -E make_directory "build"
cmake -E chdir "build" cmake -DBENCHMARK_DOWNLOAD_DEPENDENCIES=on -DCMAKE_BUILD_TYPE=Release ../
cmake --build "build" --config Release --target install
```

## Best Practices

### Running Benchmarks

1. **Consistent Environment**: Run benchmarks on the same hardware configuration
2. **Minimal Background Load**: Close unnecessary applications while benchmarking
3. **Multiple Runs**: Use benchmark repetitions for statistical significance
4. **Thermal Management**: Let the system cool down between intensive runs

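For point 3, Google Benchmark binaries accept `--benchmark_repetitions=N` (and `--benchmark_report_aggregates_only=true`) to emit mean/median/stddev rows directly. If you aggregate repeated runs yourself, the standard library is enough. A sketch with illustrative timing values, not part of the existing scripts:

```python
import statistics

# Timings (ns) from repeated runs of the same benchmark -- illustrative values.
runs = [1520.0, 1498.0, 1534.0, 1510.0, 1525.0]

mean = statistics.mean(runs)
stdev = statistics.stdev(runs)
# Coefficient of variation is a quick noise indicator; under ~2% is
# usually stable enough for regression comparisons.
cv = stdev / mean
print(f"mean={mean:.1f}ns stdev={stdev:.1f}ns cv={cv:.2%}")
```
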
### Performance Monitoring

1. **Regular Benchmarking**: Run benchmarks on every major change
2. **Baseline Establishment**: Maintain stable baseline results for comparison
3. **Regression Investigation**: Investigate any performance regression immediately
4. **Performance Budgets**: Set performance budgets for different operations

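A performance budget can be enforced with the same comparison machinery. A minimal sketch; the benchmark names and budget values below are made up for illustration and would need tuning per machine:

```python
# Hypothetical per-operation budgets in nanoseconds; tune for your hardware.
BUDGETS_NS = {"BM_PolygonArea": 2000.0, "BM_QuadTreeInsert": 500.0}

def over_budget(results: dict) -> list:
    """Return the names of benchmarks exceeding their budget."""
    return [name for name, limit in BUDGETS_NS.items()
            if results.get(name, 0.0) > limit]

print(over_budget({"BM_PolygonArea": 1520.0, "BM_QuadTreeInsert": 610.0}))
# prints: ['BM_QuadTreeInsert']
```

A check like this can run right after `compare_benchmarks.py` in CI and fail the job when a budget is blown.
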
### Documentation Updates

1. **Automated Updates**: Use CI to update benchmark results automatically
2. **Historical Tracking**: Keep timestamped benchmark reports for trend analysis
3. **Performance Explanations**: Document the reasons for significant performance changes
4. **Benchmark Descriptions**: Maintain clear descriptions of what each benchmark measures

## Troubleshooting

### Common Issues

**Build failures**:
- Ensure XMake is installed and on the `PATH`
- Check system dependencies (`build-essential`, `cmake`)
- Verify that a C++ compiler is available

**Benchmark failures**:
- Check the Google Benchmark installation
- Verify that the benchmark executables were built correctly
- Ensure sufficient system resources

**Documentation update failures**:
- Check file permissions for the documentation directory
- Verify the git configuration for automated commits
- Ensure CI has proper repository access

### Getting Help

For issues with benchmark automation:

1. Check the GitHub Actions logs for detailed error messages
2. Run the benchmarks locally to isolate the issue
3. Review the system requirements and dependencies
4. Submit issues to the project repository

## Contributing

When adding new benchmarks:

1. **Add the C++ Benchmark**: Create the benchmark in the `benchmarks/` directory
2. **Update the Build Configuration**: Add the benchmark target to `xmake.lua`
3. **Test Locally**: Run `python scripts/run_benchmarks.py` to verify
4. **Documentation**: Update this README with any new features
5. **CI Testing**: Ensure the benchmarks run correctly in the CI environment

For script improvements:

1. **Maintain Compatibility**: Ensure changes work with the existing CI setup
2. **Add Tests**: Include unit tests for new functionality
3. **Update Documentation**: Keep this README current
4. **Performance Considerations**: Keep the scripts fast enough for CI execution