This document explains how to configure Prof for CI/CD environments to reduce noise and make performance regression detection more reliable.
Prof's CI/CD configuration allows you to:
- Filter out noisy functions that shouldn't cause CI/CD failures
- Set different thresholds for different benchmarks
- Override command-line settings with configuration files
- Fail on unexpected improvements if needed
The CI/CD configuration is added to your existing `config_template.json` file under the `ci_config` section:

```json
{
  "function_collection_filter": {
    // ... existing function filtering ...
  },
  "ci_config": {
    "global": {
      // Global CI/CD settings
    },
    "benchmarks": {
      "BenchmarkName": {
        // Benchmark-specific CI/CD settings
      }
    }
  }
}
```

Global settings apply to all benchmarks unless overridden by benchmark-specific settings:
"global": {
"ignore_functions": [
"runtime.gcBgMarkWorker",
"runtime.systemstack",
"testing.(*B).ResetTimer"
],
"ignore_prefixes": [
"runtime.",
"reflect.",
"testing."
],
"min_change_threshold": 5.0,
"max_regression_threshold": 20.0,
"fail_on_improvement": false,
}| Setting | Description | Default |
|---|---|---|
ignore_functions |
Functions to ignore during CI/CD (exact matches) | [] |
ignore_prefixes |
Function prefixes to ignore (e.g., "runtime.") | [] |
min_change_threshold |
Minimum change % to trigger CI/CD failure | 0.0 |
max_regression_threshold |
Maximum acceptable regression % | ∞ |
fail_on_improvement |
Whether to fail on performance improvements | false |
You can override global settings for specific benchmarks:
"benchmarks": {
"BenchmarkMyFunction": {
"ignore_functions": ["BenchmarkMyFunction"],
"min_change_threshold": 3.0,
"max_regression_threshold": 10.0,
"fail_on_improvement": true,
}
}Functions can be ignored by exact name:
"ignore_functions": [
"runtime.gcBgMarkWorker",
"testing.(*B).ResetTimer",
"myproject.BenchmarkFunction"
]Functions can be ignored by package prefix:
"ignore_prefixes": [
"runtime.",
"reflect.",
"testing.",
"syscall.",
"internal/cpu."
]This will ignore all functions from the runtime, reflect, testing, syscall, and internal/cpu packages.
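Prefix matching amounts to a simple string comparison against each configured prefix. A sketch in Go (the helper name is hypothetical; Prof's real implementation may differ):

```go
package main

import (
	"fmt"
	"strings"
)

// ignoredByPrefix reports whether a function name starts with any
// configured ignore prefix. Hypothetical helper for illustration.
func ignoredByPrefix(name string, prefixes []string) bool {
	for _, p := range prefixes {
		if strings.HasPrefix(name, p) {
			return true
		}
	}
	return false
}

func main() {
	prefixes := []string{"runtime.", "reflect.", "testing."}
	fmt.Println(ignoredByPrefix("runtime.gcBgMarkWorker", prefixes)) // true
	fmt.Println(ignoredByPrefix("myproject.Process", prefixes))      // false
}
```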
Only functions with changes ≥ this threshold will cause CI/CD failures:
"min_change_threshold": 5.0This prevents CI/CD from failing on minor fluctuations (e.g., 1-2% changes).
This overrides command-line --regression-threshold settings:
"max_regression_threshold": 15.0If a function regresses by 15%, CI/CD will fail regardless of command-line settings.
- Benchmark-specific `max_regression_threshold`
- Global `max_regression_threshold`
- Command-line `--regression-threshold`

The most restrictive (lowest) threshold wins.
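The "lowest threshold wins" rule can be sketched as a small selection function in Go (hypothetical logic, not Prof's actual resolution code; here a negative value stands for "not set"):

```go
package main

import "fmt"

// effectiveThreshold picks the lowest (most restrictive) of the
// configured regression thresholds; negative values mean "not set".
// Illustrative only.
func effectiveThreshold(benchmark, global, cli float64) float64 {
	best := -1.0
	for _, t := range []float64{benchmark, global, cli} {
		if t >= 0 && (best < 0 || t < best) {
			best = t
		}
	}
	return best
}

func main() {
	// benchmark-specific 10%, global 20%, CLI flag 15% -> 10% wins
	fmt.Println(effectiveThreshold(10.0, 20.0, 15.0)) // 10
}
```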
Sometimes you want to detect unexpected performance improvements:
"fail_on_improvement": trueThis is useful when:
- Performance improvements might indicate bugs
- You want to track all significant changes
- You're debugging unexpected behavior
Here's a complete configuration example:
```json
{
  "function_collection_filter": {
    "*": {
      "include_prefixes": ["github.com/myorg/myproject"],
      "ignore_functions": ["init", "TestMain"]
    }
  },
  "ci_config": {
    "global": {
      "ignore_functions": ["runtime.gcBgMarkWorker", "testing.(*B).ResetTimer"],
      "ignore_prefixes": ["runtime.", "reflect.", "testing."],
      "min_change_threshold": 5.0,
      "max_regression_threshold": 20.0,
      "fail_on_improvement": false
    },
    "benchmarks": {
      "BenchmarkCriticalPath": {
        "min_change_threshold": 1.0,
        "max_regression_threshold": 5.0
      }
    }
  }
}
```

A matching GitHub Actions workflow:

```yaml
name: Performance Regression Check
on: [pull_request]
jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: ">=1.24"
      - name: Install prof
        run: go install github.com/AlexsanderHamir/prof/cmd/prof@latest
      - name: Collect baseline
        run: |
          git fetch origin main --depth=1
          git checkout -qf origin/main
          prof auto --benchmarks "BenchmarkMyFunction" --profiles "cpu" --count 5 --tag baseline
      - name: Collect current
        run: |
          git checkout -
          prof auto --benchmarks "BenchmarkMyFunction" --profiles "cpu" --count 5 --tag PR
      - name: Check for regressions
        run: |
          prof track auto --base baseline --current PR \
            --profile-type cpu --bench-name "BenchmarkMyFunction" \
            --output-format summary
```

The configuration file must be at your project root (same directory as `go.mod`):
```
your-project/
├── go.mod
├── config_template.json   # ← CI/CD config goes here
├── cmd/
├── internal/
└── ...
```
Here's a complete example that shows how to set up CI/CD performance tracking without requiring CLI flags:
```json
{
  "ci_config": {
    "global": {
      "ignore_prefixes": ["runtime.", "reflect.", "testing."],
      "min_change_threshold": 5.0,
      "max_regression_threshold": 15.0,
      "fail_on_improvement": false
    },
    "benchmarks": {
      "BenchmarkMyFunction": {
        "min_change_threshold": 3.0,
        "max_regression_threshold": 10.0,
        "ignore_functions": ["setup", "teardown"]
      }
    }
  }
}
```

```yaml
- name: Check for regressions
  run: |
    prof track auto --base baseline --current PR \
      --profile-type cpu --bench-name "BenchmarkMyFunction" \
      --output-format summary
```

Notice that no `--fail-on-regression` or `--regression-threshold` flags are needed. The tool automatically uses the thresholds from your configuration file.
Begin with global settings that apply to all benchmarks:
"global": {
"ignore_prefixes": ["runtime.", "reflect.", "testing."],
"min_change_threshold": 5.0
}When using CI/CD configuration, the --fail-on-regression and --regression-threshold flags become optional:
With CLI flags (overrides config):
```bash
prof track auto --base baseline --current PR \
  --profile-type cpu --bench-name "BenchmarkMyFunction" \
  --output-format summary --fail-on-regression --regression-threshold 5.0
```

Without CLI flags (uses config only):

```bash
prof track auto --base baseline --current PR \
  --profile-type cpu --bench-name "BenchmarkMyFunction" \
  --output-format summary
```

The second approach uses the thresholds defined in your `config_template.json` file. This makes CI/CD pipelines cleaner and more maintainable.
Only override global settings when necessary:
"benchmarks": {
"BenchmarkCriticalPath": {
"min_change_threshold": 1.0 // More sensitive for critical paths
}
}Don't ignore too many functions - you might miss real regressions:
"ignore_functions": [
"runtime.gcBgMarkWorker", // Known noisy function
"testing.(*B).ResetTimer" // Test infrastructure
]min_change_threshold: 5-10% for most casesmax_regression_threshold: 15-25% for most cases- Critical paths: 1-5%
Review CI/CD failures and adjust thresholds based on:
- False positives (too sensitive)
- Missed regressions (not sensitive enough)
- Team feedback
- Configuration not loaded: Ensure `config_template.json` is at project root
- Functions still causing failures: Check `ignore_functions` and `ignore_prefixes`
- Thresholds not working: Verify `min_change_threshold` and `max_regression_threshold`
- Global vs benchmark settings: Benchmark-specific settings override global
- CLI flags vs config: When using CI/CD config, `--fail-on-regression` and `--regression-threshold` are optional
Prof logs configuration loading and filtering decisions:
```bash
prof track auto --base baseline --current PR --bench-name "BenchmarkMyFunction"
```

Look for logs like:
- "Applied CI/CD configuration filtering"
- "Function ignored by CI/CD config"
- "Performance regression below minimum threshold"
Prof validates configuration on startup. Common validation errors:
- Negative thresholds
- Malformed JSON
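These two checks are straightforward to express. A Go sketch of what such startup validation might look like (hypothetical logic, not Prof's actual validation code):

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

// validate mirrors the two kinds of startup errors listed above.
// Hypothetical helper for illustration only.
func validate(raw []byte, minChange, maxRegression float64) error {
	if !json.Valid(raw) {
		return errors.New("malformed JSON")
	}
	if minChange < 0 || maxRegression < 0 {
		return errors.New("negative thresholds")
	}
	return nil
}

func main() {
	fmt.Println(validate([]byte(`{"ci_config": {}}`), 5.0, 20.0)) // <nil>
	fmt.Println(validate([]byte(`{`), 5.0, 20.0))                 // malformed JSON
}
```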
If you're currently using command-line flags:
```bash
prof track auto --base baseline --current PR \
  --bench-name "BenchmarkMyFunction" \
  --fail-on-regression --regression-threshold 10.0
```

Move the threshold into your configuration file instead:

```json
{
  "ci_config": {
    "global": {
      "max_regression_threshold": 10.0
    }
  }
}
```

The configuration file provides the same functionality with more flexibility and better maintainability.