Add benchmark workflow #153
base: main
Conversation
Codecov Report
✅ All modified and coverable lines are covered by tests.
Additional details and impacted files:

@@           Coverage Diff           @@
##             main     #153   +/-   ##
=======================================
  Coverage   91.88%   91.88%
=======================================
  Files          14       14
  Lines        2404     2404
  Branches     2404     2404
=======================================
  Hits         2209     2209
  Misses        147      147
  Partials       48       48

☔ View full report in Codecov by Sentry.
Signed-off-by: Simon Marty <[email protected]>
simonmarty left a comment:
It looks like the upload is not succeeding with
Warning: No files were found with the provided path: aws_secretsmanager_caching/target/criterion/. No artifacts will be uploaded.
…results aren't generated
ThirdEyeSqueegee left a comment:
The Criterion book has a section about benchmarking async functions. Did we take this into consideration when writing these tests? Not sure how it plays with Tokio.
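For reference, a minimal, self-contained sketch of what the Criterion book's async approach could look like, assuming criterion is built with its async_tokio feature; fetch_secret is a hypothetical stand-in for the cache call this PR actually benchmarks:

use criterion::{criterion_group, criterion_main, Criterion};
use tokio::runtime::Runtime;

// Hypothetical async workload standing in for cache.get_secret_value(...).
async fn fetch_secret(id: &str) -> String {
    format!("secret-value-for-{id}")
}

fn bench_async_fetch(c: &mut Criterion) {
    let rt = Runtime::new().unwrap();
    c.bench_function("fetch_secret_async", |b| {
        // to_async() drives each measured iteration on the Tokio runtime,
        // so the future itself is what gets timed rather than a block_on wrapper.
        b.to_async(&rt).iter(|| fetch_secret("secretid"));
    });
}

criterion_group!(benches, bench_async_fetch);
criterion_main!(benches);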
// Warm up the cache.
rt.block_on(cache.get_secret_value("secretid", None, None, false))
    .unwrap();
Not sure we need to do this. Criterion seems to warm up caches automatically (ref)
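As a rough illustration of that point (the names and the warm-up duration below are placeholders, not the PR's code): Criterion runs every routine through a warm-up phase before recording samples, and that phase can be lengthened in the group config instead of priming the cache by hand.

use std::time::Duration;
use criterion::{criterion_group, criterion_main, Criterion};

fn bench_cached_read(c: &mut Criterion) {
    c.bench_function("cached_read", |b| {
        // Criterion executes this closure repeatedly during its warm-up
        // window before any measurements are collected.
        b.iter(|| std::hint::black_box(2 + 2));
    });
}

criterion_group! {
    name = benches;
    // Extend the built-in warm-up window rather than warming the cache manually.
    config = Criterion::default().warm_up_time(Duration::from_secs(5));
    targets = bench_cached_read
}
criterion_main!(benches);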
if: always()
uses: actions/upload-artifact@v4
with:
  name: benchmark-results-${{ github.sha }}
Criterion has built-in support for identifying performance improvements and regressions; it does this by comparing against older results in the benchmark results folder. We should use the same folder for all benchmark results (or perhaps per-branch folders) (ref).
Issue #, if available:
Description of changes:
This PR builds on PR #122 and adds a benchmark workflow, similar to the integration tests workflow, that runs the benchmarks introduced in PR #122:
Add two basic benchmarks
Using Criterion (also used by the AWS SDK for Rust).
This will hopefully help catch performance regressions in future code changes and open the door to profiling (see the sketch below).
Successful run from fork: https://github.com/reyhankoyun/aws-secretsmanager-agent/actions/runs/20082405961/job/57612354790
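For readers who have not opened PR #122, a rough sketch of the general shape two Criterion benchmarks can take when driving async calls with block_on, as in the review quote above; get_secret and refresh_secret are hypothetical stand-ins, not the caching library's API:

use criterion::{criterion_group, criterion_main, Criterion};
use tokio::runtime::Runtime;

// Hypothetical async operations standing in for the cache's real calls.
async fn get_secret(id: &str) -> String {
    format!("value-of-{id}")
}

async fn refresh_secret(id: &str) -> String {
    format!("refreshed-{id}")
}

fn bench_get(c: &mut Criterion) {
    let rt = Runtime::new().unwrap();
    c.bench_function("get_secret_value", |b| {
        // Drive the async call to completion on the runtime for each iteration.
        b.iter(|| rt.block_on(get_secret("secretid")));
    });
}

fn bench_refresh(c: &mut Criterion) {
    let rt = Runtime::new().unwrap();
    c.bench_function("refresh_secret_value", |b| {
        b.iter(|| rt.block_on(refresh_secret("secretid")));
    });
}

criterion_group!(benches, bench_get, bench_refresh);
criterion_main!(benches);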
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.