ci: Automating scheduling benchmarking testing for PRs #2048
base: main
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: DerekFrank

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.

Hi @DerekFrank. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label.

I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Pull Request Test Coverage Report for Build 13710467043 (Details)

💛 - Coveralls
Absolutely! I wanted to keep the initial CI relatively lightweight, so I avoided any specific analysis of the results. There are lots of directions to take the general concept of CI performance testing, and I tend to err on the side of incremental, fast changes that let us see how contributors actually use the CI in practice. If a setup with multiple runs and a benchstat analysis of the results is preferred, I would be happy to add that to this PR or to tackle it in a follow-up PR.
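For reference, a minimal sketch of what a multi-run benchstat comparison step could look like; the package path, output file names, and `-count` value are assumptions for illustration, not part of this PR:

```yaml
# Hypothetical follow-up step: benchmark the base branch and the PR head
# several times each, then compare the results with benchstat.
# Package path, file names, and -count are illustrative assumptions.
- name: Compare benchmarks with benchstat
  run: |
    go install golang.org/x/perf/cmd/benchstat@latest
    # Benchmark the base of the PR.
    git checkout ${{ github.event.pull_request.base.sha }}
    go test -run='^$' -bench=. -count=6 ./pkg/... > base.txt
    # Benchmark the PR head.
    git checkout ${{ github.event.pull_request.head.sha }}
    go test -run='^$' -bench=. -count=6 ./pkg/... > head.txt
    # benchstat reports per-benchmark deltas with confidence intervals,
    # which helps separate real regressions from run-to-run noise.
    benchstat base.txt head.txt
```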
Fixes #N/A
Description
New CI to automatically run the scheduling benchmark tests for both the existing code and the proposed changes whenever a PR modifies a `.go` file. The new workflow comments the results on the PR and also stores CPU and memory profiles as downloadable artifacts that reviewers and approvers can use to analyze any performance improvements or regressions. The current setup leverages the `matrix` functionality to make setting up new benchmark CI tests simple in the future (a rough sketch of this shape of job is included below).

This is a first iteration of a CI suite for performance testing. There are a couple of natural, sometimes contradictory, extensions for the future:

- running the benchmarks multiple times and comparing base and PR results with benchstat
- trimming down the runtime of each `go test` call

CI testing will always be a moving target as the needs of the community develop; this is intended to be a quick and dirty approach to gather feedback and allow the CI performance testing to adapt to the needs of developers.
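As a rough sketch of the general shape (the workflow name, trigger, package path, and suite names here are assumptions, not the exact workflow added in this PR), a matrix-driven benchmark job might look like:

```yaml
# Illustrative sketch only; names, paths, and suites are assumptions rather
# than the exact workflow in this PR.
name: pr-benchmarks
on:
  pull_request:
    paths:
      - '**.go'   # only run when a PR touches Go code
jobs:
  benchmark:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # Adding another benchmark suite later is a one-line matrix entry.
        suite: [scheduling]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version-file: go.mod
      - name: Run ${{ matrix.suite }} benchmarks with profiles
        run: |
          go test -run='^$' -bench=. \
            -cpuprofile cpu-${{ matrix.suite }}.out \
            -memprofile mem-${{ matrix.suite }}.out \
            ./pkg/controllers/provisioning/${{ matrix.suite }} | tee bench-${{ matrix.suite }}.txt
      - name: Upload CPU and memory profiles
        uses: actions/upload-artifact@v4
        with:
          name: ${{ matrix.suite }}-profiles
          path: |
            cpu-${{ matrix.suite }}.out
            mem-${{ matrix.suite }}.out
```

Reviewers could then download the profile artifacts and inspect them locally with `go tool pprof`.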
How was this change tested?
Simulated PRs in my own fork to demonstrate the functionality.
PR that shows benchmarking in action:
PR that shows how docs and other non-`.go` changes won't get tested:

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.