dev: Pytest-split test #1144
Conversation
Codecov Report
All modified and coverable lines are covered by tests ✅
✅ All tests successful. No failed tests found.
Additional details and impacted files
@@ Coverage Diff @@
## main #1144 +/- ##
========================================
Coverage 96.07% 96.07%
========================================
Files 838 836 -2
Lines 19775 20005 +230
========================================
+ Hits 18998 19220 +222
- Misses 777 785 +8
Flags with carried forward coverage won't be shown. Click here to find out more. ☔ View full report in Codecov by Sentry.
test_env.run_integration:
	#docker-compose exec api make test.integration
	# @if [ -n "$(GROUP)" ]; then \
Un-comment this if we ever have integration tests and it should just work.
Do you know why this command is commented out to begin with?
THIS IS HUGE
.github/workflows/ci.yml
Outdated
@@ -49,7 +49,7 @@ jobs:
   test:
     name: Test
     needs: [build]
-    uses: codecov/gha-workflows/.github/workflows/run-tests.yml@v1.2.27
+    uses: codecov/gha-workflows/.github/workflows/run-tests-split.yml@285163a75899bad2018fe960ac9dba7530e009fb
TODO: update this to the new release hash before merging
This is amazing - thank you for doing it for us!
graphql_api/tests/test_pull.py
Outdated
assert pull == {
    "bundleAnalysisCompareWithBase": {"__typename": "MissingBaseReport"}
}
# def test_compare_bundle_analysis_missing_reports(self):
Maybe leave a comment explaining why this is commented out?
Yeah, good point. We did create a ticket so it doesn't get lost: codecov/engineering-team#3358. I can link that to the test directly too.
Updated
.test_durations
Outdated
Is this file a record of how long our suite took to run? If it gets recommitted with every PR, maybe we should just gitignore it?
Re: the commented-out command, it's because we don't have integration tests in API (yet) 😅
Re: .test_durations, yes, it records how long each individual test ran. It only gets "created" or "updated" when we pass the --test-durations flag or something, so it shouldn't be regenerated unless we explicitly want to.
It's actually the --store-durations flag, I misspoke.
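For reference, a minimal sketch of that flag in use (assuming pytest is invoked from the repo root the same way the rest of the suite is; the exact Makefile target isn't shown in this thread):

# Re-runs the suite and (re)writes .test_durations; only needed occasionally
pytest --store-durations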
Purpose/Motivation
This PR tests adding the pytest-split package and running it locally to see if it could potentially help speed up our CI.
On the one hand, it was super straightforward to set up, and it would help a lot with running the tests in CI (with 5 groups, for example, each group would take ~1-2 minutes to run). The documentation also states that we don't need to run --store-durations very often unless our test suite changes a lot, since pytest-split uses the suite's average test duration for each subsequent test added.
On the other hand, this doesn't help with runtimes for the full suite locally. While you can be selective about the tests you run, I usually run the full suite first to see which tests a PR has broken before running individual suites to fix them. If it turns out that most folks don't usually run the full suite locally anyway, then maybe it's not a big deal.
This PR handles everything that's needed on the API side; we'd still need to spin up new GH Actions jobs, one per test group, that run afterwards (see the sketch below).
They have a demo repo in their docs showing how to set this up with a generic test suite.
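To make that concrete, here's a rough, hypothetical sketch of what a matrix-based split job could look like. The job name, Python version, and install step are illustrative assumptions only; the real change in this PR points at the reusable codecov/gha-workflows workflow instead.

# Hypothetical sketch; the number of matrix entries must match the --splits value below
test-split:
  runs-on: ubuntu-latest
  strategy:
    fail-fast: false
    matrix:
      group: [1, 2, 3, 4, 5]
  steps:
    - uses: actions/checkout@v4
    - uses: actions/setup-python@v5
      with:
        python-version: "3.12"
    - name: Install dependencies   # illustrative; the real workflow uses the repo's own tooling
      run: pip install -r requirements.txt pytest-split
    - name: Run test group ${{ matrix.group }}
      run: pytest --splits 5 --group ${{ matrix.group }}

The key point is just that the matrix size and the --splits value have to agree; pytest-split then uses the committed .test_durations file to assign tests to each group.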
UPDATE:
We got it to work and the results are pretty good! CI times for API look to have gone from ~15:30 -> 6:30, or a 58% reduction in CI run time!
Before run: https://github.com/codecov/codecov-api/actions/runs/13274970819
After run: https://github.com/codecov/codecov-api/actions/runs/13315813179
The .test_durations file needs to be committed because it's what pytest-split references when it creates the groups. We can choose to re-commit it every month, every year, or on any other cadence, but it isn't required after the first time, since the suite's average test runtime will be used for any "new" tests added to the suite.
Again, this doesn't modify anything for your local Docker setup, but you can run the test groups locally if you choose to via:
pytest --splits <numSplits> --group <groupNum>
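For example (the numbers here are arbitrary, not values this PR prescribes), to run the second of five groups locally:

pytest --splits 5 --group 2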
Where to go from here?
Legal Boilerplate
Look, I get it. The entity doing business as "Sentry" was incorporated in the State of Delaware in 2015 as Functional Software, Inc. In 2022 this entity acquired Codecov and as result Sentry is going to need some rights from me in order to utilize my contributions in this PR. So here's the deal: I retain all rights, title and interest in and to my contributions, and by keeping this boilerplate intact I confirm that Sentry can use, modify, copy, and redistribute my contributions, under Sentry's choice of terms.