feat: implement flake processing using timeseries models #1140
Conversation
```python
def handle_pass(curr_flakes: dict[bytes, Flake], test_id: bytes):
    curr_flakes[test_id].recent_passes_count += 1
    curr_flakes[test_id].count += 1
    if curr_flakes[test_id].recent_passes_count == 30:
```
Moving this magic constant to the top level would make sense, along with an explanation of what it means: "after X passes in a row, the test is no longer marked as flaky".
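For illustration, a minimal sketch of what that suggestion could look like; the constant name `FLAKE_EXPIRY_COUNT` and the function body below are assumptions, not the PR's actual code:

```python
# Hypothetical sketch; FLAKE_EXPIRY_COUNT is an illustrative name, not the PR's.

# After this many consecutive passes, a test is no longer marked as flaky.
FLAKE_EXPIRY_COUNT = 30


def handle_pass(curr_flakes: dict[bytes, "Flake"], test_id: bytes) -> None:
    flake = curr_flakes[test_id]
    flake.recent_passes_count += 1
    flake.count += 1
    if flake.recent_passes_count == FLAKE_EXPIRY_COUNT:
        # the test has passed FLAKE_EXPIRY_COUNT times in a row;
        # stop treating it as flaky (expiry logic omitted here)
        ...
```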
{"test_id": "test1", "outcome": "pass"}, | ||
{"test_id": "test1", "outcome": "failure"}, |
Testing with 2x `pass` would trigger the error I mentioned above: you clear the test from the current flakes, and on the second iteration you try to access it again.
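One way to avoid the KeyError being described, sketched under the same assumptions as above (hypothetical names, not the PR's code): tolerate a `test_id` whose flake has already been expired and removed.

```python
def handle_pass(curr_flakes: dict[bytes, "Flake"], test_id: bytes) -> None:
    flake = curr_flakes.get(test_id)
    if flake is None:
        # the flake already expired and was removed on an earlier pass
        return
    flake.recent_passes_count += 1
    flake.count += 1
    if flake.recent_passes_count == FLAKE_EXPIRY_COUNT:
        # stop tracking; later passes for this test must not assume the key exists
        del curr_flakes[test_id]
```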
```diff
@@ -78,8 +80,13 @@ def run_impl(
         extra=dict(repoid=repo_id, commit=commit_id),
     )

     if impl_type == "new" or impl_type == "both":
         process_flakes_for_repo(repo_id)
```
In case you use `both`, you are locking twice in a row. Maybe if you move the locking logic out of the function and put the invocations inside the lock here, that could be avoided.
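A rough sketch of the refactor being suggested, under assumed names (`repo_flake_lock`, `process_flakes_old`, and `process_flakes_new` are stand-ins, not the PR's API): acquire the per-repo lock once here and run whichever implementations apply inside it.

```python
from contextlib import contextmanager


@contextmanager
def repo_flake_lock(repo_id: int):
    # stand-in for the real per-repo lock (e.g. a Redis lock in the actual task)
    yield


def process_flakes_old(repo_id: int) -> None: ...  # lock-free variant of the existing implementation
def process_flakes_new(repo_id: int) -> None: ...  # lock-free variant of the timeseries implementation


def run_flake_processing(repo_id: int, impl_type: str) -> None:
    with repo_flake_lock(repo_id):  # acquired exactly once, even when impl_type == "both"
        if impl_type in ("old", "both"):
            process_flakes_old(repo_id)
        if impl_type in ("new", "both"):
            process_flakes_new(repo_id)
```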
I think this is necessary because they're locking different locks and reading from different keys.

There's an edge case where a commit is left over if we lock only once:

- call Task 1 with commit A: old key = [A], new key = [A]
- Task 1: takes the repo lock
- Task 1: completes the new invocation, old key = [A], new key = []
- call Task 2 with commit B: old key = [A, B], new key = [B]
- Task 2: fails to take the lock and just drops
- Task 1: completes the old invocation, old key = [], new key = [B]

B is left over and has to wait for another invocation of process flakes to get processed.
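For context, a purely illustrative sketch of the shape being defended here: each implementation drains its own queue key under its own lock, so a commit pushed to the new key while the old invocation is still running is picked up by the next run rather than dropped. All names and the in-memory queue stand-in below are assumptions, not the PR's code.

```python
from contextlib import contextmanager

# In-memory stand-in for the Redis-style queues used by the real task.
QUEUES: dict[str, list[str]] = {}


@contextmanager
def repo_lock(repo_id: int, suffix: str):
    # stand-in for a per-repo, per-implementation lock
    yield


def drain(key: str) -> list[str]:
    # take everything currently queued under `key`
    return QUEUES.pop(key, [])


def process_flakes_old(repo_id: int) -> None:
    with repo_lock(repo_id, suffix="old"):
        for commit in drain(f"flakes:old:{repo_id}"):
            ...  # update the existing flakes table for this commit


def process_flakes_new(repo_id: int) -> None:
    with repo_lock(repo_id, suffix="new"):
        for commit in drain(f"flakes:new:{repo_id}"):
            ...  # update the new timeseries table for this commit
```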
Codecov Report
Attention: Patch coverage is
✅ All tests successful. No failed tests found.

Additional details and impacted files:

```diff
@@            Coverage Diff             @@
##             main    #1140      +/-   ##
==========================================
- Coverage   97.72%   97.71%   -0.01%
==========================================
  Files         449      451       +2
  Lines       36866    37036     +170
==========================================
+ Hits        36028    36191     +163
- Misses        838      845       +7
```

Flags with carried forward coverage won't be shown. View full report in Codecov by Sentry.
The idea is that we'll start with `impl_type` set to `both` for a while, which will persist flakes to both the current DB table and the new one; then at some point we'll detect flaky tests in the finisher by checking the new flakes table.
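As a purely hypothetical sketch of that eventual finisher-side check (the table and column names below are assumptions, not something defined in this PR):

```python
def get_current_flaky_tests(cursor, repo_id: int) -> set[str]:
    # hypothetical query against the new flakes table; a row with no end_date
    # is treated as a still-active flake (table/column names are illustrative)
    cursor.execute(
        "SELECT test_id FROM flakes_timeseries WHERE repoid = %s AND end_date IS NULL",
        (repo_id,),
    )
    return {test_id for (test_id,) in cursor.fetchall()}
```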