Test potential fix for HA tracker failovers on r349#12178
Closed
Conversation
Force-pushed from 40ffde4 to b290354.
tacole02 (Contributor) approved these changes on Jul 24, 2025.
Co-authored-by: Taylor C <41653732+tacole02@users.noreply.github.com>
zenador (Author, Contributor):

@tacole02 Thank you for the review, applied suggestions! For future reference, it's easier if you update the help text in the `.go` files directly; for this one it's `pkg/util/validation/limits.go`. The other files with the same text are auto-generated from the Go code, so any changes made there are more likely to be overwritten inadvertently.
tacole02 (Contributor):

That's really good to know! Thanks, @zenador!
zenador added a commit that referenced this pull request on Oct 7, 2025:
#### What this PR does

Add an HA tracker sample failover timeout. It defaults to 0, which changes nothing, but when set it adds an additional failover timeout check based on the earliest sample time in the batch. This is meant to address fake counter resets that appear when there is no actual counter reset, and is not meant to be set otherwise. Based on #12178, which was tested on the customer's cell with their permission, resulting in no fake counter resets while the patch was applied, so we'd like to try and get this into main now.

---

Overview: We have user reports of false counter resets, where a counter appears to decrement without actually resetting to 0. This happens for users relying on the HA tracker for deduplication, whose replicas restart and elect a new leader fairly often, and whose samples are also fairly delayed when they send writes.

For example, the leader replica A sends a batch of samples at midnight, and some of those samples also have timestamps around midnight. If replica A stays healthy and remains the leader, subsequent samples continue to be spaced out by the scrape interval, e.g. 15s. However, if replica A restarts right after midnight, we try to fail over to replica B. With a failover timeout of 1 minute, we fail over at 12:01am. If there is no sample delay, the next samples received would have timestamps around 12:01am, giving a sample interval of about a minute, which is fine. But with this user's consistently high sample delay, say samples are about a minute old: when we fail over at 12:01am, we might receive new samples from replica B with timestamps around 12:00:01am, ending up with a sample interval of only a second, i.e. two samples from different replicas very close together in time. These are counters, so within a single replica they are strictly increasing or staying the same (assuming no resets). But different replicas may have slightly different values at the same point in time (depending on when they scrape their sources, etc.), so when this switch happens, the counter might go down a bit, causing a fake counter reset that shouldn't be there.

To fix this, we add an additional failover timeout check based on the earliest sample time instead, with a per-tenant feature toggle to control it. In the scenario above, if the earliest sample time in the batch from replica B is 12:00:01am, we would not fail over at 12:01am. We would only fail over once the earliest sample time is at least 12:01am, so the interval between samples cannot drop far below the scrape interval and cause these fake counter resets. The tradeoff is that failover takes longer and we miss more samples in the meantime, but since this is a toggle, users facing this problem can opt in to avoid the many fake counter resets that may be problematic for them.

<details>
<summary>More detail regarding how the counter can go down during failovers:</summary>

It can sometimes be due to differences in scrape duration (the value is read when the scrape begins but the timestamp is attached when the scrape ends), or perhaps minor clock sync issues, though it can theoretically happen even if the clocks are perfectly synced. See this simplified scenario:

1. Replica A scrapes at 12:00:00.000am and gets 100; the scrape takes 20ms, so the recorded timestamp is 12:00:00.020am.
2. Replica B scrapes at 12:00:00.700am and gets 102; the scrape takes 20ms, recorded timestamp 12:00:00.720am.
3. Replica A scrapes at 12:30:00.000am and gets 210; the scrape takes 1s, recorded timestamp 12:30:01.000am.
4. Replica B scrapes at 12:30:00.700am and gets 212; the scrape takes 20ms, recorded timestamp 12:30:00.720am.

Say replica B is the leader and the last ingested sample was (4). It then goes offline, and when we try to fail over to replica A, we receive a batch from A at 12:30:31am, outside the regular failover timeout of 30s, so we accept the failover. But because of the high sample delay, the batch includes the sample from (3), so we ingest (4) and then (3): the counter drops slightly (212 to 210) over a very small time interval, causing a fake reset.
</details>

#### Which issue(s) this PR fixes or relates to

N/A

#### Checklist

- [x] Tests updated.
- [x] Documentation added.
- [x] `CHANGELOG.md` updated - the order of entries should be `[CHANGE]`, `[FEATURE]`, `[ENHANCEMENT]`, `[BUGFIX]`. If a changelog entry is not needed, please add the `changelog-not-needed` label to the PR.
- [ ] [`about-versioning.md`](https://github.com/grafana/mimir/blob/main/docs/sources/mimir/configure/about-versioning.md) updated with experimental features.

---------

Co-authored-by: Taylor C <41653732+tacole02@users.noreply.github.com>
Not to be merged; just running tests in CI, using it as a base for vendoring and getting feedback.

Based on #12082, then updated to simplify it by removing logging we no longer need, to consider histogram timestamps as well, and to make the improvement considered there: the new flag is a duration config instead of a bool.