
Add HA tracker sample failover timeout #12331

Merged

zenador merged 10 commits into main from zenador/ha-tracker-sample-failover-timeout on Oct 7, 2025

Conversation

@zenador (Contributor) commented Aug 7, 2025

What this PR does

Add an HA tracker sample failover timeout. It defaults to 0, which changes nothing; when set, it adds an additional failover timeout check based on the earliest sample time in the batch. This is only meant to address counter resets that appear when there is no actual counter reset, and is not meant to be set otherwise.

Based on #12178, which was tested on the customer's cell with their permission: no fake counter resets occurred while the patch was applied, so we'd like to try to get this into main now.


Overview: We have user reports of false counter resets, where a counter appears to decrement even though it isn't actually resetting to 0. This affects users who use the HA tracker for deduplication, whose replicas restart and elect a new leader fairly often, and whose samples are also fairly delayed when they send writes. For example, the leader replica A sends a batch of samples at midnight, and some of those samples also have a timestamp around midnight. If replica A stays healthy and remains the leader, the samples should continue to be spaced out by the scrape interval, e.g. 15s. However, if replica A restarts right after midnight, we will try to fail over to replica B. If the failover timeout is 1 minute, then we fail over at 12:01am. If there is no sample delay, the next samples received would have a timestamp of 12:01am, and the sample interval here would be about a minute, which is okay.

But with this user's regularly high sample delay, let's say samples are about a minute old. So when we fail over at 12:01am, we might receive new samples from replica B with timestamps around 12:00:01am, and we end up with a sample interval of a second: 2 samples from different replicas very close together in time. These are counters, so within a single replica they are consistently non-decreasing (assuming no resets). But different replicas may have slightly different values at the same point in time (depending on when they scrape their sources, etc.), so when this switch happens, the counter might go down a bit, causing a fake counter reset that shouldn't be there.

To fix this, we add an additional failover timeout check based on the earliest sample time instead, controlled by a per-tenant feature toggle. In the above scenario, if the earliest sample time in the batch from replica B is 12:00:01am, we would not fail over at 12:01am; we would only fail over once the earliest sample time is at least 12:01am, so the interval between samples would not be far lower than the scrape interval and cause these fake counter resets. The tradeoff is that failover takes longer and we miss more samples in the meantime, but since this is a toggle, users facing this problem can opt in to avoid many fake counter resets, which may be more problematic.

More detail regarding how the counter can go down during failovers:
It can sometimes be due to differences in scrape duration (I think we get the value when the scrape begins but attach the timestamp when the scrape ends), or perhaps some minor clock sync issues are involved, though I think it can theoretically happen even with perfectly synced clocks. See this simplified scenario:

    1. replica A scrapes at 12:00:00.000am and gets 100, takes 20ms, recorded timestamp 12:00:00.020am
    2. replica B scrapes at 12:00:00.700am and gets 102, takes 20ms, recorded timestamp 12:00:00.720am
    3. replica A scrapes at 12:30:00.000am and gets 210, takes 1s, recorded timestamp 12:30:01.000am
    4. replica B scrapes at 12:30:00.700am and gets 212, takes 20ms, recorded timestamp 12:30:00.720am

So let's say replica B is the leader and the last ingested sample was (4). Replica B then goes offline, and when we try to fail over to replica A, we receive a batch from A at 12:30:31am, outside the regular failover timeout of 30s, so we accept the failover. But because there is a lot of sample delay, the batch includes the sample from (3), so we've ingested (4) and (3) together: the counter goes down slightly over a very small time interval, causing a fake reset.

Which issue(s) this PR fixes or relates to

N/A

Checklist

  • Tests updated.
  • Documentation added.
  • CHANGELOG.md updated - the order of entries should be [CHANGE], [FEATURE], [ENHANCEMENT], [BUGFIX]. If changelog entry is not needed, please add the changelog-not-needed label to the PR.
  • about-versioning.md updated with experimental features.

@zenador zenador requested review from a team and tacole02 as code owners August 7, 2025 22:53
@zenador zenador requested a review from NickAnge August 7, 2025 22:54
@zenador (Contributor, Author) commented Aug 7, 2025

Tagging @NickAnge since you have some context on this already.

@github-actions Bot commented Aug 7, 2025

💻 Deploy preview deleted.


@tacole02 tacole02 left a comment


Docs look great! Thanks, @zenador !

@NickAnge (Contributor) commented

Hey @zenador. I have added a couple of comments. Let me know what you think. I would also add a detailed PR description explaining why this is needed and when this edge case can happen. Thank you

@zenador zenador force-pushed the zenador/ha-tracker-sample-failover-timeout branch from dd24d63 to f8309e6 Compare October 1, 2025 13:40

@NickAnge NickAnge left a comment


LGTM. Thank you for your patience :)

@zenador zenador merged commit 35982a5 into main Oct 7, 2025
39 checks passed
@zenador zenador deleted the zenador/ha-tracker-sample-failover-timeout branch October 7, 2025 19:22

3 participants