
Test potential fix for HA tracker failovers on r349 #12178

Closed

zenador wants to merge 6 commits into r349 from r349-ha-failover-fix

Conversation


@zenador zenador commented Jul 23, 2025

Not to be merged, just running tests in CI, using it as a base for vendoring and getting feedback.

Based on #12082, then updated to simplify it by removing logging we no longer need, to consider histogram timestamps as well, and to make the improvement discussed there: the new flag is a duration config instead of a bool.
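
For illustration only, here is roughly what the duration-instead-of-bool shape could look like as a per-tenant limit; the field name, YAML key, and package below are assumptions, not the actual change:

```go
// Hypothetical sketch only: the field name, YAML key, and package are
// illustrative, not the code merged into pkg/util/validation/limits.go.
package validation

import "time"

type Limits struct {
	// Instead of a bool toggle, a duration: the zero value keeps the current
	// behaviour (no extra check), and a non-zero value is the additional
	// failover timeout applied to the earliest sample timestamp in a request.
	HAFailoverSampleTimeout time.Duration `yaml:"ha_failover_sample_timeout"`
}
```

Using a duration lets the zero value double as "off" while also letting tenants tune how wide the sample-based window is, rather than hiding a single fixed behaviour behind a bool.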

@zenador zenador requested review from a team and tacole02 as code owners July 23, 2025 18:47
@zenador zenador marked this pull request as draft July 23, 2025 18:47
@zenador zenador changed the base branch from main to r349 July 23, 2025 18:48
@zenador zenador force-pushed the r349-ha-failover-fix branch from 40ffde4 to b290354 on July 23, 2025 19:10
@zenador zenador added the changelog-not-needed PRs that don't need a CHANGELOG.md entry label Jul 23, 2025

4 comment threads on docs/sources/mimir/configure/configuration-parameters/index.md (Outdated)

zenador commented Jul 25, 2025

@tacole02 Thank you for the review, I've applied the suggestions! For future reference, it's easier if you update the help text in the .go files directly; for this one it's pkg/util/validation/limits.go. The other files with the same text are auto-generated from the Go code, so changes made there are more likely to be overwritten inadvertently.
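
For illustration (the flag name, field, and help text below are hypothetical; only the file path comes from the comment above), the string passed when the flag is registered in the Go code is what the generated configuration reference is built from:

```go
// Hypothetical sketch; the real flag name, field, and help text differ.
package validation

import (
	"flag"
	"time"
)

type Limits struct {
	HAFailoverSampleTimeout time.Duration `yaml:"ha_failover_sample_timeout"`
}

func (l *Limits) RegisterFlags(f *flag.FlagSet) {
	// The last argument is the help text: the reference pages under
	// docs/sources/mimir/configure/configuration-parameters/ are generated
	// from strings like this, so edits made only to the generated files are
	// overwritten on the next regeneration.
	f.DurationVar(&l.HAFailoverSampleTimeout,
		"distributor.ha-tracker.failover-sample-timeout", 0,
		"Additional HA failover timeout applied to the earliest sample timestamp in a write request. 0 to disable.")
}
```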

@tacole02

That's really good to know! Thanks, @zenador!

zenador added a commit that referenced this pull request Oct 7, 2025
#### What this PR does

Add an HA tracker sample failover timeout. It defaults to 0, which changes nothing, but when set it adds an additional failover timeout check based on the earliest sample time in the batch. This is meant to address counter resets that appear when there is no actual counter reset, and is not meant to be set otherwise.

Based on #12178, which was tested on the customer's cell with their permission; no fake counter resets occurred while the patch was applied, so we'd like to try to get this into main now.

---

Overview: We have user reports of false counter resets, where a counter appears to decrease even though it never actually reset to 0. This happens for users who rely on the HA tracker for deduplication, whose replicas restart and elect a new leader fairly often, and whose samples are fairly delayed by the time writes are sent. For example, the leader replica A sends a batch of samples at midnight, and some of those samples also have a timestamp around midnight. If replica A stays healthy and remains the leader, the samples continue to be spaced out by the scrape interval, e.g. 15s. However, if replica A restarts right after midnight, we try to fail over to replica B. If the failover timeout is 1 minute, we fail over at 12:01am. If there is no sample delay, the next samples received would have a timestamp of about 12:01am, and the sample interval here would be about a minute, which is okay.

But with this user's consistently high sample delay, let's say the samples are about a minute old. When the failover happens at 12:01am, we might receive new samples from replica B with timestamps around 12:00:01am, and we end up with a sample interval of about a second: two samples from different replicas very close together in time. These are counters, so within a single replica they are strictly increasing or staying the same (assuming no resets). But different replicas may have slightly different values at the same point in time (depending on when they scrape their sources, etc.), so when this switch happens, the counter might go down a bit, causing a fake counter reset that shouldn't be there.

To fix this, we can add an additional timeout check for the failover based on the earliest sample time instead, controlled by a per-tenant feature toggle. In the scenario above, if the earliest sample time in the batch from replica B is 12:00:01am, we would not fail over at 12:01am; we would only fail over once the earliest sample time is at least 12:01am, so the interval between samples would not be unreasonably small (below the scrape interval) and cause these fake counter resets. The trade-off is that failover takes longer and we miss more samples in the meantime, but this is opt-in, so users facing this problem can choose it to avoid fake counter resets, which may be more problematic.
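
Below is a minimal sketch of that additional check; all names (haEntry, shouldFailover, the field names) are illustrative assumptions, not Mimir's actual HA tracker code:

```go
// Minimal sketch of the extra failover check; names are illustrative only.
package main

import (
	"fmt"
	"time"
)

type haEntry struct {
	receivedAt            time.Time     // when the elected replica's last write was accepted
	failoverTimeout       time.Duration // existing wall-clock failover timeout
	sampleFailoverTimeout time.Duration // new optional duration; 0 disables the extra check
}

// shouldFailover reports whether a write from a non-elected replica may take over
// leadership. earliestSample is the oldest sample timestamp in the incoming batch.
func shouldFailover(e haEntry, now, earliestSample time.Time) bool {
	// Existing behaviour: only fail over once the elected replica has been
	// silent for longer than the failover timeout.
	if now.Sub(e.receivedAt) < e.failoverTimeout {
		return false
	}
	// New, opt-in behaviour: also require the batch's earliest sample to be past
	// the window, so delayed samples from the new leader cannot land right next
	// to the old leader's samples and look like a counter reset.
	if e.sampleFailoverTimeout > 0 && earliestSample.Sub(e.receivedAt) < e.sampleFailoverTimeout {
		return false
	}
	return true
}

func main() {
	e := haEntry{
		receivedAt:            time.Date(2025, 1, 1, 0, 0, 0, 0, time.UTC), // midnight: last write from replica A
		failoverTimeout:       time.Minute,
		sampleFailoverTimeout: time.Minute,
	}
	now := time.Date(2025, 1, 1, 0, 1, 0, 0, time.UTC)      // 12:01am: batch from replica B arrives
	earliest := time.Date(2025, 1, 1, 0, 0, 1, 0, time.UTC) // but its oldest sample is from 12:00:01am
	fmt.Println(shouldFailover(e, now, earliest))            // false: wait until samples catch up to 12:01am
}
```

With the new duration left at 0, the second branch never triggers, matching the "defaults to 0, changes nothing" behaviour described above.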

<details>
<summary>More detail regarding how the counter can go down during
failovers:</summary>

It can sometimes be due to differences in scrape duration (I think we get the value when the scrape begins but attach the timestamp when the scrape ends), or perhaps there are some minor clock-sync issues involved, though theoretically it can still happen even if the clocks are perfectly synced; see this simplified scenario:

1. replica A scrapes at 12:00:00.000am and gets 100, takes 20ms, recorded timestamp 12:00:00.020am
2. replica B scrapes at 12:00:00.700am and gets 102, takes 20ms, recorded timestamp 12:00:00.720am
3. replica A scrapes at 12:30:00.000am and gets 210, takes 1s, recorded timestamp 12:30:01.000am
4. replica B scrapes at 12:30:00.700am and gets 212, takes 20ms, recorded timestamp 12:30:00.720am

Now say replica B is the leader and the last ingested sample was (4). It then goes offline, and when we try to fail over to replica A, we receive a batch from A at 12:30:31am, outside the regular failover timeout of 30s, so we accept the failover. But because of the large sample delay, that batch includes sample (3), so we've ingested (4) and then (3), where the counter goes down slightly over a very small time interval, causing a fake reset.
</details>

#### Which issue(s) this PR fixes or relates to

N/A

#### Checklist

- [x] Tests updated.
- [x] Documentation added.
- [x] `CHANGELOG.md` updated - the order of entries should be
`[CHANGE]`, `[FEATURE]`, `[ENHANCEMENT]`, `[BUGFIX]`. If changelog entry
is not needed, please add the `changelog-not-needed` label to the PR.
- [ ]
[`about-versioning.md`](https://github.com/grafana/mimir/blob/main/docs/sources/mimir/configure/about-versioning.md)
updated with experimental features.

---------

Co-authored-by: Taylor C <41653732+tacole02@users.noreply.github.com>
@zenador zenador closed this Nov 29, 2025
@github-actions

💻 Deploy preview deleted (Test potential fix for HA tracker failovers on r349).
