Conversation

@mimir-github-bot mimir-github-bot bot commented Nov 24, 2025

Merge from prometheus/prometheus

This PR was automatically created by the merge-upstream-prometheus workflow.

Details:

This PR merges the latest changes from the upstream prometheus/prometheus main branch.

Changes:


Note

Adopts Remote Write v2 start_timestamp (removing series created_timestamp), hardens histogram handling to return errors (no panics) with callers updated, tweaks feature flags/docs, and adds a UI graph option to start Y‑axis at 0.

  • Protocol/API:
    • Remote Write v2: add Sample.start_timestamp and Histogram.start_timestamp; remove TimeSeries.created_timestamp (field reserved). Regenerated code, marshal/unmarshal, and size logic updated.
    • Storage errors: rename message to “start timestamp out of order”.
  • Ingestion/Remote write:
    • Use per-sample/histogram start_timestamp when ingesting ST-zero samples; handle OOO ST as before.
    • Skip incoming __type__/__unit__ duplicates when adding type/unit labels from metadata (tests added).
    • Remote-read iterator prevalidates and caches histograms; reduces high-resolution schemas with error handling; clear errors for unknown schema or span/bucket mismatches (tests added).
  • Histograms (model/tsdb):
    • ReduceResolution now returns error; new mustReduceResolution helper; callers updated across scrape, remote appender, tsdb chunk decoders, WAL decoder, etc.
    • Iterators hardened against invalid spans/bucket counts to avoid panics; additional tests for missing/spurious buckets.
  • CLI/Docs:
    • --enable-feature=native-histograms becomes a no-op and removed from valid options; docs and migration notes updated accordingly.
    • Parser: relax protobuf unit suffix validation; typo fixes.
  • UI:
    • Graph: new setting to start Y axis at 0; plumbed via URL (y_axis_min) and chart scales.
  • Build/Tooling:
    • Dependabot: group github.com/aws/* and github.com/Azure/* Go modules.
    • Makefile: style/license checks use git ls-files (avoid vendor).
    • Repo sync script: structured per-repo logging helpers.
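The Remote Write v2 field moves summarized above can be pictured with a small proto sketch. This is illustrative only: field numbers and the surrounding fields are assumptions, not the actual upstream assignments in prompb/io/prometheus/write/v2.

```protobuf
// Illustrative sketch of the described changes, not the real upstream file.
message Sample {
  double value           = 1;
  int64  timestamp       = 2;
  // New: per-sample start timestamp.
  int64  start_timestamp = 3;
}

message Histogram {
  // ... existing histogram fields ...
  // New: per-histogram start timestamp.
  int64 start_timestamp = 16;
}

message TimeSeries {
  repeated uint32 labels_refs = 1;
  repeated Sample samples     = 2;
  // created_timestamp was removed; its field number is reserved so it
  // cannot be re-used with a different meaning.
  reserved 6;
}
```

Moving the start timestamp onto each sample and histogram lets it vary within a series, which the single per-series created_timestamp could not express.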

Written by Cursor Bugbot for commit e7e27ae. This will update automatically on new commits.

beorn7 and others added 17 commits November 16, 2025 23:22
Signed-off-by: Laurent Dufresne <[email protected]>
…label feature is on (#17546)

* drop extra label from receiver

Signed-off-by: pipiland2612 <[email protected]>

* used constant

Signed-off-by: pipiland2612 <[email protected]>

---------

Signed-off-by: pipiland2612 <[email protected]>
Improve the repo sync logging output and add some additional logging.
This should help with debugging some failed updates.

Signed-off-by: SuperQ <[email protected]>
… downstream projects (#17516)

Methods added:
- `SampleOffset(metric *labels.Labels) float64` to calculate the sample offset for a given label set.
- `AddRatioSampleWithOffset(ratioLimit, sampleOffset float64) bool` to find out whether a given sample offset falls within a given ratio limit.

The already existing method `AddRatioSample(ratioLimit float64, sample *Sample) bool` is now implemented as a simple combination of the two other methods. Exposing these methods helps downstream projects re-use the implementations, including easier testing.
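A minimal sketch of how that decomposition might look. The `Labels`/`Sample` types and the FNV-based hashing are stand-ins for the real Prometheus types and hash, and the negative-ratio handling is an assumption based on PromQL's `limit_ratio` behavior, not the upstream code.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"math"
)

// Labels and Sample are simplified stand-ins for the real
// labels.Labels and promql Sample types.
type Labels struct{ s string }
type Sample struct{ Metric *Labels }

// SampleOffset deterministically maps a label set into [0, 1].
func SampleOffset(metric *Labels) float64 {
	h := fnv.New64a()
	h.Write([]byte(metric.s))
	return float64(h.Sum64()) / float64(math.MaxUint64)
}

// AddRatioSampleWithOffset reports whether a given sample offset falls
// within the ratio limit; a negative limit selects the complementary
// share from the top of the range.
func AddRatioSampleWithOffset(ratioLimit, sampleOffset float64) bool {
	if ratioLimit >= 0 {
		return sampleOffset < ratioLimit
	}
	return sampleOffset >= 1+ratioLimit
}

// AddRatioSample is now just the composition of the two methods above.
func AddRatioSample(ratioLimit float64, sample *Sample) bool {
	return AddRatioSampleWithOffset(ratioLimit, SampleOffset(sample.Metric))
}

func main() {
	s := &Sample{Metric: &Labels{`up{job="demo"}`}}
	fmt.Printf("offset=%.3f keep at 50%%: %v\n", SampleOffset(s.Metric), AddRatioSample(0.5, s))
}
```

Because the offset depends only on the label set, a given series is either always kept or always dropped for a fixed ratio, which is what makes the split usable across scrapes.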

Signed-off-by: Andrew Hall <[email protected]>
cmd: Make feature flag `native-histograms` a no-op.
Currently, iterating over histogram buckets can panic if the spans are
not consistent with the buckets. We aim to validate histograms upon
ingestion, but there might still be data corruption on disk that
could trigger the panic. While data corruption on disk is really bad
and will lead to all kinds of weirdness, we should still avoid
panicking.

Note, though, that chunks are secured by checksums, so the corruptions
won't realistically happen because of disk faults, but more likely
because a chunk was generated in a faulty way in the first place, by
a software bug or even maliciously.

This commit prevents panics in the situation where there are fewer
buckets than described by the spans. Note that the missing buckets
will simply not be iterated over. There is no signalling of this
problem. We might still consider this separately, but for now, I would
say that this kind of corruption is exceedingly rare and doesn't
deserve special treatment (which would add a whole lot of complexity
to the code).

Signed-off-by: beorn7 <[email protected]>
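The guard described in this commit message could look roughly like this. It is a standalone sketch with simplified types and a hypothetical function name, not the actual model/histogram iterator code.

```go
package main

import "fmt"

// Span is a simplified stand-in for model/histogram's bucket span:
// Length says how many consecutive buckets the span describes.
type Span struct {
	Offset int32
	Length uint32
}

// safeBucketValues walks the buckets as laid out by the spans, but
// stops quietly when the bucket slice is shorter than the spans claim,
// instead of panicking on an out-of-range index. Missing buckets are
// simply not iterated over; there is no signalling of the problem.
func safeBucketValues(spans []Span, buckets []float64) []float64 {
	out := make([]float64, 0, len(buckets))
	i := 0
	for _, s := range spans {
		for j := uint32(0); j < s.Length; j++ {
			if i >= len(buckets) {
				return out // fewer buckets than described: stop here
			}
			out = append(out, buckets[i])
			i++
		}
	}
	return out
}

func main() {
	// The span claims 3 buckets but only 2 exist.
	fmt.Println(safeBucketValues([]Span{{Offset: 0, Length: 3}}, []float64{1, 2})) // prints [1 2]
}
```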
…nd style check (#17557)

Also improve the find fallback to use -prune for better performance.

Signed-off-by: Julien Pivotto <[email protected]>
model/histogram: Make histogram bucket iterators more robust
To reduce main UI clutter, I added a new settings submenu above the chart
itself for the new setting. So far it only has the one new axis setting, but it
could accommodate further settings in the future.

For now I'm only adding a boolean on/off setting to the UI to set the Y axis to
0 or not. However, the underlying stored URL field is already named
y_axis_min={number} and would support other Y axis minima, in case we want to
support custom values in the UI in the future - but then we'd probably also
want to add an axis maximum and possibly other settings.

Fixes prometheus/prometheus#520

Signed-off-by: Julius Volz <[email protected]>
Reduce the number of dependabot PRs for related updates.

Signed-off-by: SuperQ <[email protected]>
…n (#17561)

ReduceResolution is currently called before validation during
ingestion. This will cause a panic if there are not enough buckets in
the histogram. If there are too many buckets, the spurious buckets are
ignored, and therefore the error in the input histogram is masked.

Furthermore, invalid negative offsets might cause problems, too.

Therefore, we need to do some minimal validation in reduceResolution.
Fortunately, it is easy and shouldn't slow things down. Sadly, it
requires returning errors, which triggers a bunch of code changes.
Even here there is a bright side: we can get rid of a few panics.
(Remember: Don't panic!)

In different news, we haven't done a full validation of histograms
read via remote-read. This is not so much a security concern (as you
can throw off Prometheus easily by feeding it bogus data via
remote-read) but more that remote-read sources might be makeshift and
could accidentally create invalid histograms. We really don't want to
panic in that case. So this commit not only adds a check of the
spans and buckets as needed for resolution reduction but also a full
validation during remote-read.

Signed-off-by: beorn7 <[email protected]>
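The minimal check might be sketched like this. The types are simplified, the function name is hypothetical, and the negative-offset rule for spans after the first is my reading of the native-histogram span format rather than a quote of the upstream code.

```go
package main

import "fmt"

// Span is a simplified stand-in for model/histogram's bucket span.
type Span struct {
	Offset int32
	Length uint32
}

// checkSpansAndBuckets does the minimal validation described above:
// the spans must describe exactly numBuckets buckets, and span offsets
// after the first must not be negative. Returning an error here
// replaces a panic (or silently masked corruption) further down.
func checkSpansAndBuckets(spans []Span, numBuckets int) error {
	var described uint64
	for i, s := range spans {
		if i > 0 && s.Offset < 0 {
			return fmt.Errorf("span %d has negative offset %d", i, s.Offset)
		}
		described += uint64(s.Length)
	}
	if described != uint64(numBuckets) {
		return fmt.Errorf("spans describe %d buckets, but %d were provided", described, numBuckets)
	}
	return nil
}

func main() {
	fmt.Println(checkSpansAndBuckets([]Span{{Offset: 0, Length: 3}}, 2))
	// prints: spans describe 3 buckets, but 2 were provided
}
```

The check is a single pass over the spans, which is why the commit message can claim it shouldn't slow ingestion down.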
* Add a nav title to fix docs website generator.
* Make it more clear that "Prometheus Agent" is a mode, not a separate
  service.
* Add to index.
* Clean up some wording.
* Add a downsides section.

Signed-off-by: SuperQ <[email protected]>
@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution.
4 out of 10 committers have signed the CLA.

✅ beorn7
✅ ldufr
✅ tcp13equals2
✅ roidelapluie
❌ bwplotka
❌ SuperQ
❌ juliusv
❌ mimir-github-bot[bot]
❌ pipiland2612
❌ verdie-g


@mimir-vendoring mimir-vendoring bot left a comment


I'm approving this upstream merge PR.
This PR merges changes from prometheus/prometheus upstream repository.
Related GitHub action is defined here.

@mimir-vendoring mimir-vendoring bot enabled auto-merge November 24, 2025 02:39
@mimir-vendoring mimir-vendoring bot merged commit 6b991b2 into main Nov 24, 2025
31 of 32 checks passed
@mimir-vendoring mimir-vendoring bot deleted the bot/main/merge-upstream-main-202511240238 branch November 24, 2025 03:01


10 participants