Conversation

Contributor

@mimir-github-bot mimir-github-bot bot commented Dec 8, 2025

Merge Conflicts Detected

This PR was automatically created by the merge-upstream-prometheus workflow due to merge conflicts.

For reviewers: After conflicts are resolved, the author should post a comment with the output of git show --remerge-diff. This shows how merge conflicts were manually resolved compared to what Git would have done automatically, making conflict resolution transparent and allowing validation that conflicts were resolved correctly.

Details

Action Required

This closed PR serves as a placeholder that holds the branch and instructions for conflict resolution. Follow these steps to resolve conflicts and reopen this PR:

# 1. Fetch and check out the empty branch created by CI
git fetch origin
git checkout bot/main/merge-upstream-main-202512080237

# 2. Fetch and merge the upstream commit to trigger conflicts
git remote add upstream https://github.com/prometheus/prometheus.git # Omit this step if you already have a remote configured for prometheus/prometheus.
git fetch upstream
git merge 3239723098143242b6ab5419e88e2e9ff75ba14e --no-edit

# 3. If conflicts occur:
#    - Edit conflicted files and resolve conflicts
#    - Look for conflict markers: <<<<<<< HEAD, =======, >>>>>>>
#    - Remove conflict markers after resolving
#    - Run: git add . && git merge --continue

# 4. Push your resolved merge (no force-push needed)
git push

# 5. Re-open the closed PR (from the GitHub UI, or with the gh CLI)
gh pr reopen <pr-number>

# 6. Post the remerge diff as a comment for reviewers:
gh pr comment --body "## Merge Conflict Resolution

You can review how conflicts were resolved using:

\`\`\`bash
git show --remerge-diff
\`\`\`

<details>
<summary>Click to expand remerge diff output</summary>

\`\`\`diff
$(git show --remerge-diff)
\`\`\`

</details>"

Note

Upgrades to Prometheus v3.8.0 and adds TSDB delayed-compaction via upload-tracking, unit-test start timestamps, Azure AD custom OAuth scope, PromQL/UI matching and histogram fixes, plus CI/lint/deps and UI package bumps.

  • Release/Versioning:
    • Bump to VERSION 3.8.0 and update CHANGELOG.md.
  • TSDB/Compaction:
    • New flag --storage.tsdb.delay-compact-file.path to delay compactions for blocks not yet uploaded; wires BlockCompactionExcludeFunc through TSDB and compactor; adds tests and docs.
    • Make DB.Close() idempotent; test helpers refactored.
  • Promtool Unit Tests:
    • Support start_timestamp (RFC3339 or unix) in rule tests; propagate through loader; add tests and docs.
  • PromQL/Parser & UI:
    • Preserve empty ignoring() when grouping is present; printer/serializer/formatter updated with tests.
    • Fix histogram_fraction interpolation for infinite buckets; expand test coverage.
  • Remote Write (Azure AD):
    • Add optional scope to AzureAD config with validation and usage; tests and docs added.
  • Infra/Tooling:
    • CI: bump prometheus/promci to v0.5.3.
    • Lint: bump golangci-lint to v2.6.2; enable modernize (omit omitzero).
    • Deps: github.com/prometheus/common → v0.67.4; assorted minor code modernizations.
  • UI Packages:
    • Bump web UI modules to 0.308.0; align internal dependencies.
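
As a rough sketch of the promtool unit-test option mentioned above, a test file might set a start timestamp like this (field placement, file names, and series are illustrative, based only on the summary above):

```yaml
# Hypothetical promtool test file; rule file and series are illustrative.
rule_files:
  - alerts.yml

tests:
  - interval: 1m
    # New in v3.8.0: start the test at a realistic timestamp
    # (RFC3339 or unix seconds) instead of the Unix epoch.
    start_timestamp: "2025-12-08T00:00:00Z"
    input_series:
      - series: up{job="node"}
        values: "1x10"
    promql_expr_test:
      - expr: up{job="node"}
        eval_time: 5m
        exp_samples:
          - labels: up{job="node"}
            value: 1
```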

Written by Cursor Bugbot for commit 580ba23.

MohammadAlavi1986 and others added 30 commits November 13, 2025 11:17
Signed-off-by: Jan Fajerski <[email protected]>
…laky (#17534) (#17543)

(cherry picked from commit 35c3232)

Signed-off-by: machine424 <[email protected]>
Signed-off-by: Jan Fajerski <[email protected]>
Co-authored-by: Ayoub Mrini <[email protected]>
chore(deps): bump prometheus/promci from 0.4.7 to 0.5.0
chore(deps): bump prometheus/promci from 0.5.0 to 0.5.1
chore(deps): bump prometheus/promci from 0.5.1 to 0.5.2
chore(deps): bump prometheus/promci from 0.5.2 to 0.5.3
…timestamp) (#17411)

Relates to
prometheus/prometheus#16944 (comment)

Signed-off-by: bwplotka <[email protected]>
(cherry picked from commit cefefc6)
change(prw2): Cherry-pick of RW2 bump to 2.0-rc.4 spec for 3.8.0 release; added changelog for 3.8.0-rc.1
[chore]: bump common dep to support RFC7523 3.1
* Add a nav title to fix docs website generator.
* Make it clearer that "Prometheus Agent" is a mode, not a separate
  service.
* Add to index.
* Cleanup some wording.
* Add a downsides section.

Signed-off-by: SuperQ <[email protected]>
(cherry picked from commit d0d2699)
Signed-off-by: Jan Fajerski <[email protected]>
Signed-off-by: Jan Fajerski <[email protected]>
…d in an external JSON file (#17435)

* Delay compactions until Thanos uploads all blocks

Using the Thanos sidecar with Prometheus requires us to disable TSDB compactions on the Prometheus side by setting --storage.tsdb.min-block-duration and --storage.tsdb.max-block-duration to the same value. See https://thanos.io/tip/components/sidecar.md. The main problem this avoids is that Prometheus might compact a given block before Thanos uploads it, creating a gap in Thanos metrics. Thanos does not upload compacted blocks because that would upload the same sample multiple times. You can tell Thanos to upload compacted blocks, but that option is aimed at one-time migrations. This patch creates a bridge between Thanos and Prometheus: Prometheus reads the shipper file that Thanos maintains to track which blocks have already been uploaded, and uses that data to delay compaction of blocks until Thanos marks them as uploaded. Thanks to this, both services can coordinate with each other (in a way) and we can stop disabling compaction on the Prometheus side when Thanos uploads are enabled.

The reason to have this is that disabling compactions has a very dramatic performance cost. Since most time series exist for longer than a single block duration (2h by default), large chunks of block index will reference the same series, so 10 * 2h blocks will each have an index that is usually fairly big and is almost the same across all 10 blocks. Compaction de-duplicates the index, so merging 10 blocks together leaves us with a single index that is around the same size as each of the 10 individual 2h-block indexes (plus some extra for series that only exist in some blocks, but not all). Every range query that iterates over all 10 blocks would otherwise have to read each index, doing 10x more work than with a single compacted block.

Signed-off-by: Lukasz Mierzwa <[email protected]>

* Rename structs and functions to make this more generic

Signed-off-by: Lukasz Mierzwa <[email protected]>

* Address review comments

Signed-off-by: Lukasz Mierzwa <[email protected]>

* Cache UploadMeta for 1 minute

Signed-off-by: Lukasz Mierzwa <[email protected]>

---------

Signed-off-by: Lukasz Mierzwa <[email protected]>
 Conflicts:
	storage/remote/write_handler.go
	storage/remote/write_handler_test.go
            Pick `main`

Signed-off-by: Jan Fajerski <[email protected]>
chore: Fix function name typo in createBatchSpan comment
The return value of functions relating to the current time, e.g. time(),
is set by promtool to start at timestamp 0 at the start of a test's
evaluation.

This has the very nice consequence that tests can run reliably without
depending on when they are run.

It does, however, mean that tests may produce results that users find
unexpected.

If this behaviour is documented, then users will be empowered to write
tests for their rules that use time-dependent functions.

(Closes: prometheus/docs#1464)

Signed-off-by: Gabriel Filion <[email protected]>
bwplotka and others added 8 commits December 3, 2025 07:55
For tests only, we had various ways of opening a DB. Reduced to one
instead of:

* Open
* newTestDB
* newTestDBOpts
* openTestDB

This makes prometheus/prometheus#17629 smaller
and a bit easier. It also helps test maintainability and consistency.

Signed-off-by: bwplotka <[email protected]>
This commit adds support for configuring a custom start timestamp
for Prometheus unit tests, allowing tests to use realistic timestamps
instead of starting at Unix epoch 0.

Signed-off-by: Julien Pivotto <[email protected]>
Currently both the backend and frontend printers/formatters/serializers
incorrectly transform the following expression:

```
up * ignoring() group_left(__name__) node_boot_time_seconds
```

...into:

```
up * node_boot_time_seconds
```

...which yields a different result (including the metric name in the result
vs. no metric name).

We need to keep empty `ignoring()` modifiers if there is a grouping modifier
present.

Signed-off-by: Julius Volz <[email protected]>
…ation

Fix serialization for empty `ignoring()` in combination with `group_x()`
Remove redundant IsZero check since promqltest.LazyLoader already
handles zero StartTime by defaulting to Unix epoch.

Signed-off-by: Julien Pivotto <[email protected]>
* add modernize check

Signed-off-by: dongjiang1989 <[email protected]>

* fix golangci lint

Signed-off-by: dongjiang1989 <[email protected]>

---------

Signed-off-by: dongjiang1989 <[email protected]>
@github-actions github-actions bot closed this Dec 8, 2025
@github-actions github-actions bot force-pushed the bot/main/merge-upstream-main-202512080237 branch from d8cd7ba to e50e747 Compare December 8, 2025 02:38
@zenador zenador reopened this Dec 8, 2025

CLAassistant commented Dec 8, 2025

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution.
4 out of 12 committers have signed the CLA.

✅ JorTurFer
✅ roidelapluie
✅ dongjiang1989
✅ zenador
❌ MohammadAlavi1986
❌ jan--f
❌ bwplotka
❌ zjumathcode
❌ prymitive
❌ Tigger2014
❌ juliusv
❌ SuperQ
You have signed the CLA already but the status is still pending? Let us recheck it.

@zenador zenador force-pushed the bot/main/merge-upstream-main-202512080237 branch from 38efd93 to a637089 Compare December 8, 2025 11:44
Contributor

zenador commented Dec 8, 2025

Merge Conflict Resolution

You can review how conflicts were resolved using:

git show --remerge-diff
Click to expand remerge diff output
commit a637089a8952afe96adddbbfca2c530acd4fbef7
Merge: e50e74795 323972309
Author: Jeanette Tan <[email protected]>
Date:   Mon Dec 8 19:26:23 2025 +0800

    Merge commit '3239723098143242b6ab5419e88e2e9ff75ba14e' into bot/main/merge-upstream-main-202512080237

diff --git a/tsdb/compact.go b/tsdb/compact.go
remerge CONFLICT (content): Merge conflict in tsdb/compact.go
index 3eccc9cc2..8d30f9d1f 100644
--- a/tsdb/compact.go
+++ b/tsdb/compact.go
@@ -246,11 +246,8 @@ func NewLeveledCompactorWithOptions(ctx context.Context, r prometheus.Registerer
 		postingsEncoder:             pe,
 		postingsDecoderFactory:      opts.PD,
 		enableOverlappingCompaction: opts.EnableOverlappingCompaction,
-<<<<<<< e50e74795 (Merge pull request #1044 from grafana/bot/main/merge-upstream-main-202512010249)
 		concurrencyOpts:             DefaultLeveledCompactorConcurrencyOptions(),
-=======
 		blockExcludeFunc:            opts.BlockExcludeFilter,
->>>>>>> 323972309 (Update golangci-lint and add modernize check (#17640))
 	}, nil
 }
 
diff --git a/tsdb/db.go b/tsdb/db.go
remerge CONFLICT (content): Merge conflict in tsdb/db.go
index e69387908..7292943f3 100644
--- a/tsdb/db.go
+++ b/tsdb/db.go
@@ -318,7 +318,6 @@ type Options struct {
 	// UseUncachedIO allows bypassing the page cache when appropriate.
 	UseUncachedIO bool
 
-<<<<<<< e50e74795 (Merge pull request #1044 from grafana/bot/main/merge-upstream-main-202512010249)
 	// IndexLookupPlannerFunc is a function to return index.LookupPlanner from a BlockReader.
 	// Similar to BlockChunkQuerierFunc, this allows per-block planner creation.
 	// For on-disk blocks, IndexLookupPlannerFunc is invoked once when they are opened.
@@ -326,11 +325,10 @@ type Options struct {
 	IndexLookupPlannerFunc IndexLookupPlannerFunc
 
 	PostingsClonerFactory PostingsClonerFactory
-=======
+
 	// BlockCompactionExcludeFunc is a function which returns true for blocks that should NOT be compacted.
 	// It's passed down to the TSDB compactor.
 	BlockCompactionExcludeFunc BlockExcludeFilterFunc
->>>>>>> 323972309 (Update golangci-lint and add modernize check (#17640))
 }
 
 type NewCompactorFunc func(ctx context.Context, r prometheus.Registerer, l *slog.Logger, ranges []int64, pool chunkenc.Pool, opts *Options) (Compactor, error)
diff --git a/tsdb/db_test.go b/tsdb/db_test.go
remerge CONFLICT (content): Merge conflict in tsdb/db_test.go
index 7b40affd7..68a0229a8 100644
--- a/tsdb/db_test.go
+++ b/tsdb/db_test.go
@@ -136,7 +136,6 @@ func newTestDB(t testing.TB, opts ...testDBOpt) (db *DB) {
 	return db
 }
 
-<<<<<<< e50e74795 (Merge pull request #1044 from grafana/bot/main/merge-upstream-main-202512010249)
 // queryHead is a helper to query the head for a given time range and labelset.
 func queryHead(t testing.TB, head *Head, mint, maxt int64, label labels.Label) (map[string][]chunks.Sample, error) {
 	q, err := NewBlockQuerier(head, mint, maxt)
@@ -144,7 +143,8 @@ func queryHead(t testing.TB, head *Head, mint, maxt int64, label labels.Label) (
 		return nil, err
 	}
 	return query(t, q, labels.MustNewMatcher(labels.MatchEqual, label.Name, label.Value)), nil
-=======
+}
+
 func TestDBClose_AfterClose(t *testing.T) {
 	db := newTestDB(t)
 	require.NoError(t, db.Close())
@@ -154,7 +154,6 @@ func TestDBClose_AfterClose(t *testing.T) {
 	db = newTestDB(t)
 	require.NoError(t, db.Close())
 	require.NoError(t, db.Close())
->>>>>>> 323972309 (Update golangci-lint and add modernize check (#17640))
 }
 
 // query runs a matcher query against the querier and fully expands its data.
@@ -3131,28 +3130,16 @@ func TestCompactHead(t *testing.T) {
 	t.Parallel()
 
 	// Open a DB and append data to the WAL.
-<<<<<<< e50e74795 (Merge pull request #1044 from grafana/bot/main/merge-upstream-main-202512010249)
-	tsdbCfg := DefaultOptions()
-	tsdbCfg.RetentionDuration = int64(time.Hour * 24 * 15 / time.Millisecond)
-	tsdbCfg.NoLockfile = true
-	tsdbCfg.MinBlockDuration = int64(time.Hour * 2 / time.Millisecond)
-	tsdbCfg.MaxBlockDuration = int64(time.Hour * 2 / time.Millisecond)
-	tsdbCfg.WALCompression = compression.Snappy
-	tsdbCfg.HeadPostingsForMatchersCacheMetrics = NewPostingsForMatchersCacheMetrics(nil)
-	tsdbCfg.BlockPostingsForMatchersCacheMetrics = NewPostingsForMatchersCacheMetrics(nil)
-
-	db, err := Open(dbDir, promslog.NewNopLogger(), prometheus.NewRegistry(), tsdbCfg, nil)
-	require.NoError(t, err)
-=======
-	opts := &Options{
-		RetentionDuration: int64(time.Hour * 24 * 15 / time.Millisecond),
-		NoLockfile:        true,
-		MinBlockDuration:  int64(time.Hour * 2 / time.Millisecond),
-		MaxBlockDuration:  int64(time.Hour * 2 / time.Millisecond),
-		WALCompression:    compression.Snappy,
-	}
+	opts := DefaultOptions()
+	opts.RetentionDuration = int64(time.Hour * 24 * 15 / time.Millisecond)
+	opts.NoLockfile = true
+	opts.MinBlockDuration = int64(time.Hour * 2 / time.Millisecond)
+	opts.MaxBlockDuration = int64(time.Hour * 2 / time.Millisecond)
+	opts.WALCompression = compression.Snappy
+	opts.HeadPostingsForMatchersCacheMetrics = NewPostingsForMatchersCacheMetrics(nil)
+	opts.BlockPostingsForMatchersCacheMetrics = NewPostingsForMatchersCacheMetrics(nil)
+
 	db := newTestDB(t, withOpts(opts))
->>>>>>> 323972309 (Update golangci-lint and add modernize check (#17640))
 	ctx := context.Background()
 	app := db.Appender(ctx)
 	var expSamples []sample
@@ -9097,10 +9084,7 @@ func TestNewCompactorFunc(t *testing.T) {
 
 func TestCompactHeadWithoutTruncation(t *testing.T) {
 	setupDB := func() *DB {
-		db := openTestDB(t, nil, nil)
-		t.Cleanup(func() {
-			require.NoError(t, db.Close())
-		})
+		db := newTestDB(t)
 		db.DisableCompactions()
 
 		// Add samples to the head.
@@ -9390,11 +9374,8 @@ func TestBiggerBlocksForOldOOOData(t *testing.T) {
 	opts := DefaultOptions()
 	opts.OutOfOrderTimeWindow = 10 * day
 	opts.EnableBiggerOOOBlockForOldSamples = true
-	db := openTestDB(t, opts, nil)
+	db := newTestDB(t, withOpts(opts))
 	db.DisableCompactions()
-	t.Cleanup(func() {
-		require.NoError(t, db.Close())
-	})
 
 	// 1 in-order sample.
 	app := db.Appender(ctx)
@@ -9420,10 +9401,7 @@ func TestBiggerBlocksForOldOOOData(t *testing.T) {
 	// Check that blocks are alright.
 	// Move all the blocks to a new DB and check for all OOO samples
 	// getting into the new DB and the old DB only has the in-order sample.
-	newDB := openTestDB(t, opts, nil)
-	t.Cleanup(func() {
-		require.NoError(t, newDB.Close())
-	})
+	newDB := newTestDB(t, withOpts(opts))
 	for _, b := range db.Blocks() {
 		err := os.Rename(b.Dir(), path.Join(newDB.Dir(), b.Meta().ULID.String()))
 		require.NoError(t, err)

@zenador zenador marked this pull request as ready for review December 8, 2025 13:11
}
if c.blockExcludeFunc != nil && c.blockExcludeFunc(meta) {
	break
}
Bug: Block exclude filter uses break instead of continue

The blockExcludeFunc check in the Plan function uses break instead of continue when a block is excluded. This causes the entire loop to exit when the first excluded block is encountered, skipping all subsequent blocks even if they should be included in compaction. The intent is to exclude individual blocks from compaction planning, not to stop processing all remaining blocks.


Contributor

this is actually very valid

tsdbDelayCompactLastMeta = &uploadMeta
tsdbDelayCompactLastMetaTime = time.Now().UTC()

return !slices.Contains(uploadMeta.Uploaded, meta.ULID.String())
Bug: Global cache variables accessed without synchronization

The tsdbDelayCompactLastMeta and tsdbDelayCompactLastMetaTime global variables are read and written without any synchronization in exludeBlocksPendingUpload. The returned closure can be called concurrently from the compactor, causing data races when reading/writing these shared variables. This could lead to inconsistent cache state or memory corruption.


Contributor

@dimitarvdimitrov dimitarvdimitrov left a comment


that questionable functionality shouldn't actually break us because we're not using BlockCompactionExcludeFunc. If you're blocked on this feel free to merge, otherwise i'd be more comfortable to wait for a response from the author upstream

Contributor

zenador commented Dec 8, 2025

Thanks! I'm not blocked, but the original author has replied, are you satisfied with their answer?

@dimitarvdimitrov
Contributor

yeah i guess i am

@dimitarvdimitrov dimitarvdimitrov merged commit 843bcf8 into main Dec 8, 2025
49 of 51 checks passed
@dimitarvdimitrov dimitarvdimitrov deleted the bot/main/merge-upstream-main-202512080237 branch December 8, 2025 15:33