
Conversation

@github-actions

Update pkg/testutils/release/cockroach_releases.yaml with recent values.

Epic: None
Release note: None

emnet-crl pushed a commit that referenced this pull request Jun 2, 2025
144781: roachtest: add operation to probe ranges r=noahstho a=noahstho

Since SRE uses crdb_internal.probe_ranges to check prod cluster health, we would like to add it as a roach operation, both to make the DRT clusters as realistic as possible and to test for potential issues with crdb_internal.probe_ranges, so we know ASAP if our alerting coverage drops.

**Background**
crdb_internal.probe_ranges is a virtual table that quickly probes the entire keyspace of the KV layer to return a table of schema (range_id | error | end_to_end_latency_ms). It has minimal dependencies, so it functions even when a cluster is quite broken. And since it probes the entire keyspace, it is useful when something has already gone wrong, in narrowing down an issue to specific ranges.
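
As an illustration, here is a minimal sketch of the kind of read probe the operation runs, assuming the documented `crdb_internal.probe_ranges(timeout, probe_type)` signature; the connection string, driver choice, and error handling are placeholders, not the operation's actual code:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // any pgwire driver works against CRDB
)

func main() {
	// Placeholder connection string; point it at any node in the cluster.
	db, err := sql.Open("postgres", "postgresql://root@localhost:26257/?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Probe every range with a read probe and keep only the rows that errored.
	rows, err := db.Query(
		`SELECT range_id, error FROM crdb_internal.probe_ranges(INTERVAL '1s', 'read') WHERE error != ''`)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var rangeID int64
		var rangeErr string
		if err := rows.Scan(&rangeID, &rangeErr); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("error on range %d: %s\n", rangeID, rangeErr)
	}
}
```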

**What can this roach operation catch?**
If this roach operation fails, either there is a bug in `crdb_internal.probe_ranges`, leaving SRE short a critical tool, or a serious bug is present in the KV layer and the KV team will need to know ASAP. Ideally SRE would be the first to know about an issue and can hand off to KV if necessary.

**Testing PR**
Tested that it works on a roachtest cluster with
`roachtest run-operation noahthompsoncockroachlabscom-test probe-ranges`, and also verified that it fails as expected by forcing a range error in the DB:
```
./bin/roachtest run-operation noahthompsoncockroachlabscom-test2 probe-ranges

Running operation probe-ranges on noahthompsoncockroachlabscom-test2.

2025/04/29 20:00:46 run_operation.go:145: [1] operation status: checking if operation probe-ranges/read dependencies are met
2025/04/29 20:00:47 run_operation.go:145: [1] operation status: running operation probe-ranges/read with run id 12821170976295052991
2025/04/29 20:00:47 probe_ranges.go:92: [1] operation status: executing crdb_internal.probe-ranges read statement against node 3
2025/04/29 20:00:47 probe_ranges.go:92: [1] operation status: found 1 errors while executing crdb_internal.probe-ranges read statement against node 3
2025/04/29 20:00:47 probe_ranges.go:92: [1] operation status: error on node 3 on range 4: test range error
2025/04/29 20:00:47 operation_impl.go:138: [1] operation failure #1: Found range errors when probing via crdb_internal.probe-ranges read statement against node 3
2025/04/29 20:00:47 run_operation.go:229: recovered from panic: o.Fatal() was called
```

**Future Work**
We would also like to enable the KVProber cluster setting to test this from a different angle; this should be a very easy change.

Fixes: cockroachdb#102034
Release note: None
Epic: None

145578: ttljob: add cluster setting to control concurrency r=rafiss a=rafiss

Each processor of the TTL job creates a number of goroutines that operate concurrently to scan for expired rows and delete them.

Previously, the concurrency was always equal to GOMAXPROCS. This new setting allows it to be overridden.
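
To illustrate the default-vs-override behavior, here is a minimal, self-contained sketch; the function and parameter names are made up for illustration and are not the PR's actual code:

```go
package main

import (
	"fmt"
	"runtime"
)

// ttlConcurrency returns the number of concurrent scan/delete goroutines a TTL
// processor would use: the cluster-setting override when set (> 0), otherwise
// GOMAXPROCS, matching the previous always-GOMAXPROCS behavior.
func ttlConcurrency(override int) int {
	if override > 0 {
		return override
	}
	return runtime.GOMAXPROCS(0)
}

func main() {
	fmt.Println(ttlConcurrency(0)) // default: GOMAXPROCS
	fmt.Println(ttlConcurrency(4)) // overridden via the new cluster setting
}
```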

Once this is merged, we should update support runbooks to discuss this setting.

Informs: https://github.com/cockroachlabs/support/issues/3284
Epic: None
Release note: None

Co-authored-by: Noah Thompson <[email protected]>
Co-authored-by: Rafi Shamim <[email protected]>
emnet-crl pushed a commit that referenced this pull request Aug 15, 2025
149479: roachtest: exit with failure on github post errors r=herkolategan,DarrylWong a=williamchoe3

Fixes cockroachdb#147116
### Changes
#### High-level Changes
Added a new failure path by:
* adding a new counter in the `testRunner` struct that gets incremented when `github.MaybePost()` (called in `testRunner.runWorkers()` and `testRunner.runTests()`) returns an error. When this count > 0, `testRunner.Run()` returns a new error `errGithubPostFailed`, and when `main()` sees that error, it returns a new exit code `12`, which fails the pipeline (unlike exit codes 10 and 11); a sketch of this flow follows the snippet below
* ^ very similar to how provisioning errors are tracked and returned to `main()`
* does not trigger the test short-circuiting mechanism because `testRunner.runWorkers()` doesn't return an error
```
type testRunner struct {
	...
	// numGithubPostErrs counts GitHub post errors across all workers.
	numGithubPostErrs int32
	...
}
...
issue, err := github.MaybePost(t, issueInfo, l, output, params) // TODO add cluster specific args here
if err != nil {
    shout(ctx, l, stdout, "failed to post issue: %s", err)
    atomic.AddInt32(&r.numGithubPostErrs, 1)
}
```
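For context, a self-contained sketch of the flow described above (counter → `errGithubPostFailed` → exit code 12); this is simplified and illustrative, not the PR's actual code:

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"sync/atomic"
)

// errGithubPostFailed is the sentinel error mentioned in the description.
var errGithubPostFailed = errors.New("failed to POST to GitHub")

// run stands in for testRunner.Run(): it surfaces the counter as an error.
func run(numGithubPostErrs *int32) error {
	if atomic.LoadInt32(numGithubPostErrs) > 0 {
		return errGithubPostFailed
	}
	return nil
}

func main() {
	var numGithubPostErrs int32
	atomic.AddInt32(&numGithubPostErrs, 1) // a worker saw MaybePost() fail

	// main() maps the sentinel error to the new exit code 12, which fails the
	// pipeline (unlike exit codes 10 and 11).
	if err := run(&numGithubPostErrs); errors.Is(err, errGithubPostFailed) {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(12)
	}
}
```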
#### Design
In order to do verification via unit tests, I'm used to using something like Python's MagicMock, but that isn't available in Go, so I opted for a dependency injection approach. (This was the best I could come up with; I wanted to avoid "if unit test, do this" logic. If anyone has other approaches / suggestions, let me know!)
I made a new interface `GithubPoster` such that the original `githubIssues` implements it. I then pass this interface in function signatures all the way from `Run()` to `runTests()`. Then in the unit tests, I can pass a different implementation of `GithubPoster` whose `MaybePost()` always fails (a sketch of such a test double follows the interface below).
`github.go`
```
type GithubPoster interface {
	MaybePost(
		t *testImpl, issueInfo *githubIssueInfo, l *logger.Logger, message string,
		params map[string]string) (
		*issues.TestFailureIssue, error)
}
```
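A hedged sketch of what such an always-failing test double could look like; the type name is an assumption for illustration, and the error message mirrors the "mocking" error in the manual-test logs below:

```go
// failingGithubPoster is a hypothetical test double that satisfies
// GithubPoster and always fails, exercising the new failure path.
type failingGithubPoster struct{}

func (failingGithubPoster) MaybePost(
	t *testImpl, issueInfo *githubIssueInfo, l *logger.Logger, message string,
	params map[string]string,
) (*issues.TestFailureIssue, error) {
	return nil, errors.New("mocking")
}
```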
Another issue with this approach is that the original `githubIssues` holds cluster-specific information, but because of dependency injection it is now a shared struct among all the workers, so it no longer makes sense for it to store fields that are worker-dependent.
For the worker-specific fields, I created a new struct `githubIssueInfo` that is created in `runWorkers()`, similar to how `githubIssues` used to be created there.
Note: I don't love the name `githubIssueInfo`, but I wanted to stick with a naming convention similar to `githubIssues`; open to name suggestions.

```
// Original githubIssues
type githubIssues struct {
	disable      bool
	cluster      *clusterImpl
	vmCreateOpts *vm.CreateOpts
	issuePoster func(context.Context, issues.Logger, issues.IssueFormatter, issues.PostRequest,
		*issues.Options) (*issues.TestFailureIssue, error)
	teamLoader func() (team.Map, error)
}
// New githubIssues
type githubIssues struct {
	disable      bool
	issuePoster func(context.Context, issues.Logger, issues.IssueFormatter, issues.PostRequest,
		*issues.Options) (*issues.TestFailureIssue, error)
	teamLoader func() (team.Map, error)
}
```
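
Putting the two together, a sketch of the worker-specific struct implied by the fields dropped from `githubIssues` above; the exact field set in the PR may differ:

```go
// githubIssueInfo carries the worker/cluster-specific state that no longer
// belongs in the shared githubIssues struct (illustrative sketch).
type githubIssueInfo struct {
	cluster      *clusterImpl
	vmCreateOpts *vm.CreateOpts
}
```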

All this was very verbose, and I didn't love that I had to change all the function signatures to do it; open to other ways to do verification.

### Misc
Also, this is my first time writing Go in about three years, so I'm very open to general Go feedback on semantics, best practices, and design patterns.


### Verification
Diff of the binary I used to manually confirm, in case you want to see where I hardcoded the returned errors: cockroachdb@611adcc
#### Manual Test Logs
> ➜  cockroach git:(wchoe/147116-github-err-will-fail-pipeline) ✗ tmp/roachtest run acceptance/build-info --cockroach /Users/wchoe/work/cockroachdb/cockroach/bin_linux/cockroach
> ...
> Running tests which match regex "acceptance/build-info" and are compatible with cloud "gce".
> 
> fallback runner logs in: artifacts/roachtest.crdb.log
> 2025/07/09 00:51:48 run.go:386: test runner logs in: artifacts/_runner-logs/test_runner-1752022308.log
> test runner logs in: artifacts/_runner-logs/test_runner-1752022308.log
> HTTP server listening on port 56238 on localhost: http://localhost:56238/
> 2025/07/09 00:51:48 run.go:148: global random seed: 1949199437086051249
> 2025/07/09 00:51:48 test_runner.go:398: test_run_id: will.choe-1752022308
> test_run_id: will.choe-1752022308
> [w0] 2025/07/09 00:51:48 work_pool.go:198: Acquired quota for 16 CPUs
> [w0] 2025/07/09 00:51:48 cluster.go:3204: Using randomly chosen arch="amd64", acceptance/build-info
> [w0] 2025/07/09 00:51:48 test_runner.go:798: Unable to create (or reuse) cluster for test acceptance/build-info due to: mocking.
> Unable to create (or reuse) cluster for test acceptance/build-info due to: mocking.
> 2025/07/09 00:51:48 test_impl.go:478: test failure #1: full stack retained in failure_1.log: (test_runner.go:873).func4: mocking [owner=test-eng]
> 2025/07/09 00:51:48 test_impl.go:200: Runtime assertions disabled
> [w0] 2025/07/09 00:51:48 test_runner.go:883: failed to post issue: mocking
> failed to post issue: mocking
> [w0] 2025/07/09 00:51:48 test_runner.go:1019: test failed: acceptance/build-info (run 1)
> [w0] 2025/07/09 00:51:48 test_runner.go:732: Releasing quota for 16 CPUs
> [w0] 2025/07/09 00:51:48 test_runner.go:744: No work remaining; runWorker is bailing out...
> No work remaining; runWorker is bailing out...
> [w0] 2025/07/09 00:51:48 test_runner.go:643: Worker exiting; no cluster to destroy.
> 2025/07/09 00:51:48 test_runner.go:460: PASS
> PASS
> 2025/07/09 00:51:48 test_runner.go:465: 1 clusters could not be created and 1 errors occurred while posting to github
> 1 clusters could not be created and 1 errors occurred while posting to github
> 2025/07/09 00:51:48 run.go:200: runTests destroying all clusters
> Error: some clusters could not be created
> failed to POST to GitHub
> ➜  cockroach git:(wchoe/147116-github-err-will-fail-pipeline) ✗ echo $?
> 12




149913: crosscluster/physical: persist standby poller progress r=dt a=msbutler

This patch sets the standby poller job's resolved time to the system time that standby descriptors have been updated to. This allows a reader tenant user to easily check that the poller job is running smoothly via SHOW JOB.

Epic: none

Release note: none

Co-authored-by: William Choe <[email protected]>
Co-authored-by: Michael Butler <[email protected]>
emnet-crl pushed a commit that referenced this pull request Nov 13, 2025
156830:  storeliveness: smear storeliveness heartbeat messages to reduce goroutine spikes at heartbeat interval tick r=miraradeva,iskettaneh a=dodeca12

This PR introduces heartbeat smearing logic that batches and smears Store Liveness heartbeat sends across destination nodes to prevent a thundering herd of goroutine spikes.

### Changes

Core changes are within these files:

```sh
pkg/kv/kvserver/storeliveness
├── support_manager.go  # Rename SendAsync→EnqueueMessage, add smearing settings
└── transport.go        # Add a smearing sender goroutine that handles smearing when enabled
```

### Background

Previously, all stores in a cluster sent heartbeats immediately at each heartbeat interval tick. In large clusters with multi-store nodes, this created synchronized bursts of goroutine spikes that caused issues in other parts of the running CRDB process.

### Commits

**Commit: Introduce heartbeat smearing**

- Adds a smearing sender goroutine to `transport.go` that batches enqueued messages
- Smears send signals across queues using `taskpacer` to spread traffic over time
- Configurable via cluster settings (default: enabled)

**How it works** (a minimal sketch follows this list):

1. Messages are enqueued via `EnqueueMessage()` into per-node queues
2. When `SendAllEnqueuedMessages()` is called, transport's smearing sender goroutine waits briefly to batch messages
3. Transport's smearing sender goroutine uses `taskpacer` to pace signaling to each queue over a smear duration
4. Each `processQueue` goroutine drains its queue and sends when signalled
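
A self-contained sketch of the batch-then-smear pattern described above, with all names and channel mechanics invented for illustration; the actual implementation lives in `transport.go` and paces sends with `taskpacer`:

```go
package smearsketch

import (
	"context"
	"time"
)

// smearSender batches send requests for a short refresh window, then spreads
// the per-queue wake-up signals over the smear duration instead of waking
// every processQueue goroutine at once. Illustrative only.
func smearSender(ctx context.Context, sendAll <-chan struct{},
	queues []chan struct{}, refresh, smear time.Duration) {
	for {
		select {
		case <-ctx.Done():
			return
		case <-sendAll: // SendAllEnqueuedMessages() was called
		}
		// Batching window: messages enqueued close together go out in one round.
		time.Sleep(refresh)
		if len(queues) == 0 {
			continue
		}
		// Smear: signal each per-node queue in turn, spaced across the smear window.
		interval := smear / time.Duration(len(queues))
		for _, q := range queues {
			select {
			case q <- struct{}{}: // wake this queue's processQueue goroutine
			default: // already signalled; don't block
			}
			time.Sleep(interval)
		}
	}
}
```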

### New Cluster Settings

- `kv.store_liveness.heartbeat_smearing.enabled` (default: true) - Enable/disable smearing
- `kv.store_liveness.heartbeat_smearing.refresh` (default: 10ms) - Batching window duration
- `kv.store_liveness.heartbeat_smearing.smear` (default: 1ms) - Time to spread sends across queues

### Backward Compatibility

- The feature can be disabled by setting `kv.store_liveness.heartbeat_smearing.enabled=false`
- When disabled, messages are sent immediately via the existing (non-smearing) path

### Testing

- Added comprehensive unit tests verifying:
  - Messages batch correctly across multiple destinations
  - Smearing spreads signals over the configured time window
  - Smearing mode vs immediate mode behaviour
  - Message ordering and reliability

All existing tests were updated to call `SendAllEnqueuedMessages()` after enqueuing when smearing is enabled.

#### Roachprod testing

##### Prototype #1

- Ran a prototype with a [similar design](cockroachdb#154942) (TL;DR of the prototype: heartbeats were smeared by putting `SupportManager` goroutines to sleep; the current design ensures that `SupportManager` goroutines do not get blocked) on a 150-node roachprod cluster to verify that smearing works.

| Before changes (current behaviour on master) | After changes (prototype) |
|--------|--------|
| <img width="2680" height="570" alt="image" src="https://github.com/user-attachments/assets/32fe6ee0-437f-48eb-b3f1-087a3eafe6ac" /> | <img width="2692" height="634" alt="image" src="https://github.com/user-attachments/assets/66b5b82b-bbc4-4f47-a13e-5f6d42a1c6d4" /> |

##### Current changes

- Ran a roachprod test with the current changes but without the check for empty queues (more info: https://reviewable.io/reviews/cockroachdb/cockroach/156378#-). This check proved vital, as the test results didn't show the expected smearing behaviour.

- A mini roachprod test on [this prototype commit](https://github.com/cockroachdb/cockroach/pull/155317/files#diff-9282b4b1d9a2fe32fae81e5776eb081e58069b4bc7db76718873b75f026e16c1) (where the only real difference from my changes is the inclusion of the length check on the queues that have messages) showed the expected smearing behaviour.

<img width="1797" height="469" alt="image" src="https://github.com/user-attachments/assets/bd7778ef-9f8d-4dbf-8ed2-dac40e7fb03c" />

Fixes: cockroachdb#148210

Release note: None


Co-authored-by: Swapneeth Gorantla <[email protected]>