chore: Use synctest within lib/executor tests (#5703)
Open
Contributor (Author):
This PR was heavily made with Cursor (and, to be honest, that probably took twice as long as if I had just done it by hand...). The changes are actually quite minimal, but any diffing of them is quite terrible. I highly recommend checking this out locally and using
Contributor:
@mstoykov Can you sync with master, please? Starting to review, but I would love to see it clean before giving it a 👍🏻. Nice work, btw! 🚀 🙇🏻
Among other things, this hopefully makes these tests completely non-flaky. It also makes them very fast: the only tests that still take more than ~0s are the ones that actually do calculations.
This does require a change to the arrival-rate cal method, which calculates when the next iteration should happen. The reason is that cal previously did not stop on being cancelled and just blocked on a channel. The change unfortunately does have a performance impact, as shown by the benchmarks:
```
goos: linux
goarch: amd64
pkg: go.k6.io/k6/lib/executor
cpu: AMD Ryzen 7 PRO 6850U with Radeon Graphics
│ old_bench.result │ new_bench.result │
│ iterations/s │ iterations/s vs base │
RampingArrivalRateRun/VUs10-16 1.568M ± 6% 1.258M ± 12% -19.77% (p=0.000 n=10)
RampingArrivalRateRun/VUs100-16 1.937M ± 11% 1.565M ± 7% -19.20% (p=0.000 n=10)
RampingArrivalRateRun/VUs1000-16 1.789M ± 12% 1.542M ± 12% -13.82% (p=0.000 n=10)
RampingArrivalRateRun/VUs10000-16 1.317M ± 11% 1.187M ± 4% -9.88% (p=0.000 n=10)
geomean 1.636M 1.378M -15.76%
│ old_bench.result │ new_bench.result │
│ sec/op │ sec/op vs base │
Cal/1s-16 1.463µ ± 13% 17.690µ ± 8% +1109.16% (p=0.000 n=10)
Cal/1m0s-16 60.79µ ± 39% 1006.74µ ± 4% +1556.06% (p=0.000 n=10)
CalRat/1s-16 7.058m ± 22% 6.713m ± 9% ~ (p=0.739 n=10)
CalRat/1m0s-16 4.421 ± 6% 4.312 ± 9% ~ (p=0.796 n=10)
RampingVUsGetRawExecutionSteps/seq:;segment:/normal-16 105.2µ ± 8% 115.4µ ± 10% +9.67% (p=0.043 n=10)
RampingVUsGetRawExecutionSteps/seq:;segment:/rollercoaster-16 986.0µ ± 11% 1009.5µ ± 42% ~ (p=0.529 n=10)
RampingVUsGetRawExecutionSteps/seq:;segment:0:1/normal-16 103.9µ ± 16% 115.4µ ± 10% +11.04% (p=0.023 n=10)
RampingVUsGetRawExecutionSteps/seq:;segment:0:1/rollercoaster-16 992.5µ ± 12% 1009.5µ ± 16% ~ (p=0.631 n=10)
RampingVUsGetRawExecutionSteps/seq:0,0.3,0.5,0.6,0.7,0.8,0.9,1;segment:0:0.3/normal-16 34.68µ ± 6% 34.98µ ± 26% ~ (p=0.912 n=10)
RampingVUsGetRawExecutionSteps/seq:0,0.3,0.5,0.6,0.7,0.8,0.9,1;segment:0:0.3/rollercoaster-16 321.0µ ± 13% 327.7µ ± 17% ~ (p=0.247 n=10)
RampingVUsGetRawExecutionSteps/seq:0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1;segment:0:0.1/normal-16 10.53µ ± 7% 11.58µ ± 12% ~ (p=0.143 n=10)
RampingVUsGetRawExecutionSteps/seq:0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1;segment:0:0.1/rollercoaster-16 118.7µ ± 19% 124.5µ ± 25% ~ (p=0.631 n=10)
RampingVUsGetRawExecutionSteps/seq:;segment:2/5:4/5/normal-16 44.93µ ± 8% 43.29µ ± 10% ~ (p=0.436 n=10)
RampingVUsGetRawExecutionSteps/seq:;segment:2/5:4/5/rollercoaster-16 438.7µ ± 11% 471.9µ ± 10% ~ (p=0.190 n=10)
RampingVUsGetRawExecutionSteps/seq:;segment:2235/5213:4/5/normal-16 47.26µ ± 6% 48.65µ ± 9% ~ (p=0.684 n=10)
RampingVUsGetRawExecutionSteps/seq:;segment:2235/5213:4/5/rollercoaster-16 425.2µ ± 7% 446.1µ ± 7% +4.91% (p=0.011 n=10)
VUHandleIterations-16 1.001 ± 0% 1.000 ± 0% ~ (p=0.631 n=10)
geomean 398.9µ 559.3µ +40.22%
│ old_bench.result │ new_bench.result │
│ iterations/ns │ iterations/ns vs base │
VUHandleIterations-16 96.91m ± 22% 102.55m ± 24% ~ (p=0.075 n=10)
```
I would argue the RampingArrivalRateRun benchmarks are more relevant, but those still run an empty iteration as fast as possible, so only the ceiling has come down. To be honest, I do not think running 1 million iterations per second on my machine is feasible in any way.
On the other hand, this is one more case where we do not leave a goroutine hanging, so that is nice.
joanlopez reviewed on Mar 10, 2026:
```diff
 shownWarning := false
 metricTags := varr.getMetricTags(nil)
-go varr.config.cal(varr.et, ch)
+go varr.config.cal(maxDurationCtx, varr.et, ch)
```
Contributor
What's the reason to use maxDurationCtx instead of regDurationCtx, which is up to when channel results are really being considered?
Why?
This practically makes these tests completely non-flaky for me locally, and I expect in CI as well.
Checklist
- make check) and all pass.

Checklist: Documentation (only for k6 maintainers and if relevant)
Please do not merge this PR until the following items are filled out.

Related PR(s)/Issue(s)