chore: Use synctest within lib/executor tests #5703

Open
mstoykov wants to merge 1 commit into master from useSyncTest
Conversation

@mstoykov
Contributor

What?

Among other things, this will hopefully make these tests completely non-flaky. It also makes them very fast, as the only tests that still take more than ~0s are the ones that actually do calculations.

This does require a change to the arrival-rate `cal` method, which calculates when the next iteration should happen.

This is because `cal` previously did not stop when cancelled and just blocked on a channel send. Unfortunately, the change does have a performance impact, as shown by the benchmarks:

```
goos: linux
goarch: amd64
pkg: go.k6.io/k6/lib/executor
cpu: AMD Ryzen 7 PRO 6850U with Radeon Graphics
                                  │ old_bench.result │           new_bench.result           │
                                  │   iterations/s   │ iterations/s  vs base                │
RampingArrivalRateRun/VUs10-16          1.568M ±  6%   1.258M ± 12%  -19.77% (p=0.000 n=10)
RampingArrivalRateRun/VUs100-16         1.937M ± 11%   1.565M ±  7%  -19.20% (p=0.000 n=10)
RampingArrivalRateRun/VUs1000-16        1.789M ± 12%   1.542M ± 12%  -13.82% (p=0.000 n=10)
RampingArrivalRateRun/VUs10000-16       1.317M ± 11%   1.187M ±  4%   -9.88% (p=0.000 n=10)
geomean                                 1.636M         1.378M        -15.76%

                                                                                                          │ old_bench.result │             new_bench.result             │
                                                                                                          │      sec/op      │     sec/op      vs base                  │
Cal/1s-16                                                                                                       1.463µ ± 13%    17.690µ ±  8%  +1109.16% (p=0.000 n=10)
Cal/1m0s-16                                                                                                     60.79µ ± 39%   1006.74µ ±  4%  +1556.06% (p=0.000 n=10)
CalRat/1s-16                                                                                                    7.058m ± 22%     6.713m ±  9%          ~ (p=0.739 n=10)
CalRat/1m0s-16                                                                                                   4.421 ±  6%      4.312 ±  9%          ~ (p=0.796 n=10)
RampingVUsGetRawExecutionSteps/seq:;segment:/normal-16                                                          105.2µ ±  8%     115.4µ ± 10%     +9.67% (p=0.043 n=10)
RampingVUsGetRawExecutionSteps/seq:;segment:/rollercoaster-16                                                   986.0µ ± 11%    1009.5µ ± 42%          ~ (p=0.529 n=10)
RampingVUsGetRawExecutionSteps/seq:;segment:0:1/normal-16                                                       103.9µ ± 16%     115.4µ ± 10%    +11.04% (p=0.023 n=10)
RampingVUsGetRawExecutionSteps/seq:;segment:0:1/rollercoaster-16                                                992.5µ ± 12%    1009.5µ ± 16%          ~ (p=0.631 n=10)
RampingVUsGetRawExecutionSteps/seq:0,0.3,0.5,0.6,0.7,0.8,0.9,1;segment:0:0.3/normal-16                          34.68µ ±  6%     34.98µ ± 26%          ~ (p=0.912 n=10)
RampingVUsGetRawExecutionSteps/seq:0,0.3,0.5,0.6,0.7,0.8,0.9,1;segment:0:0.3/rollercoaster-16                   321.0µ ± 13%     327.7µ ± 17%          ~ (p=0.247 n=10)
RampingVUsGetRawExecutionSteps/seq:0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1;segment:0:0.1/normal-16              10.53µ ±  7%     11.58µ ± 12%          ~ (p=0.143 n=10)
RampingVUsGetRawExecutionSteps/seq:0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1;segment:0:0.1/rollercoaster-16       118.7µ ± 19%     124.5µ ± 25%          ~ (p=0.631 n=10)
RampingVUsGetRawExecutionSteps/seq:;segment:2/5:4/5/normal-16                                                   44.93µ ±  8%     43.29µ ± 10%          ~ (p=0.436 n=10)
RampingVUsGetRawExecutionSteps/seq:;segment:2/5:4/5/rollercoaster-16                                            438.7µ ± 11%     471.9µ ± 10%          ~ (p=0.190 n=10)
RampingVUsGetRawExecutionSteps/seq:;segment:2235/5213:4/5/normal-16                                             47.26µ ±  6%     48.65µ ±  9%          ~ (p=0.684 n=10)
RampingVUsGetRawExecutionSteps/seq:;segment:2235/5213:4/5/rollercoaster-16                                      425.2µ ±  7%     446.1µ ±  7%     +4.91% (p=0.011 n=10)
VUHandleIterations-16                                                                                            1.001 ±  0%      1.000 ±  0%          ~ (p=0.631 n=10)
geomean                                                                                                         398.9µ           559.3µ          +40.22%

                      │ old_bench.result │        new_bench.result         │
                      │  iterations/ns   │ iterations/ns  vs base          │
VUHandleIterations-16       96.91m ± 22%   102.55m ± 24%  ~ (p=0.075 n=10)
```

I would argue the RampingArrivalRateRun results are the more relevant ones, but even those run an empty iteration as fast as possible, so only the ceiling has come down. To be honest, I do not think running 1 million iterations per second on my machine is feasible in any realistic scenario.

On the other hand, this is one more case where we no longer leave a goroutine hanging, which is nice.

Why?

This makes these tests practically non-flaky for me locally, and I expect the same in CI.

Checklist

  • I have performed a self-review of my code.
  • I have commented on my code, particularly in hard-to-understand areas.
  • I have added tests for my changes.
  • I have run linter and tests locally (make check) and all pass.

Checklist: Documentation (only for k6 maintainers and if relevant)

Please do not merge this PR until the following items are filled out.

  • I have added the correct milestone and labels to the PR.
  • I have updated the release notes: link
  • I have updated or added an issue to the k6-documentation: grafana/k6-docs#NUMBER if applicable
  • I have updated or added an issue to the TypeScript definitions: grafana/k6-DefinitelyTyped#NUMBER if applicable

Related PR(s)/Issue(s)

@mstoykov mstoykov added this to the v1.7.0 milestone Feb 27, 2026
@mstoykov mstoykov requested a review from a team as a code owner February 27, 2026 10:52
@mstoykov mstoykov requested review from joanlopez and szkiba and removed request for a team February 27, 2026 10:52
@mstoykov
Contributor Author

This PR was made heavily with Cursor (and, to be honest, that probably took twice as long as doing it by hand ...)

The changes are actually quite minimal, but the diff is quite terrible to read.

I highly recommend checking this out locally and using `git show --word-diff --ignore-all-space`, which shows that the diff mostly consists of wrapping code in `synctest.Test`.

Base automatically changed from bumpGo to master March 9, 2026 11:10
@joanlopez
Contributor

joanlopez commented Mar 9, 2026

@mstoykov Can you sync with master, please? Starting to review, but would love to see it clean before giving it a 👍🏻

Nice work, btw! 🚀 🙇🏻

```diff
 shownWarning := false
 metricTags := varr.getMetricTags(nil)
-go varr.config.cal(varr.et, ch)
+go varr.config.cal(maxDurationCtx, varr.et, ch)
```

What's the reason to use maxDurationCtx instead of regDurationCtx, which bounds the period during which channel results are actually being considered?

