
Optimizes SpillingGrouper for high cardinality dimension(s) group by with large memory footprint aggregators #20727


Triggered via pull request: April 21, 2026 06:00
Status: Failure
Total duration: 36m 38s
Artifacts: 13
unit tests / validate-dist / validate-dist
26m 37s
Matrix: unit tests / run-separated-tests
Matrix: unit tests / unit tests(main)
unit tests / coverage-jacoco / execute
docker-tests / Run Docker tests
actions-timeline
5s

Annotations

12 errors and 1 warning
unit tests / unit tests(main) (25, M*,P*,O*) / test-jdk25-[M*,P*,O*]
❌ Tests reported 1 failure
MultiStageQueryTest.testClusterByNestedVirtualColumn: embedded-tests/target/test-classes/org/apache/druid/testing/embedded/msq/MultiStageQueryTest.class#L310
Cannot invoke "java.util.Map.get(Object)" because the return value of "org.apache.druid.msq.counters.CounterSnapshotsTree.snapshotForStage(int)" is null
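The NullPointerException above comes from a chained call: `snapshotForStage(int)` returned null and the code immediately invoked `Map.get` on that result. A self-contained mini-model of the failing call shape (hypothetical classes, not Druid code) showing the null guard that avoids it:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical mini-model of the failing call shape, not Druid code:
// a per-stage lookup that may return null, chained straight into Map.get.
public class StageSnapshots {
    final Map<Integer, Map<String, Long>> byStage = new HashMap<>();

    // Analogous to CounterSnapshotsTree.snapshotForStage in the failure message:
    // returns null when no snapshot exists for the stage.
    Map<String, Long> snapshotForStage(int stage) {
        return byStage.get(stage);
    }

    // Calling snapshotForStage(stage).get(key) directly throws a
    // NullPointerException for an absent stage; guarding the intermediate
    // value avoids the NPE seen in the test.
    Long counterOrNull(int stage, String key) {
        Map<String, Long> snapshot = snapshotForStage(stage);
        return snapshot == null ? null : snapshot.get(key);
    }

    public static void main(String[] args) {
        StageSnapshots tree = new StageSnapshots();
        // Stage 0 never reported a snapshot, so the guarded lookup
        // returns null instead of throwing.
        System.out.println(tree.counterOrNull(0, "rows")); // prints "null"
    }
}
```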
GroupByQueryRunnerTest.testMaxSpillFileCountLimitThroughContextOverride[config=v2SmallBuffer, runner=mMappedTestIndex, vectorize=true]: processing/target/test-classes/org/apache/druid/query/groupby/GroupByQueryRunnerTest.class#L2972
Expected: (an instance of org.apache.druid.query.ResourceLimitExceededException and exception with message a string containing "Maximum number of spill files reached for this query. Try raising druid.query.groupBy.maxSpillFileCount.")
but: an instance of org.apache.druid.query.ResourceLimitExceededException <java.lang.RuntimeException: java.lang.RuntimeException: org.apache.druid.query.groupby.epinephelinae.TemporaryStorageFileLimitException: Cannot write to disk, hit spill file count limit of 1.> is a java.lang.RuntimeException
Stacktrace was:
java.lang.RuntimeException: java.lang.RuntimeException: org.apache.druid.query.groupby.epinephelinae.TemporaryStorageFileLimitException: Cannot write to disk, hit spill file count limit of 1.
    at org.apache.druid.query.groupby.epinephelinae.ConcurrentGrouper.parallelSortAndGetGroupersIterator(ConcurrentGrouper.java:455)
    at org.apache.druid.query.groupby.epinephelinae.ConcurrentGrouper.iterator(ConcurrentGrouper.java:350)
    at org.apache.druid.query.groupby.epinephelinae.RowBasedGrouperHelper.makeGrouperIterator(RowBasedGrouperHelper.java:633)
    at org.apache.druid.query.groupby.epinephelinae.RowBasedGrouperHelper.makeGrouperIterator(RowBasedGrouperHelper.java:587)
    at org.apache.druid.query.groupby.epinephelinae.GroupByMergingQueryRunner$1.make(GroupByMergingQueryRunner.java:310)
    at org.apache.druid.query.groupby.epinephelinae.GroupByMergingQueryRunner$1.make(GroupByMergingQueryRunner.java:185)
    at org.apache.druid.java.util.common.guava.BaseSequence.accumulate(BaseSequence.java:39)
    at org.apache.druid.common.guava.CombiningSequence.accumulate(CombiningSequence.java:62)
    at org.apache.druid.java.util.common.guava.WrappingSequence$1.get(WrappingSequence.java:50)
    at org.apache.druid.java.util.common.guava.SequenceWrapper.wrap(SequenceWrapper.java:55)
    at org.apache.druid.java.util.common.guava.WrappingSequence.accumulate(WrappingSequence.java:45)
    at org.apache.druid.java.util.common.guava.MappedSequence.accumulate(MappedSequence.java:43)
    at org.apache.druid.java.util.common.guava.Sequence.toList(Sequence.java:87)
    at org.apache.druid.query.groupby.GroupByQueryRunnerTestHelper.runQuery(GroupByQueryRunnerTestHelper.java:59)
    at org.apache.druid.query.groupby.GroupByQueryRunnerTest.testMaxSpillFileCountLimitThroughContextOverride(GroupByQueryRunnerTest.java:2972)
    at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:104)
    at java.base/java.lang.reflect.Method.invoke(Method.java:565)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:258)
    at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
    at org.junit.runners.Suite.runChild(Suite.java:128)
    at org.junit.runners.Suite.runChild(Suite.java:27)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    at org.junit.run
GroupByQueryRunnerTest.testNotEnoughDiskSpaceThroughContextOverride[config=v2SmallBuffer, runner=mMappedTestIndex, vectorize=true]: processing/target/test-classes/org/apache/druid/query/groupby/GroupByQueryRunnerTest.class#L3000
Expected: (an instance of org.apache.druid.query.ResourceLimitExceededException and exception with message a string containing "Not enough disk space to execute this query")
but: an instance of org.apache.druid.query.ResourceLimitExceededException <java.lang.RuntimeException: java.lang.RuntimeException: org.apache.druid.query.groupby.epinephelinae.TemporaryStorageFullException: Cannot write to disk, hit limit of 1 bytes.> is a java.lang.RuntimeException
Stacktrace was:
java.lang.RuntimeException: java.lang.RuntimeException: org.apache.druid.query.groupby.epinephelinae.TemporaryStorageFullException: Cannot write to disk, hit limit of 1 bytes.
    at org.apache.druid.query.groupby.epinephelinae.ConcurrentGrouper.parallelSortAndGetGroupersIterator(ConcurrentGrouper.java:455)
    at org.apache.druid.query.groupby.epinephelinae.ConcurrentGrouper.iterator(ConcurrentGrouper.java:350)
    at org.apache.druid.query.groupby.epinephelinae.RowBasedGrouperHelper.makeGrouperIterator(RowBasedGrouperHelper.java:633)
    at org.apache.druid.query.groupby.epinephelinae.RowBasedGrouperHelper.makeGrouperIterator(RowBasedGrouperHelper.java:587)
    at org.apache.druid.query.groupby.epinephelinae.GroupByMergingQueryRunner$1.make(GroupByMergingQueryRunner.java:310)
    at org.apache.druid.query.groupby.epinephelinae.GroupByMergingQueryRunner$1.make(GroupByMergingQueryRunner.java:185)
    at org.apache.druid.java.util.common.guava.BaseSequence.accumulate(BaseSequence.java:39)
    at org.apache.druid.common.guava.CombiningSequence.accumulate(CombiningSequence.java:62)
    at org.apache.druid.java.util.common.guava.WrappingSequence$1.get(WrappingSequence.java:50)
    at org.apache.druid.java.util.common.guava.SequenceWrapper.wrap(SequenceWrapper.java:55)
    at org.apache.druid.java.util.common.guava.WrappingSequence.accumulate(WrappingSequence.java:45)
    at org.apache.druid.java.util.common.guava.MappedSequence.accumulate(MappedSequence.java:43)
    at org.apache.druid.java.util.common.guava.Sequence.toList(Sequence.java:87)
    at org.apache.druid.query.groupby.GroupByQueryRunnerTestHelper.runQuery(GroupByQueryRunnerTestHelper.java:59)
    at org.apache.druid.query.groupby.GroupByQueryRunnerTest.testNotEnoughDiskSpaceThroughContextOverride(GroupByQueryRunnerTest.java:3000)
    at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:104)
    at java.base/java.lang.reflect.Method.invoke(Method.java:565)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:258)
    at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
    at org.junit.runners.Suite.runChild(Suite.java:128)
    at org.junit.runners.Suite.runChild(Suite.java:27)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    at org.junit.runners.ParentRunner.access$10
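Both failure types above share one pattern: the test's ExpectedException rule matches on the top-level exception type, but the TemporaryStorage* exception surfaces wrapped in two layers of RuntimeException from ConcurrentGrouper, so the match fails. A self-contained sketch of walking a cause chain to find the underlying exception (plain JDK only; `findCause` and the exception types here are illustrative stand-ins, not Druid or Hamcrest code):

```java
// Sketch: walk a Throwable's cause chain looking for a target type.
// This mirrors what a matcher would need to do when the exception under
// test arrives wrapped in one or more RuntimeExceptions.
public class CauseChain {
    // Returns the first throwable in the chain (including t itself)
    // assignable to target, or null if none matches.
    static <T extends Throwable> T findCause(Throwable t, Class<T> target) {
        for (Throwable cur = t; cur != null; cur = cur.getCause()) {
            if (target.isInstance(cur)) {
                return target.cast(cur);
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // Simulate the observed wrapping:
        // RuntimeException -> RuntimeException -> (storage-limit exception stand-in)
        Throwable inner =
            new IllegalStateException("Cannot write to disk, hit spill file count limit of 1.");
        Throwable wrapped = new RuntimeException(new RuntimeException(inner));

        IllegalStateException found = findCause(wrapped, IllegalStateException.class);
        System.out.println(
            found != null && found.getMessage().contains("spill file count limit")); // prints "true"
    }
}
```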
GroupByQueryRunnerTest.testMaxSpillFileCountLimitThroughContextOverride[config=v2SmallBuffer, runner=mMappedTestIndex, vectorize=false]: processing/target/test-classes/org/apache/druid/query/groupby/GroupByQueryRunnerTest.class#L2972
Same failure and stack trace as the vectorize=true run of this test above: RuntimeException wrapping TemporaryStorageFileLimitException ("Cannot write to disk, hit spill file count limit of 1.") instead of the expected ResourceLimitExceededException.
GroupByQueryRunnerTest.testNotEnoughDiskSpaceThroughContextOverride[config=v2SmallBuffer, runner=mMappedTestIndex, vectorize=false]: processing/target/test-classes/org/apache/druid/query/groupby/GroupByQueryRunnerTest.class#L3000
Same failure and stack trace as the vectorize=true run of this test above: RuntimeException wrapping TemporaryStorageFullException ("Cannot write to disk, hit limit of 1 bytes.") instead of the expected ResourceLimitExceededException.
GroupByQueryRunnerTest.testMaxSpillFileCountLimitThroughContextOverride[config=v2SmallBuffer, runner=noRollupRtIndex, vectorize=false]: processing/target/test-classes/org/apache/druid/query/groupby/GroupByQueryRunnerTest.class#L2972
Same failure and stack trace as the mMappedTestIndex runs of this test above: RuntimeException wrapping TemporaryStorageFileLimitException ("Cannot write to disk, hit spill file count limit of 1.") instead of the expected ResourceLimitExceededException.
GroupByQueryRunnerTest.testNotEnoughDiskSpaceThroughContextOverride[config=v2SmallBuffer, runner=noRollupRtIndex, vectorize=false]: processing/target/test-classes/org/apache/druid/query/groupby/GroupByQueryRunnerTest.class#L3000
Same failure and stack trace as the mMappedTestIndex runs of this test above: RuntimeException wrapping TemporaryStorageFullException ("Cannot write to disk, hit limit of 1 bytes.") instead of the expected ResourceLimitExceededException.
GroupByQueryRunnerTest.testMaxSpillFileCountLimitThroughContextOverride[config=v2SmallBuffer, runner=rtIndexPartialSchemaStringDiscovery, vectorize=false]: processing/target/test-classes/org/apache/druid/query/groupby/GroupByQueryRunnerTest.class#L2972
Same failure and stack trace as the mMappedTestIndex runs of this test above: RuntimeException wrapping TemporaryStorageFileLimitException ("Cannot write to disk, hit spill file count limit of 1.") instead of the expected ResourceLimitExceededException.
GroupByQueryRunnerTest.testNotEnoughDiskSpaceThroughContextOverride[config=v2SmallBuffer, runner=rtIndexPartialSchemaStringDiscovery, vectorize=false]: processing/target/test-classes/org/apache/druid/query/groupby/GroupByQueryRunnerTest.class#L3000
Expected: (an instance of org.apache.druid.query.ResourceLimitExceededException and exception with message a string containing "Not enough disk space to execute this query")
but: an instance of org.apache.druid.query.ResourceLimitExceededException <java.lang.RuntimeException: java.lang.RuntimeException: org.apache.druid.query.groupby.epinephelinae.TemporaryStorageFullException: Cannot write to disk, hit limit of 1 bytes.> is a java.lang.RuntimeException
Stacktrace was:
java.lang.RuntimeException: java.lang.RuntimeException: org.apache.druid.query.groupby.epinephelinae.TemporaryStorageFullException: Cannot write to disk, hit limit of 1 bytes.
	at org.apache.druid.query.groupby.epinephelinae.ConcurrentGrouper.parallelSortAndGetGroupersIterator(ConcurrentGrouper.java:455)
	at org.apache.druid.query.groupby.epinephelinae.ConcurrentGrouper.iterator(ConcurrentGrouper.java:350)
	at org.apache.druid.query.groupby.epinephelinae.RowBasedGrouperHelper.makeGrouperIterator(RowBasedGrouperHelper.java:633)
	at org.apache.druid.query.groupby.epinephelinae.RowBasedGrouperHelper.makeGrouperIterator(RowBasedGrouperHelper.java:587)
	at org.apache.druid.query.groupby.epinephelinae.GroupByMergingQueryRunner$1.make(GroupByMergingQueryRunner.java:310)
	at org.apache.druid.query.groupby.epinephelinae.GroupByMergingQueryRunner$1.make(GroupByMergingQueryRunner.java:185)
	at org.apache.druid.java.util.common.guava.BaseSequence.accumulate(BaseSequence.java:39)
	at org.apache.druid.common.guava.CombiningSequence.accumulate(CombiningSequence.java:62)
	at org.apache.druid.java.util.common.guava.WrappingSequence$1.get(WrappingSequence.java:50)
	at org.apache.druid.java.util.common.guava.SequenceWrapper.wrap(SequenceWrapper.java:55)
	at org.apache.druid.java.util.common.guava.WrappingSequence.accumulate(WrappingSequence.java:45)
	at org.apache.druid.java.util.common.guava.MappedSequence.accumulate(MappedSequence.java:43)
	at org.apache.druid.java.util.common.guava.Sequence.toList(Sequence.java:87)
	at org.apache.druid.query.groupby.GroupByQueryRunnerTestHelper.runQuery(GroupByQueryRunnerTestHelper.java:59)
	at org.apache.druid.query.groupby.GroupByQueryRunnerTest.testNotEnoughDiskSpaceThroughContextOverride(GroupByQueryRunnerTest.java:3000)
	at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:104)
	at java.base/java.lang.reflect.Method.invoke(Method.java:565)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:258)
	at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
	at org.junit.runners.Suite.runChild(Suite.java:128)
	at org.junit.runners.Suite.runChild(Suite.java:27)
	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
	at org.junit.runners.ParentRunner.access$10
GroupByQueryRunnerTest.testMaxSpillFileCountLimitThroughContextOverride[config=v2SmallBuffer, runner=rtIndex, vectorize=false]: processing/target/test-classes/org/apache/druid/query/groupby/GroupByQueryRunnerTest.class#L2972
Expected: (an instance of org.apache.druid.query.ResourceLimitExceededException and exception with message a string containing "Maximum number of spill files reached for this query. Try raising druid.query.groupBy.maxSpillFileCount.")
but: an instance of org.apache.druid.query.ResourceLimitExceededException <java.lang.RuntimeException: java.lang.RuntimeException: org.apache.druid.query.groupby.epinephelinae.TemporaryStorageFileLimitException: Cannot write to disk, hit spill file count limit of 1.> is a java.lang.RuntimeException
Stacktrace was:
java.lang.RuntimeException: java.lang.RuntimeException: org.apache.druid.query.groupby.epinephelinae.TemporaryStorageFileLimitException: Cannot write to disk, hit spill file count limit of 1.
	at org.apache.druid.query.groupby.epinephelinae.ConcurrentGrouper.parallelSortAndGetGroupersIterator(ConcurrentGrouper.java:455)
	at org.apache.druid.query.groupby.epinephelinae.ConcurrentGrouper.iterator(ConcurrentGrouper.java:350)
	at org.apache.druid.query.groupby.epinephelinae.RowBasedGrouperHelper.makeGrouperIterator(RowBasedGrouperHelper.java:633)
	at org.apache.druid.query.groupby.epinephelinae.RowBasedGrouperHelper.makeGrouperIterator(RowBasedGrouperHelper.java:587)
	at org.apache.druid.query.groupby.epinephelinae.GroupByMergingQueryRunner$1.make(GroupByMergingQueryRunner.java:310)
	at org.apache.druid.query.groupby.epinephelinae.GroupByMergingQueryRunner$1.make(GroupByMergingQueryRunner.java:185)
	at org.apache.druid.java.util.common.guava.BaseSequence.accumulate(BaseSequence.java:39)
	at org.apache.druid.common.guava.CombiningSequence.accumulate(CombiningSequence.java:62)
	at org.apache.druid.java.util.common.guava.WrappingSequence$1.get(WrappingSequence.java:50)
	at org.apache.druid.java.util.common.guava.SequenceWrapper.wrap(SequenceWrapper.java:55)
	at org.apache.druid.java.util.common.guava.WrappingSequence.accumulate(WrappingSequence.java:45)
	at org.apache.druid.java.util.common.guava.MappedSequence.accumulate(MappedSequence.java:43)
	at org.apache.druid.java.util.common.guava.Sequence.toList(Sequence.java:87)
	at org.apache.druid.query.groupby.GroupByQueryRunnerTestHelper.runQuery(GroupByQueryRunnerTestHelper.java:59)
	at org.apache.druid.query.groupby.GroupByQueryRunnerTest.testMaxSpillFileCountLimitThroughContextOverride(GroupByQueryRunnerTest.java:2972)
	at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:104)
	at java.base/java.lang.reflect.Method.invoke(Method.java:565)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:258)
	at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
	at org.junit.runners.Suite.runChild(Suite.java:128)
	at org.junit.runners.Suite.runChild(Suite.java:27)
	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
	at org.junit.run
GroupByQueryRunnerTest.testNotEnoughDiskSpaceThroughContextOverride[config=v2SmallBuffer, runner=rtIndex, vectorize=false]: processing/target/test-classes/org/apache/druid/query/groupby/GroupByQueryRunnerTest.class#L3000
Expected: (an instance of org.apache.druid.query.ResourceLimitExceededException and exception with message a string containing "Not enough disk space to execute this query")
but: an instance of org.apache.druid.query.ResourceLimitExceededException <java.lang.RuntimeException: java.lang.RuntimeException: org.apache.druid.query.groupby.epinephelinae.TemporaryStorageFullException: Cannot write to disk, hit limit of 1 bytes.> is a java.lang.RuntimeException
Stacktrace was:
java.lang.RuntimeException: java.lang.RuntimeException: org.apache.druid.query.groupby.epinephelinae.TemporaryStorageFullException: Cannot write to disk, hit limit of 1 bytes.
	at org.apache.druid.query.groupby.epinephelinae.ConcurrentGrouper.parallelSortAndGetGroupersIterator(ConcurrentGrouper.java:455)
	at org.apache.druid.query.groupby.epinephelinae.ConcurrentGrouper.iterator(ConcurrentGrouper.java:350)
	at org.apache.druid.query.groupby.epinephelinae.RowBasedGrouperHelper.makeGrouperIterator(RowBasedGrouperHelper.java:633)
	at org.apache.druid.query.groupby.epinephelinae.RowBasedGrouperHelper.makeGrouperIterator(RowBasedGrouperHelper.java:587)
	at org.apache.druid.query.groupby.epinephelinae.GroupByMergingQueryRunner$1.make(GroupByMergingQueryRunner.java:310)
	at org.apache.druid.query.groupby.epinephelinae.GroupByMergingQueryRunner$1.make(GroupByMergingQueryRunner.java:185)
	at org.apache.druid.java.util.common.guava.BaseSequence.accumulate(BaseSequence.java:39)
	at org.apache.druid.common.guava.CombiningSequence.accumulate(CombiningSequence.java:62)
	at org.apache.druid.java.util.common.guava.WrappingSequence$1.get(WrappingSequence.java:50)
	at org.apache.druid.java.util.common.guava.SequenceWrapper.wrap(SequenceWrapper.java:55)
	at org.apache.druid.java.util.common.guava.WrappingSequence.accumulate(WrappingSequence.java:45)
	at org.apache.druid.java.util.common.guava.MappedSequence.accumulate(MappedSequence.java:43)
	at org.apache.druid.java.util.common.guava.Sequence.toList(Sequence.java:87)
	at org.apache.druid.query.groupby.GroupByQueryRunnerTestHelper.runQuery(GroupByQueryRunnerTestHelper.java:59)
	at org.apache.druid.query.groupby.GroupByQueryRunnerTest.testNotEnoughDiskSpaceThroughContextOverride(GroupByQueryRunnerTest.java:3000)
	at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:104)
	at java.base/java.lang.reflect.Method.invoke(Method.java:565)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:258)
	at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
	at org.junit.runners.Suite.runChild(Suite.java:128)
	at org.junit.runners.Suite.runChild(Suite.java:27)
	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
	at org.junit.runners.ParentRunner.access$10
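All three failures above follow the same pattern: the test's matcher checks the type of the outermost thrown exception, but the storage exception now arrives wrapped in plain `java.lang.RuntimeException` layers, so the `instanceof ResourceLimitExceededException` check fails even though the expected condition did occur deeper in the cause chain. A minimal, self-contained sketch of that mismatch (the nested exception class here is a stand-in, not Druid's actual `TemporaryStorageFullException`; the `rootCause` helper reimplements what Guava's `Throwables.getRootCause` does):

```java
public class CauseChainDemo {
    // Stand-in for the storage exception; hypothetical, not the Druid class.
    static class TemporaryStorageFullException extends RuntimeException {
        TemporaryStorageFullException(String msg) {
            super(msg);
        }
    }

    // Walk getCause() until the innermost throwable is reached,
    // mirroring Guava's Throwables.getRootCause.
    static Throwable rootCause(Throwable t) {
        while (t.getCause() != null) {
            t = t.getCause();
        }
        return t;
    }

    public static void main(String[] args) {
        // Two RuntimeException wrappers around the storage exception,
        // matching the "RuntimeException: RuntimeException: ..." shape
        // in the stack traces above.
        Throwable thrown = new RuntimeException(new RuntimeException(
            new TemporaryStorageFullException("Cannot write to disk, hit limit of 1 bytes.")));

        // A type check on the outermost throwable fails: it is a plain RuntimeException.
        System.out.println(thrown instanceof TemporaryStorageFullException); // false

        // Unwrapping the cause chain recovers the storage exception and its message.
        Throwable root = rootCause(thrown);
        System.out.println(root instanceof TemporaryStorageFullException);   // true
        System.out.println(root.getMessage());
    }
}
```

Depending on the intended contract, the fix is either to translate the storage exceptions into `ResourceLimitExceededException` before they leave the grouper, or to have the tests assert on the root cause instead of the outer wrapper; which side to change is a judgment call for the PR.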
actions-timeline
Node.js 20 actions are deprecated. The following actions are running on Node.js 20 and may not work as expected: Kesin11/actions-timeline@54d513e0b5ff1158f1cf8321108d666a5a6c1fca. Actions will be forced to run with Node.js 24 by default starting June 2nd, 2026. Node.js 20 will be removed from the runner on September 16th, 2026. Please check if updated versions of these actions are available that support Node.js 24. To opt into Node.js 24 now, set the FORCE_JAVASCRIPT_ACTIONS_TO_NODE24=true environment variable on the runner or in your workflow file. Once Node.js 24 becomes the default, you can temporarily opt out by setting ACTIONS_ALLOW_USE_UNSECURE_NODE_VERSION=true. For more information see: https://github.blog/changelog/2025-09-19-deprecation-of-node-20-on-github-actions-runners/
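The deprecation notice above names the opt-in variable directly. A minimal workflow fragment showing where it would go (the job and runner labels are illustrative; only the env var name and the pinned action ref come from the notice and this workflow):

```yaml
# Opt JavaScript actions into Node.js 24 ahead of the June 2nd, 2026
# default switch, per the runner deprecation notice.
env:
  FORCE_JAVASCRIPT_ACTIONS_TO_NODE24: "true"

jobs:
  timeline:                # illustrative job name
    runs-on: ubuntu-latest # illustrative runner label
    steps:
      - uses: Kesin11/actions-timeline@54d513e0b5ff1158f1cf8321108d666a5a6c1fca
```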

Artifacts

Produced during runtime
Name                              Size     Digest
unit-test-reports-jdk25-000320bd  5.25 MB  sha256:7b3c6893c3f7de01edcc16426d4b793b627c314603f9bcfa4033e9536c2b2b88
unit-test-reports-jdk25-55e4156d  5.85 MB  sha256:092f372de3ec54b9aedbba123e6125da214a82b7a240e7254bd72aca7714ff2f
unit-test-reports-jdk25-76111d85  5.37 MB  sha256:3b0c7ae56525b35d6edc873fe5ecf35915a55a343e1cb3602d50dbe0c0d2d5ee
unit-test-reports-jdk25-8beab97d  786 KB   sha256:0f6849e3b3d0cbed28be222db91a06e2a4d1ec37c7c7c1df2b2acdc9140187ad
unit-test-reports-jdk25-9ac534dd  674 KB   sha256:5ef467d25d7e5fab2077add516084e26de4ec98b711b5d9f83737dee7dab5a41
unit-test-reports-jdk25-c8e226de  6.34 MB  sha256:11d63544b7a2b1ea6416bca19687cfa570a8da4691cac6bd417feeedca75c734
unit-test-reports-jdk25-c9979d9f  7.99 MB  sha256:b0a1ff62857b00e29ab918615daeb054597d484e68d85a5fe1e167b6e81b0762
unit-test-reports-jdk25-caf97795  652 KB   sha256:6c954217186d59e82aab420ce92d7210f9f2062db220bf3284c1b4e6846e5548
unit-test-reports-jdk25-f2004319  6.62 MB  sha256:8c290a1127615d3df92f8f3b2e7836556f7dd5bd3c1761496025a4f4e527428d
unit-test-reports-jdk25-f306ca6f  6.3 MB   sha256:1642efd5304ea0c31efa57ca3ec4309066428a26682e4ced39effec7fff93809
unit-test-reports-jdk25-f7b3ee25  5.57 MB  sha256:74efcde34ffefad539f3bc38356e089d274c48b7e5fe9aedb406d3d00a011f30
unit-test-reports-jdk25-f9a2f848  815 KB   sha256:f97e733252f3f5ce08f2c5fbaf7d74cec8d92c343a56ca2f2dcf42d36ff17944
unit-test-reports-jdk25-fac1432c  3.61 MB  sha256:d38940ae4adbdb01b442d6c4cc76317db7598a9a0b1901b6756ac0be0fd781de