
Upstream: 40f968d41a150f9d37019a8db3342f9d59df24a3#744

Merged
kgyrtkirk merged 15 commits into master from up-40f968d41a150f9d37019a8db3342f9d59df24a3
Mar 25, 2025

Conversation

@kgyrtkirk
Owner

No description provided.

cryptoe and others added 15 commits March 18, 2025 19:39
…#17806)

* Revert "Run JDK 21 workflows with latest JDK. (apache#17694)"

This reverts commit 31ede5c

* Review comments.

* Review comments.
…rlier task is still publishing (apache#17509)"

This reverts commit aca56d6.
)

When a nil clusterBy is used, we have no way of achieving a particular
target size, so we need to fall back to a "mix" spec (unsorted single
partition).

This comes up for queries like "SELECT COUNT(*) FROM FOO LIMIT 1" when
results use a target size, such as when we are inserting into another
table or when we are writing to durable storage.
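The fallback described above can be sketched as follows. This is an illustrative stand-in, not Druid's actual MSQ classes: with a nil or empty clusterBy there is no key to partition on, so no target size can be honored, and the planner falls back to an unsorted single-partition "mix" spec.

```java
import java.util.List;

// Simplified sketch of the clusterBy fallback; Kind, Spec and makeSpec are
// hypothetical names, not Druid's real API.
class ShuffleSpecSketch {
  enum Kind { MIX, TARGET_SIZE }

  record Spec(Kind kind, List<String> clusterBy, int targetRowsPerPartition) {}

  static Spec makeSpec(List<String> clusterBy, int targetRowsPerPartition) {
    if (clusterBy == null || clusterBy.isEmpty()) {
      // nil clusterBy: no way to hit a target size, use unsorted single partition
      return new Spec(Kind.MIX, List.of(), 0);
    }
    return new Spec(Kind.TARGET_SIZE, clusterBy, targetRowsPerPartition);
  }
}
```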
Add minor checks in the Jetty utils class
…7818)

MSQ tests had their own way of creating segments, which meant custom datasets didn't work with them.
This patch alters a few things to make it possible to access CompleteSegment for the active segments, which fixes the issue and also enables removal of the extra loading code.
This PR adds the sql-native unnest tests to quidem. This set of tests has 6392 queries in total, with 5247 positive tests and 1145 negative tests.
* show loader on aux queries

* show supervisors if not on page 0

* refactor

* fix bug fetching data when columns are added or removed

* update test
…e#17782)

Changes
---------
- Remove runtime property object `CompactionSupervisorConfig`
- Add fields `useSupervisors` and `engine` to cluster-level compaction dynamic config
- Remove unused field `useAutoScaleSlots`
apache#17802 reverted a retry of failed segment publish actions.

This patch attempts to address the original issue by retrying the segment publish task actions
on the client (i.e. task) side without holding any locks so that other transactions are not blocked.
Changes

    Add retries to TransactionalSegmentPublisher
    Add field retryable to SegmentPublishResult
    Remove class DataStoreMetadataUpdateResult and use SegmentPublishResult instead
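The retry approach described above can be sketched like this. It is a hedged illustration, not Druid's actual code: `PublishResult` is a simplified stand-in for `SegmentPublishResult` with its new `retryable` flag, and the retry loop runs on the task (client) side with no locks held between attempts, so other transactions are not blocked.

```java
import java.util.function.Supplier;

// Hypothetical sketch of client-side publish retries.
class PublishRetrySketch {
  // Stand-in for SegmentPublishResult with the new `retryable` field.
  record PublishResult(boolean success, boolean retryable) {}

  // Retry only while the failure is marked retryable (e.g. a transient
  // metadata-store conflict), up to maxAttempts total attempts.
  static PublishResult publishWithRetries(Supplier<PublishResult> publishAction, int maxAttempts) {
    PublishResult result = publishAction.get();
    int attempts = 1;
    while (!result.success() && result.retryable() && attempts < maxAttempts) {
      result = publishAction.get();
      attempts++;
    }
    return result;
  }
}
```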
Add the capability to set Historicals into a turbo loading mode,
to focus on loading segments at the cost of query performance.

Context
--------
Currently, when a new Historical is started, it initially uses a bootstrap thread pool.
It uses this pool to load any existing cached segments and broadcast segments.
Once it has loaded the segments from both of these sources, the Historical switches to a smaller thread pool
and begins to serve queries.

In certain cases, it would be useful to have the Historical switch back to this bootstrap mode
and focus on loading segments, either to continue loading the initial non-bootstrap segments
or to catch up on assigned segments.

This PR adds a coordinator dynamic config that allows servers to be configured to use
the larger bootstrap threadpool to load segments faster.

Changes
---------
- Added a new dynamic coordinator configuration, `turboLoadingNodes`.
- Ignore `druid.coordinator.loadqueuepeon.http.batchSize` for servers in `turboLoadingNodes`
- Add an API on the Historical to return its loading capabilities, i.e. the number of loading threads in normal and turbo modes
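The decision the coordinator would make per server can be sketched as below. The names mirror the PR description (`turboLoadingNodes`, the loading-capabilities report), but the types and method are illustrative assumptions, not Druid's API.

```java
import java.util.Set;

// Hypothetical sketch of picking a Historical's loading thread count based on
// the `turboLoadingNodes` dynamic config.
class TurboLoadingSketch {
  // What the new Historical API would report: thread counts for each mode.
  record LoadingCapabilities(int numLoadingThreads, int numTurboLoadingThreads) {}

  // A server listed in `turboLoadingNodes` loads with the larger
  // bootstrap-sized pool; all other servers keep the normal pool.
  static int effectiveLoadingThreads(
      String serverHostAndPort,
      Set<String> turboLoadingNodes,
      LoadingCapabilities caps
  ) {
    return turboLoadingNodes.contains(serverHostAndPort)
           ? caps.numTurboLoadingThreads()
           : caps.numLoadingThreads();
  }
}
```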
…esult cache (apache#17823)

* Fix resource leak for GroupBy query merge buffer when matching the result cache

* Fix resource leak for GroupBy query merge buffer when matching the result cache

* Add test

* Add test

* Add comment

* Add test
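The leak pattern this commit fixes can be sketched with simplified stand-ins for Druid's pooled merge buffers: the buffer taken from the pool must be returned on every code path, including the early-return path where a cached result is matched and the merge work is skipped. The classes below are hypothetical, not Druid's real resource holders.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch of releasing a pooled merge buffer on all paths.
class MergeBufferLeakSketch {
  static final AtomicInteger checkedOut = new AtomicInteger();

  static class MergeBufferHolder implements AutoCloseable {
    MergeBufferHolder() { checkedOut.incrementAndGet(); }            // take from pool
    @Override public void close() { checkedOut.decrementAndGet(); }  // return to pool
  }

  static String runGroupBy(boolean cacheHit) {
    // try-with-resources releases the buffer even on the cache-hit early return,
    // the path that would otherwise leak it.
    try (MergeBufferHolder buffer = new MergeBufferHolder()) {
      if (cacheHit) {
        return "cached result";
      }
      return "merged result";
    }
  }
}
```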
Changes
---------
- Add field `loadingMode` to `SegmentChangeStatus`
- Include loading mode in `DataSegmentChangeResponse`
- Include loading mode in the `description` of metrics emitted from `HttpLoadQueuePeon`
- Add simulation test to verify loading mode metrics

@MethodSource("data")
@ParameterizedTest(name = "{index}:with context {0}")
public void testInsertOnFoo1NoDimensionsWithLimit(String contextName, Map<String, Object> context)

Check notice

Code scanning / CodeQL

Useless parameter Note test

The parameter 'contextName' is never used.

Copilot Autofix


To fix the problem, we should remove the unused contextName parameter from the testInsertOnFoo1NoDimensionsWithLimit method. This will simplify the method signature and eliminate the unnecessary parameter. We need to ensure that the method still functions correctly without this parameter.

Suggested changeset 1
extensions-core/multi-stage-query/src/test/java/org/apache/druid/msq/exec/MSQInsertTest.java

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/extensions-core/multi-stage-query/src/test/java/org/apache/druid/msq/exec/MSQInsertTest.java b/extensions-core/multi-stage-query/src/test/java/org/apache/druid/msq/exec/MSQInsertTest.java
--- a/extensions-core/multi-stage-query/src/test/java/org/apache/druid/msq/exec/MSQInsertTest.java
+++ b/extensions-core/multi-stage-query/src/test/java/org/apache/druid/msq/exec/MSQInsertTest.java
@@ -1610,3 +1610,3 @@
   @ParameterizedTest(name = "{index}:with context {0}")
-  public void testInsertOnFoo1NoDimensionsWithLimit(String contextName, Map<String, Object> context)
+  public void testInsertOnFoo1NoDimensionsWithLimit(Map<String, Object> context)
   {
EOF
@Path("/loadCapabilities")
@Produces({MediaType.APPLICATION_JSON, SmileMediaTypes.APPLICATION_JACKSON_SMILE})
public Response getSegmentLoadingCapabilities(
@Context final HttpServletRequest req

Check notice

Code scanning / CodeQL

Useless parameter Note

The parameter 'req' is never used.

Copilot Autofix


To fix the problem, we need to remove the unused parameter req from the getSegmentLoadingCapabilities method. This involves:

  • Removing the @Context final HttpServletRequest req parameter from the method signature.
  • Ensuring that the method still functions correctly without this parameter.

No additional methods, imports, or definitions are needed to implement this change.

Suggested changeset 1
server/src/main/java/org/apache/druid/server/http/SegmentListerResource.java

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/server/src/main/java/org/apache/druid/server/http/SegmentListerResource.java b/server/src/main/java/org/apache/druid/server/http/SegmentListerResource.java
--- a/server/src/main/java/org/apache/druid/server/http/SegmentListerResource.java
+++ b/server/src/main/java/org/apache/druid/server/http/SegmentListerResource.java
@@ -335,5 +335,3 @@
   @Produces({MediaType.APPLICATION_JSON, SmileMediaTypes.APPLICATION_JACKSON_SMILE})
-  public Response getSegmentLoadingCapabilities(
-      @Context final HttpServletRequest req
-  )
+  public Response getSegmentLoadingCapabilities()
   {
EOF
= verifyAndGetPayload(resource.getCompactionConfig(), DruidCompactionConfig.class);

Response response = resource.setCompactionTaskLimit(0.5, 9, true, mockHttpServletRequest);
Response response = resource.setCompactionTaskLimit(0.5, 9, mockHttpServletRequest);

Check notice

Code scanning / CodeQL

Deprecated method or constructor invocation Note test

Invoking CoordinatorCompactionConfigsResource.setCompactionTaskLimit should be avoided because it has been deprecated.

Copilot Autofix


To fix the problem, we need to replace the usage of the deprecated method setCompactionTaskLimit with the recommended alternative method. We should look for the alternative method in the CoordinatorCompactionConfigsResource class or its documentation. If an alternative method is found, we will replace the deprecated method call with the new method call, ensuring that the functionality remains the same.

Suggested changeset 1
server/src/test/java/org/apache/druid/server/http/CoordinatorCompactionConfigsResourceTest.java

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/server/src/test/java/org/apache/druid/server/http/CoordinatorCompactionConfigsResourceTest.java b/server/src/test/java/org/apache/druid/server/http/CoordinatorCompactionConfigsResourceTest.java
--- a/server/src/test/java/org/apache/druid/server/http/CoordinatorCompactionConfigsResourceTest.java
+++ b/server/src/test/java/org/apache/druid/server/http/CoordinatorCompactionConfigsResourceTest.java
@@ -138,3 +138,6 @@
 
-    Response response = resource.setCompactionTaskLimit(0.5, 9, mockHttpServletRequest);
+    Response response = resource.updateClusterCompactionConfig(
+        new ClusterCompactionConfig(0.5, 9, null, defaultConfig.isUseSupervisors(), defaultConfig.getEngine()),
+        mockHttpServletRequest
+    );
     verifyStatus(Response.Status.OK, response);
EOF
@kgyrtkirk kgyrtkirk merged commit 5cc79ab into master Mar 25, 2025
74 of 76 checks passed
