---
id: automatic-compaction-api
title: Automatic compaction API
sidebar_label: Automatic compaction
---
import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem';
This topic describes the status and configuration API endpoints for automatic compaction using Coordinator duties in Apache Druid. You can configure automatic compaction in the Druid web console or API.
:::info[Experimental]
Instead of the automatic compaction API, you can use the supervisor API to submit auto-compaction jobs using compaction supervisors. For more information, see Auto-compaction using compaction supervisors.
:::
In this topic, http://ROUTER_IP:ROUTER_PORT is a placeholder for your Router service address and port. Replace it with the information for your deployment. For example, use http://localhost:8888 for quickstart deployments.
Creates or updates the automatic compaction configuration for a datasource. Pass the automatic compaction as a JSON object in the request body.
The automatic compaction configuration requires only the dataSource property. Druid fills all other properties with default values if not specified. See Automatic compaction dynamic configuration for configuration details.
Note that this endpoint returns an HTTP 200 OK message code even if the datasource name does not exist.
POST /druid/coordinator/v1/config/compaction
Successfully submitted auto compaction configuration
The following example creates an automatic compaction configuration for the datasource wikipedia_hour, which was ingested with HOUR segment granularity. This automatic compaction configuration performs compaction on wikipedia_hour, resulting in compacted segments that represent a day interval of data.
In this example:
- `wikipedia_hour` is a datasource with `HOUR` segment granularity.
- `skipOffsetFromLatest` is set to `PT0S`, meaning that no data is skipped.
- `partitionsSpec` is set to the default `dynamic`, allowing Druid to dynamically determine the optimal partitioning strategy.
- `type` is set to `index_parallel`, meaning that parallel indexing is used.
- `segmentGranularity` is set to `DAY`, meaning that each compacted segment contains a day of data.
curl "http://ROUTER_IP:ROUTER_PORT/druid/coordinator/v1/config/compaction" \
--header 'Content-Type: application/json' \
--data '{
"dataSource": "wikipedia_hour",
"skipOffsetFromLatest": "PT0S",
"tuningConfig": {
"partitionsSpec": {
"type": "dynamic"
},
"type": "index_parallel"
},
"granularitySpec": {
"segmentGranularity": "DAY"
}
}'
POST /druid/coordinator/v1/config/compaction HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT
Content-Type: application/json
Content-Length: 281
{
"dataSource": "wikipedia_hour",
"skipOffsetFromLatest": "PT0S",
"tuningConfig": {
"partitionsSpec": {
"type": "dynamic"
},
"type": "index_parallel"
},
"granularitySpec": {
"segmentGranularity": "DAY"
}
}
A successful request returns an HTTP 200 OK message code and an empty response body.
Removes the automatic compaction configuration for a datasource. This updates the compaction status of the datasource to "Not enabled."
DELETE /druid/coordinator/v1/config/compaction/{dataSource}
Successfully deleted automatic compaction configuration
Datasource does not have automatic compaction or invalid datasource name
curl --request DELETE "http://ROUTER_IP:ROUTER_PORT/druid/coordinator/v1/config/compaction/wikipedia_hour"
DELETE /druid/coordinator/v1/config/compaction/wikipedia_hour HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT
A successful request returns an HTTP 200 OK message code and an empty response body.
:::info
This API is deprecated. Use Update cluster-level compaction config instead.
:::
Updates the capacity for compaction tasks. The minimum number of compaction tasks is 1 and the maximum is 2147483647.
Note that while the maximum number of compaction tasks can theoretically be set to 2147483647, the practical limit is determined by the available cluster capacity and by the `ratio` parameter, which by default caps compaction at 10% of the cluster's total task slots.
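The interplay between the ratio cap and the absolute maximum can be sketched as follows. This is an illustration of the rule described above, not Druid source code:

```python
# Illustrative sketch of how the compaction task-slot limit is derived
# from the ratio and max parameters described above. This mirrors the
# documented rule, not the actual Druid implementation.
def effective_compaction_slots(total_task_slots: int,
                               ratio: float = 0.1,
                               max_slots: int = 2147483647) -> int:
    """Task slots available to compaction, given cluster capacity."""
    by_ratio = int(total_task_slots * ratio)
    # At least one task slot is always available for compaction.
    return max(1, min(by_ratio, max_slots))

print(effective_compaction_slots(100))                 # 10: capped by ratio
print(effective_compaction_slots(100, max_slots=5))    # 5: capped by max
```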
POST /druid/coordinator/v1/config/compaction/taskslots
To limit the maximum number of compaction tasks, use the optional query parameters ratio and max:
* `ratio` (optional)
  * Type: Float
  * Default: 0.1
  * Limits the ratio of the total task slots to compaction task slots.
* `max` (optional)
  * Type: Int
  * Default: 2147483647
  * Limits the maximum number of task slots for compaction tasks.
Successfully updated compaction configuration
Invalid max value
curl --request POST "http://ROUTER_IP:ROUTER_PORT/druid/coordinator/v1/config/compaction/taskslots?ratio=0.2&max=250000"
POST /druid/coordinator/v1/config/compaction/taskslots?ratio=0.2&max=250000 HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT
A successful request returns an HTTP 200 OK message code and an empty response body.
Retrieves all automatic compaction configurations. Returns a compactionConfigs object containing the active automatic compaction configurations of all datasources.
You can use this endpoint to retrieve compactionTaskSlotRatio and maxCompactionTaskSlots values for managing resource allocation of compaction tasks.
GET /druid/coordinator/v1/config/compaction
Successfully retrieved automatic compaction configurations
curl "http://ROUTER_IP:ROUTER_PORT/druid/coordinator/v1/config/compaction"
GET /druid/coordinator/v1/config/compaction HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT
View the response
{
"compactionConfigs": [
{
"dataSource": "wikipedia_hour",
"taskPriority": 25,
"inputSegmentSizeBytes": 100000000000000,
"maxRowsPerSegment": null,
"skipOffsetFromLatest": "PT0S",
"tuningConfig": {
"maxRowsInMemory": null,
"appendableIndexSpec": null,
"maxBytesInMemory": null,
"maxTotalRows": null,
"splitHintSpec": null,
"partitionsSpec": {
"type": "dynamic",
"maxRowsPerSegment": 5000000,
"maxTotalRows": null
},
"indexSpec": null,
"indexSpecForIntermediatePersists": null,
"maxPendingPersists": null,
"pushTimeout": null,
"segmentWriteOutMediumFactory": null,
"maxNumConcurrentSubTasks": null,
"maxRetry": null,
"taskStatusCheckPeriodMs": null,
"chatHandlerTimeout": null,
"chatHandlerNumRetries": null,
"maxNumSegmentsToMerge": null,
"totalNumMergeTasks": null,
"maxColumnsToMerge": null,
"type": "index_parallel",
"forceGuaranteedRollup": false
},
"granularitySpec": {
"segmentGranularity": "DAY",
"queryGranularity": null,
"rollup": null
},
"dimensionsSpec": null,
"metricsSpec": null,
"transformSpec": null,
"ioConfig": null,
"taskContext": null
},
{
"dataSource": "wikipedia",
"taskPriority": 25,
"inputSegmentSizeBytes": 100000000000000,
"maxRowsPerSegment": null,
"skipOffsetFromLatest": "PT0S",
"tuningConfig": {
"maxRowsInMemory": null,
"appendableIndexSpec": null,
"maxBytesInMemory": null,
"maxTotalRows": null,
"splitHintSpec": null,
"partitionsSpec": {
"type": "dynamic",
"maxRowsPerSegment": 5000000,
"maxTotalRows": null
},
"indexSpec": null,
"indexSpecForIntermediatePersists": null,
"maxPendingPersists": null,
"pushTimeout": null,
"segmentWriteOutMediumFactory": null,
"maxNumConcurrentSubTasks": null,
"maxRetry": null,
"taskStatusCheckPeriodMs": null,
"chatHandlerTimeout": null,
"chatHandlerNumRetries": null,
"maxNumSegmentsToMerge": null,
"totalNumMergeTasks": null,
"maxColumnsToMerge": null,
"type": "index_parallel",
"forceGuaranteedRollup": false
},
"granularitySpec": {
"segmentGranularity": "DAY",
"queryGranularity": null,
"rollup": null
},
"dimensionsSpec": null,
"metricsSpec": null,
"transformSpec": null,
"ioConfig": null,
"taskContext": null
}
],
"compactionTaskSlotRatio": 0.1,
  "maxCompactionTaskSlots": 2147483647
}
Retrieves the automatic compaction configuration for a datasource.
GET /druid/coordinator/v1/config/compaction/{dataSource}
Successfully retrieved configuration for datasource
Invalid datasource or datasource does not have automatic compaction enabled
The following example retrieves the automatic compaction configuration for datasource wikipedia_hour.
curl "http://ROUTER_IP:ROUTER_PORT/druid/coordinator/v1/config/compaction/wikipedia_hour"
GET /druid/coordinator/v1/config/compaction/wikipedia_hour HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT
View the response
{
"dataSource": "wikipedia_hour",
"taskPriority": 25,
"inputSegmentSizeBytes": 100000000000000,
"maxRowsPerSegment": null,
"skipOffsetFromLatest": "PT0S",
"tuningConfig": {
"maxRowsInMemory": null,
"appendableIndexSpec": null,
"maxBytesInMemory": null,
"maxTotalRows": null,
"splitHintSpec": null,
"partitionsSpec": {
"type": "dynamic",
"maxRowsPerSegment": 5000000,
"maxTotalRows": null
},
"indexSpec": null,
"indexSpecForIntermediatePersists": null,
"maxPendingPersists": null,
"pushTimeout": null,
"segmentWriteOutMediumFactory": null,
"maxNumConcurrentSubTasks": null,
"maxRetry": null,
"taskStatusCheckPeriodMs": null,
"chatHandlerTimeout": null,
"chatHandlerNumRetries": null,
"maxNumSegmentsToMerge": null,
"totalNumMergeTasks": null,
"maxColumnsToMerge": null,
"type": "index_parallel",
"forceGuaranteedRollup": false
},
"granularitySpec": {
"segmentGranularity": "DAY",
"queryGranularity": null,
"rollup": null
},
"dimensionsSpec": null,
"metricsSpec": null,
"transformSpec": null,
"ioConfig": null,
"taskContext": null
}
Retrieves the history of the automatic compaction configuration for a datasource. Returns an empty list if the datasource does not exist or there is no compaction history for the datasource.
The response contains a list of objects with the following keys:
- `globalConfig`: A JSON object containing the automatic compaction configuration that applies to the entire cluster.
- `compactionConfig`: A JSON object containing the automatic compaction configuration for the datasource.
- `auditInfo`: A JSON object containing information about the change made, such as `author`, `comment`, or `ip`.
- `auditTime`: The date and time when the change was made.
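For example, a small helper (illustrative only, not part of Druid) can pick the most recent entry out of the returned history by `auditTime`:

```python
from datetime import datetime

# Illustrative helper: select the most recent history entry by auditTime.
# The response format is described above; this helper is not a Druid API.
def latest_change(history: list) -> dict:
    return max(history,
               key=lambda e: datetime.fromisoformat(
                   e["auditTime"].replace("Z", "+00:00")))

history = [
    {"auditTime": "2023-07-31T18:15:19.302Z",
     "compactionConfig": {"skipOffsetFromLatest": "P1D"}},
    {"auditTime": "2023-07-31T18:16:16.362Z",
     "compactionConfig": {"skipOffsetFromLatest": "PT0S"}},
]
print(latest_change(history)["compactionConfig"]["skipOffsetFromLatest"])  # PT0S
```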
GET /druid/coordinator/v1/config/compaction/{dataSource}/history
* `interval` (optional)
  * Type: ISO-8601 interval
  * Limits the results within a specified interval. Use `/` as the delimiter for the interval string.
* `count` (optional)
  * Type: Int
  * Limits the number of results.
Successfully retrieved configuration history
Invalid count value
curl "http://ROUTER_IP:ROUTER_PORT/druid/coordinator/v1/config/compaction/wikipedia_hour/history"
GET /druid/coordinator/v1/config/compaction/wikipedia_hour/history HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT
View the response
[
{
"globalConfig": {
"compactionTaskSlotRatio": 0.1,
"maxCompactionTaskSlots": 2147483647,
"compactionPolicy": {
"type": "newestSegmentFirst",
"priorityDatasource": "wikipedia"
},
"useSupervisors": true,
"engine": "native"
},
"compactionConfig": {
"dataSource": "wikipedia_hour",
"taskPriority": 25,
"inputSegmentSizeBytes": 100000000000000,
"maxRowsPerSegment": null,
"skipOffsetFromLatest": "P1D",
"tuningConfig": null,
"granularitySpec": {
"segmentGranularity": "DAY",
"queryGranularity": null,
"rollup": null
},
"dimensionsSpec": null,
"metricsSpec": null,
"transformSpec": null,
"ioConfig": null,
"taskContext": null
},
"auditInfo": {
"author": "",
"comment": "",
"ip": "127.0.0.1"
},
"auditTime": "2023-07-31T18:15:19.302Z"
},
{
"globalConfig": {
"compactionTaskSlotRatio": 0.1,
"maxCompactionTaskSlots": 2147483647,
"compactionPolicy": {
"type": "newestSegmentFirst"
},
"useSupervisors": false,
"engine": "native"
},
"compactionConfig": {
"dataSource": "wikipedia_hour",
"taskPriority": 25,
"inputSegmentSizeBytes": 100000000000000,
"maxRowsPerSegment": null,
"skipOffsetFromLatest": "PT0S",
"tuningConfig": {
"maxRowsInMemory": null,
"appendableIndexSpec": null,
"maxBytesInMemory": null,
"maxTotalRows": null,
"splitHintSpec": null,
"partitionsSpec": {
"type": "dynamic",
"maxRowsPerSegment": 5000000,
"maxTotalRows": null
},
"indexSpec": null,
"indexSpecForIntermediatePersists": null,
"maxPendingPersists": null,
"pushTimeout": null,
"segmentWriteOutMediumFactory": null,
"maxNumConcurrentSubTasks": null,
"maxRetry": null,
"taskStatusCheckPeriodMs": null,
"chatHandlerTimeout": null,
"chatHandlerNumRetries": null,
"maxNumSegmentsToMerge": null,
"totalNumMergeTasks": null,
"maxColumnsToMerge": null,
"type": "index_parallel",
"forceGuaranteedRollup": false
},
"granularitySpec": {
"segmentGranularity": "DAY",
"queryGranularity": null,
"rollup": null
},
"dimensionsSpec": null,
"metricsSpec": null,
"transformSpec": null,
"ioConfig": null,
"taskContext": null
},
"auditInfo": {
"author": "",
"comment": "",
"ip": "127.0.0.1"
},
"auditTime": "2023-07-31T18:16:16.362Z"
}
]
Returns the total size of segments awaiting compaction for a given datasource. Returns a 404 response if the datasource does not have automatic compaction enabled.
GET /druid/coordinator/v1/compaction/progress?dataSource={dataSource}
* `dataSource` (required)
  * Type: String
  * Name of the datasource for this status information.
Successfully retrieved segment size awaiting compaction
Unknown datasource name or datasource does not have automatic compaction enabled
The following example retrieves the remaining segments to be compacted for datasource wikipedia_hour.
curl "http://ROUTER_IP:ROUTER_PORT/druid/coordinator/v1/compaction/progress?dataSource=wikipedia_hour"
GET /druid/coordinator/v1/compaction/progress?dataSource=wikipedia_hour HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT
View the response
{
"remainingSegmentSize": 7615837
}
Retrieves an array of latestStatus objects representing the status and statistics from the latest automatic compaction run for all datasources with automatic compaction enabled.
The latestStatus object has the following properties:
- `dataSource`: Name of the datasource for this status information.
- `scheduleStatus`: Automatic compaction scheduling status. Possible values are `NOT_ENABLED` and `RUNNING`. Returns `RUNNING` if the datasource has an active automatic compaction configuration submitted. Otherwise, returns `NOT_ENABLED`.
- `bytesAwaitingCompaction`: Total bytes of this datasource waiting to be compacted by automatic compaction. Only considers intervals and segments that are eligible for automatic compaction.
- `bytesCompacted`: Total bytes of this datasource that are already compacted with the spec set in the automatic compaction configuration.
- `bytesSkipped`: Total bytes of this datasource that are skipped by automatic compaction because they are not eligible.
- `segmentCountAwaitingCompaction`: Total number of segments of this datasource waiting to be compacted by automatic compaction. Only considers intervals and segments that are eligible for automatic compaction.
- `segmentCountCompacted`: Total number of segments of this datasource that are already compacted with the spec set in the automatic compaction configuration.
- `segmentCountSkipped`: Total number of segments of this datasource that are skipped by automatic compaction because they are not eligible.
- `intervalCountAwaitingCompaction`: Total number of intervals of this datasource waiting to be compacted by automatic compaction. Only considers intervals and segments that are eligible for automatic compaction.
- `intervalCountCompacted`: Total number of intervals of this datasource that are already compacted with the spec set in the automatic compaction configuration.
- `intervalCountSkipped`: Total number of intervals of this datasource that are skipped by automatic compaction because they are not eligible.
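As a quick illustration (a hypothetical helper, not part of Druid), the byte counters above can be combined into a completion fraction for eligible data:

```python
# Hypothetical helper combining the latestStatus byte counters described
# above into a completion fraction for eligible (non-skipped) data.
def compaction_progress(status: dict) -> float:
    done = status["bytesCompacted"]
    pending = status["bytesAwaitingCompaction"]
    total = done + pending
    # When nothing is eligible or pending, treat the datasource as done.
    return 1.0 if total == 0 else done / total

status = {"bytesCompacted": 5998634, "bytesAwaitingCompaction": 0,
          "bytesSkipped": 0}
print(compaction_progress(status))  # 1.0
```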
GET /druid/coordinator/v1/compaction/status
* `dataSource` (optional)
  * Type: String
  * Filter the result by name of a specific datasource.
Successfully retrieved latestStatus object
curl "http://ROUTER_IP:ROUTER_PORT/druid/coordinator/v1/compaction/status"
GET /druid/coordinator/v1/compaction/status HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT
View the response
{
"latestStatus": [
{
"dataSource": "wikipedia_api",
"scheduleStatus": "RUNNING",
"bytesAwaitingCompaction": 0,
"bytesCompacted": 0,
"bytesSkipped": 64133616,
"segmentCountAwaitingCompaction": 0,
"segmentCountCompacted": 0,
"segmentCountSkipped": 8,
"intervalCountAwaitingCompaction": 0,
"intervalCountCompacted": 0,
"intervalCountSkipped": 1
},
{
"dataSource": "wikipedia_hour",
"scheduleStatus": "RUNNING",
"bytesAwaitingCompaction": 0,
"bytesCompacted": 5998634,
"bytesSkipped": 0,
"segmentCountAwaitingCompaction": 0,
"segmentCountCompacted": 1,
"segmentCountSkipped": 0,
"intervalCountAwaitingCompaction": 0,
"intervalCountCompacted": 1,
"intervalCountSkipped": 0
}
]
}
This section describes the new unified compaction APIs, which can be used regardless of whether compaction supervisors are enabled (that is, whether useSupervisors is true) in the compaction dynamic config.
- If compaction supervisors are disabled, the APIs read or write the compaction dynamic config, same as the Coordinator-based compaction APIs above.
- If compaction supervisors are enabled, the APIs read or write the corresponding compaction supervisors. In conjunction with the APIs described below, the supervisor APIs may also be used to read or write the compaction supervisors as they offer greater flexibility and also serve information related to supervisor and task statuses.
Updates cluster-level configuration for compaction tasks which applies to all datasources, unless explicitly overridden in the datasource compaction config. This includes the following fields:
| Config | Description | Default value |
|---|---|---|
| `compactionTaskSlotRatio` | Ratio of the number of slots taken up by compaction tasks to the number of total task slots across all workers. | 0.1 |
| `maxCompactionTaskSlots` | Maximum number of task slots that can be taken up by compaction tasks and sub-tasks. The minimum number of task slots available for compaction is 1. When using the MSQ engine, or the native engine with range partitioning, a single compaction job occupies more than one task slot; in this case, the minimum is 2 so that at least one compaction job can always run in the cluster. | 2147483647 (i.e. total task slots) |
| `compactionPolicy` | Policy to choose intervals for compaction. Supported policies are Newest segment first, Most fragmented first, and Fixed interval order. | Newest segment first |
| `useSupervisors` | Whether compaction should run on the Overlord using supervisors instead of Coordinator duties. | false |
| `engine` | Engine used for running compaction tasks, unless overridden in the datasource-level compaction config. Possible values are `native` and `msq`. The `msq` engine can be used for compaction only if `useSupervisors` is true. | `native` |
| `storeCompactionStatePerSegment` | Takes effect only if `useSupervisors` is true. Whether to persist the full compaction state in segment metadata. When true (the default), compaction state is stored in both the segment metadata and the indexing states table, which is historically how Druid has worked. When false, only a fingerprint reference is stored in the segment metadata, reducing storage overhead in the segments table; the actual compaction state is stored in the indexing states table and can be referenced with the fingerprint. Eventually this configuration will be removed and all compaction will use the fingerprint method only; it exists so that operators can opt into this future pattern early. WARNING: if you set this to false and then compact data, rolling back to a Druid version that predates indexing state fingerprinting (earlier than Druid 37) will result in missing compaction states and trigger compaction on segments that may already be compacted. | true |
| Field | Description | Default value |
|---|---|---|
| `type` | This must always be `newestSegmentFirst`. | |
| `priorityDatasource` | Datasource to prioritize for compaction. The intervals of this datasource are chosen for compaction before the intervals of any other datasource. Within this datasource, the intervals are prioritized based on the chosen compaction policy. | None |
This experimental policy prioritizes compaction of intervals with the largest number of small uncompacted segments. It favors cluster stability by reducing segment count over performance of queries on newer intervals.
| Field | Description | Default value |
|---|---|---|
| `type` | This must always be `mostFragmentedFirst`. | |
| `priorityDatasource` | Datasource to prioritize for compaction. The intervals of this datasource are chosen for compaction before the intervals of any other datasource. Within this datasource, the intervals are prioritized based on the chosen compaction policy. | None |
| `minUncompactedCount` | Minimum number of uncompacted segments that must be present in an interval to make it eligible for compaction. Must be greater than 0. | 100 |
| `minUncompactedBytes` | Minimum total bytes of uncompacted segments that must be present in an interval to make it eligible for compaction. Human-readable byte format (e.g., "10MiB"). | 10 MiB |
| `maxAverageUncompactedBytesPerSegment` | Maximum average size of uncompacted segments in an interval eligible for compaction. Human-readable byte format (e.g., "2GiB"). | 2 GiB |
| `minUncompactedBytesPercentForFullCompaction` | Threshold percentage (0-100) of uncompacted bytes to total bytes below which minor compaction is eligible instead of full compaction. | 0 |
| `minUncompactedRowsPercentForFullCompaction` | Threshold percentage (0-100) of uncompacted rows to total rows below which minor compaction is eligible instead of full compaction. | 0 |
This policy specifies the datasources and intervals eligible for compaction and their order. It is primarily used for integration tests.
| Field | Description | Default value |
|---|---|---|
| `type` | This must always be `fixedIntervalOrder`. | |
| `eligibleCandidates` | List of datasource-interval pairs eligible for compaction. Each entry contains `datasource` (string) and `interval` (ISO-8601 interval) fields. Compaction processes candidates in the order specified. | None |
POST /druid/indexer/v1/compaction/config/cluster
Successfully updated compaction configuration
Invalid max value
curl --request POST "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/compaction/config/cluster" \
--header 'Content-Type: application/json' \
--data '{
"compactionTaskSlotRatio": 0.5,
"maxCompactionTaskSlots": 1500,
"compactionPolicy": {
"type": "newestSegmentFirst",
"priorityDatasource": "wikipedia"
},
"useSupervisors": true,
"engine": "msq"
}'
POST /druid/indexer/v1/compaction/config/cluster HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT
Content-Type: application/json
{
"compactionTaskSlotRatio": 0.5,
"maxCompactionTaskSlots": 1500,
"compactionPolicy": {
"type": "newestSegmentFirst",
"priorityDatasource": "wikipedia"
},
"useSupervisors": true,
"engine": "msq"
}
A successful request returns an HTTP 200 OK message code and an empty response body.
Retrieves cluster-level configuration for compaction tasks which applies to all datasources, unless explicitly overridden in the datasource compaction config. This includes all the fields listed in Update cluster-level compaction config.
GET /druid/indexer/v1/compaction/config/cluster
Successfully retrieved cluster compaction configuration
curl "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/compaction/config/cluster"
GET /druid/indexer/v1/compaction/config/cluster HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT
View the response
{
"compactionTaskSlotRatio": 0.5,
"maxCompactionTaskSlots": 1500,
"compactionPolicy": {
"type": "newestSegmentFirst",
"priorityDatasource": "wikipedia"
},
"useSupervisors": true,
"engine": "msq"
}
Retrieves all datasource compaction configurations.
GET /druid/indexer/v1/compaction/config/datasources
Successfully retrieved automatic compaction configurations
curl "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/compaction/config/datasources"
GET /druid/indexer/v1/compaction/config/datasources HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT
View the response
{
"compactionConfigs": [
{
"dataSource": "wikipedia_hour",
"taskPriority": 25,
"inputSegmentSizeBytes": 100000000000000,
"maxRowsPerSegment": null,
"skipOffsetFromLatest": "PT0S",
"tuningConfig": {
"maxRowsInMemory": null,
"appendableIndexSpec": null,
"maxBytesInMemory": null,
"maxTotalRows": null,
"splitHintSpec": null,
"partitionsSpec": {
"type": "dynamic",
"maxRowsPerSegment": 5000000,
"maxTotalRows": null
},
"indexSpec": null,
"indexSpecForIntermediatePersists": null,
"maxPendingPersists": null,
"pushTimeout": null,
"segmentWriteOutMediumFactory": null,
"maxNumConcurrentSubTasks": null,
"maxRetry": null,
"taskStatusCheckPeriodMs": null,
"chatHandlerTimeout": null,
"chatHandlerNumRetries": null,
"maxNumSegmentsToMerge": null,
"totalNumMergeTasks": null,
"maxColumnsToMerge": null,
"type": "index_parallel",
"forceGuaranteedRollup": false
},
"granularitySpec": {
"segmentGranularity": "DAY",
"queryGranularity": null,
"rollup": null
},
"dimensionsSpec": null,
"metricsSpec": null,
"transformSpec": null,
"ioConfig": null,
"taskContext": null
},
{
"dataSource": "wikipedia",
"taskPriority": 25,
"inputSegmentSizeBytes": 100000000000000,
"maxRowsPerSegment": null,
"skipOffsetFromLatest": "PT0S",
"tuningConfig": {
"maxRowsInMemory": null,
"appendableIndexSpec": null,
"maxBytesInMemory": null,
"maxTotalRows": null,
"splitHintSpec": null,
"partitionsSpec": {
"type": "dynamic",
"maxRowsPerSegment": 5000000,
"maxTotalRows": null
},
"indexSpec": null,
"indexSpecForIntermediatePersists": null,
"maxPendingPersists": null,
"pushTimeout": null,
"segmentWriteOutMediumFactory": null,
"maxNumConcurrentSubTasks": null,
"maxRetry": null,
"taskStatusCheckPeriodMs": null,
"chatHandlerTimeout": null,
"chatHandlerNumRetries": null,
"maxNumSegmentsToMerge": null,
"totalNumMergeTasks": null,
"maxColumnsToMerge": null,
"type": "index_parallel",
"forceGuaranteedRollup": false
},
"granularitySpec": {
"segmentGranularity": "DAY",
"queryGranularity": null,
"rollup": null
},
"dimensionsSpec": null,
"metricsSpec": null,
"transformSpec": null,
"ioConfig": null,
"taskContext": null
}
]
}
Retrieves the automatic compaction configuration for a datasource.
GET /druid/indexer/v1/compaction/config/datasources/{dataSource}
Successfully retrieved configuration for datasource
Invalid datasource or datasource does not have automatic compaction enabled
The following example retrieves the automatic compaction configuration for datasource wikipedia_hour.
curl "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/compaction/config/datasources/wikipedia_hour"
GET /druid/indexer/v1/compaction/config/datasources/wikipedia_hour HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT
View the response
{
"dataSource": "wikipedia_hour",
"taskPriority": 25,
"inputSegmentSizeBytes": 100000000000000,
"maxRowsPerSegment": null,
"skipOffsetFromLatest": "PT0S",
"tuningConfig": {
"maxRowsInMemory": null,
"appendableIndexSpec": null,
"maxBytesInMemory": null,
"maxTotalRows": null,
"splitHintSpec": null,
"partitionsSpec": {
"type": "dynamic",
"maxRowsPerSegment": 5000000,
"maxTotalRows": null
},
"indexSpec": null,
"indexSpecForIntermediatePersists": null,
"maxPendingPersists": null,
"pushTimeout": null,
"segmentWriteOutMediumFactory": null,
"maxNumConcurrentSubTasks": null,
"maxRetry": null,
"taskStatusCheckPeriodMs": null,
"chatHandlerTimeout": null,
"chatHandlerNumRetries": null,
"maxNumSegmentsToMerge": null,
"totalNumMergeTasks": null,
"maxColumnsToMerge": null,
"type": "index_parallel",
"forceGuaranteedRollup": false
},
"granularitySpec": {
"segmentGranularity": "DAY",
"queryGranularity": null,
"rollup": null
},
"dimensionsSpec": null,
"metricsSpec": null,
"transformSpec": null,
"ioConfig": null,
"taskContext": null
}
Creates or updates the automatic compaction configuration for a datasource. Pass the automatic compaction configuration as a JSON object in the request body.
The automatic compaction configuration requires only the dataSource property. Druid fills all other properties with default values if not specified. See Automatic compaction dynamic configuration for configuration details.
Note that this endpoint returns an HTTP 200 OK message code even if the datasource name does not exist.
POST /druid/indexer/v1/compaction/config/datasources/{dataSource}
Successfully submitted auto compaction configuration
The following example creates an automatic compaction configuration for the datasource wikipedia_hour, which was ingested with HOUR segment granularity. This automatic compaction configuration performs compaction on wikipedia_hour, resulting in compacted segments that represent a day interval of data.
In this example:
- `wikipedia_hour` is a datasource with `HOUR` segment granularity.
- `skipOffsetFromLatest` is set to `PT0S`, meaning that no data is skipped.
- `partitionsSpec` is set to the default `dynamic`, allowing Druid to dynamically determine the optimal partitioning strategy.
- `type` is set to `index_parallel`, meaning that parallel indexing is used.
- `segmentGranularity` is set to `DAY`, meaning that each compacted segment contains a day of data.
curl "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/compaction/config/datasources/wikipedia_hour" \
--header 'Content-Type: application/json' \
--data '{
"dataSource": "wikipedia_hour",
"skipOffsetFromLatest": "PT0S",
"tuningConfig": {
"partitionsSpec": {
"type": "dynamic"
},
"type": "index_parallel"
},
"granularitySpec": {
"segmentGranularity": "DAY"
}
}'
POST /druid/indexer/v1/compaction/config/datasources/wikipedia_hour HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT
Content-Type: application/json
Content-Length: 281
{
"dataSource": "wikipedia_hour",
"skipOffsetFromLatest": "PT0S",
"tuningConfig": {
"partitionsSpec": {
"type": "dynamic"
},
"type": "index_parallel"
},
"granularitySpec": {
"segmentGranularity": "DAY"
}
}
A successful request returns an HTTP 200 OK message code and an empty response body.
Removes the automatic compaction configuration for a datasource. This updates the compaction status of the datasource to "Not enabled."
DELETE /druid/indexer/v1/compaction/config/datasources/{dataSource}
Successfully deleted automatic compaction configuration
Datasource does not have automatic compaction or invalid datasource name
curl --request DELETE "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/compaction/config/datasources/wikipedia_hour"
DELETE /druid/indexer/v1/compaction/config/datasources/wikipedia_hour HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT
A successful request returns an HTTP 200 OK message code and an empty response body.
Retrieves an array of latestStatus objects representing the status and statistics from the latest automatic compaction run for all the datasources to which the user has read access.
The response payload is in the same format as Compaction status response.
GET /druid/indexer/v1/compaction/status/datasources
Successfully retrieved latestStatus object
curl "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/compaction/status/datasources"
GET /druid/indexer/v1/compaction/status/datasources HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT
View the response
{
"latestStatus": [
{
"dataSource": "wikipedia_api",
"scheduleStatus": "RUNNING",
"bytesAwaitingCompaction": 0,
"bytesCompacted": 0,
"bytesSkipped": 64133616,
"segmentCountAwaitingCompaction": 0,
"segmentCountCompacted": 0,
"segmentCountSkipped": 8,
"intervalCountAwaitingCompaction": 0,
"intervalCountCompacted": 0,
"intervalCountSkipped": 1
},
{
"dataSource": "wikipedia_hour",
"scheduleStatus": "RUNNING",
"bytesAwaitingCompaction": 0,
"bytesCompacted": 5998634,
"bytesSkipped": 0,
"segmentCountAwaitingCompaction": 0,
"segmentCountCompacted": 1,
"segmentCountSkipped": 0,
"intervalCountAwaitingCompaction": 0,
"intervalCountCompacted": 1,
"intervalCountSkipped": 0
}
]
}
Retrieves the status from the latest automatic compaction run for a datasource. The response payload is in the same format as Compaction status response, with zero or one entry.
GET /druid/indexer/v1/compaction/status/datasources/{dataSource}
Successfully retrieved latestStatus object
curl "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/compaction/status/datasources/wikipedia_hour"
GET /druid/indexer/v1/compaction/status/datasources/wikipedia_hour HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT
View the response
{
"latestStatus": [
{
"dataSource": "wikipedia_hour",
"scheduleStatus": "RUNNING",
"bytesAwaitingCompaction": 0,
"bytesCompacted": 5998634,
"bytesSkipped": 0,
"segmentCountAwaitingCompaction": 0,
"segmentCountCompacted": 1,
"segmentCountSkipped": 0,
"intervalCountAwaitingCompaction": 0,
"intervalCountCompacted": 1,
"intervalCountSkipped": 0
}
]
}