Compactor uses unreasonably high amounts of disk space #8697
Replies: 1 comment 1 reply
You are correct: we could indeed avoid downloading anything to disk and instead stream from object storage. Reading files off disk made the implementation easier, so that approach was chosen a long time ago; there's also this very old issue #3416.

Prometheus runs local compaction based on the retention you have set; IIRC local compaction only happens if retention is longer than 3 or 4 days. We uploaded only compaction level 1 blocks so that data would be available as soon as possible in case of incidents. The newest Thanos versions also support running with Prometheus local compaction enabled; previously, Prometheus could have started compacting blocks locally before Thanos was able to upload them.

For the compactor's progress, you can check the "todo" metrics. There is a loop that constantly plans the remaining work, and through those metrics you can deduce how much work it still needs to do.
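As a small illustration of checking those "todo" metrics programmatically: the sketch below parses a Prometheus text exposition (as served on the compactor's `/metrics` endpoint) and pulls out the progress gauges. The metric names are taken from recent Thanos releases and should be verified against your version's actual `/metrics` output; the sample payload and its values are made up for illustration.

```python
# Sketch: extract the compactor's "todo" progress gauges from a Prometheus
# text exposition. Metric names are assumed from recent Thanos releases;
# verify them against your compactor's /metrics endpoint.
TODO_METRICS = (
    "thanos_compact_todo_compactions",
    "thanos_compact_todo_compaction_blocks",
    "thanos_compact_todo_downsample_blocks",
    "thanos_compact_todo_deletion_blocks",
)

def todo_progress(exposition: str) -> dict[str, float]:
    """Return {metric_name: value} for any todo gauges found in the text."""
    out = {}
    for line in exposition.splitlines():
        if line.startswith("#"):          # skip HELP/TYPE comment lines
            continue
        name, _, value = line.partition(" ")
        if name in TODO_METRICS:
            out[name] = float(value)
    return out

# Illustrative sample (values made up); in practice, fetch the text from
# http://<compactor>:10902/metrics with curl or urllib.
sample = """\
# TYPE thanos_compact_todo_compactions gauge
thanos_compact_todo_compactions 3
thanos_compact_todo_compaction_blocks 27
"""
print(todo_progress(sample))
```

Watching these gauges trend toward zero over successive planning loops is a simple way to tell whether the compactor is actually making progress or repeatedly halting on the same group.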
We switched away from a pure Prometheus setup (on block storage) to primarily save block storage space.
But the Compactor seems to need more and more disk space. At first 70GB was sufficient; over the following weeks it halted again and again, and I just went from 200GB to 250GB, which it consumed quickly before halting again.
It runs (via the Helm Chart) with
The largest folder (the 01xxxxxxx hashed top-level prefixes) in the object store is 112GB; as I understand it, that is the unit of data the compactor works on at a time. Since it's called a "compactor", not an "expander", the compacted output should be smaller than the input, so in theory 250GB should be sufficient.
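One caveat with that reasoning: a compaction run typically downloads all source blocks of a group *and* writes the merged output block locally before uploading it, so peak scratch usage is roughly sources plus output, not just sources. The sketch below is a rough rule of thumb, not official Thanos sizing guidance; the 2x factor and headroom value are assumptions for illustration.

```python
# Rough sizing sketch (a rule of thumb, not official guidance):
# a compaction run holds the downloaded source blocks of one group plus the
# merged output block on disk at the same time. For data that deduplicates
# poorly, the output can approach the input size, so ~2x the largest group
# (plus some headroom) is a safer floor than 1x.
def min_compactor_disk_gb(largest_group_gb: float, headroom: float = 0.2) -> float:
    """Conservative lower bound on compactor scratch space, in GB."""
    peak = largest_group_gb * 2            # sources + merged output
    return round(peak * (1 + headroom), 1)

print(min_compactor_disk_gb(112))          # largest prefix observed above
```

Under these assumptions a 112GB group would want roughly 270GB of scratch space, which is consistent with 250GB still not being enough.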
I have read #7198 and #7197, but found nothing that could help me.
I took a look at one "run" after resuming, with approximate timestamps:
So my questions are:
Thanks!