Merged
marcello33: LGTM but I'd have a look at the tests

Author: Thanks @marcello33. Yes, the changes are these ones. I already launched the tag since it was a minor change, so I assumed this review covered that one too. Some of the tests were failing before, but now I see some new ones are failing. I'll check it today.
marcello33 approved these changes Nov 10, 2025
avalkov approved these changes Nov 10, 2025

After our first simulations, we saw a big challenge in pruning and compacting the node's data while keeping it online.
As the image below shows (tracking CPU and memory), enabling compaction and pruning on the second start of a simulation led to a memory spike of 50 GB, and memory consumption afterwards stayed higher than in the usual scenario.
So the consequences were:
After some research, we realized the reasons were:

- `.Compact(nil, nil)`, which runs through the whole db when it should be split into small prefix ranges; addressed by the new `CompactSharded256`, `CompactPrefixHex256` and `CompactIntSharded` helpers, bumping `cometbft-db` from `0.14.1` to `0.14.1-polygon` (see the sketch after this list)
- `batch.Write()` instead of `batch.WriteSync()` plus some sleep, which does not let the files flush from L0 to lower levels
- `defer`s in a loop
- `1` instead of `initialHeight`

With this data we reached a solution with no spikes while compacting 4.7 million blocks on a heimdall mainnet node, as we can see in the image below.
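To make the direction of these fixes concrete, here is a minimal Go sketch, not the actual PR code: the helper names `compactSharded256` and `deleteRange` are hypothetical, and it assumes cometbft-db's `DB.Compact(start, end)`, `Batch.WriteSync()` and `Iterator` APIs. It splits compaction into 256 single-byte prefix shards instead of one whole-keyspace pass, flushes deletions with `WriteSync`, and closes batches explicitly instead of deferring inside the loop.

```go
package pruning

import (
	"time"

	dbm "github.com/cometbft/cometbft-db"
)

// compactSharded256 is a hypothetical stand-in for CompactSharded256: it
// compacts one single-byte key prefix at a time instead of calling
// db.Compact(nil, nil) over the whole keyspace, pausing between shards so
// the engine can flush L0 files down to lower levels.
func compactSharded256(db dbm.DB, pause time.Duration) error {
	for b := 0; b < 256; b++ {
		start := []byte{byte(b)}
		var end []byte // nil end on the last shard means "to the end of the keyspace"
		if b < 255 {
			end = []byte{byte(b + 1)} // exclusive upper bound of this shard
		}
		if err := db.Compact(start, end); err != nil {
			return err
		}
		time.Sleep(pause) // let background work settle between shards
	}
	return nil
}

// deleteRange sketches batched pruning: it flushes every batchSize deletions
// (batchSize must be > 0) with WriteSync, rather than the buffered Write,
// and closes each batch explicitly instead of stacking defers inside the loop.
func deleteRange(db dbm.DB, start, end []byte, batchSize int) error {
	it, err := db.Iterator(start, end)
	if err != nil {
		return err
	}
	defer it.Close()

	batch := db.NewBatch()
	n := 0
	for ; it.Valid(); it.Next() {
		if err := batch.Delete(it.Key()); err != nil {
			batch.Close()
			return err
		}
		n++
		if n%batchSize == 0 {
			if err := batch.WriteSync(); err != nil { // durable flush, not Write()
				batch.Close()
				return err
			}
			batch.Close() // explicit close; no defers piling up inside the loop
			batch = db.NewBatch()
		}
	}
	if err := it.Error(); err != nil {
		batch.Close()
		return err
	}
	if err := batch.WriteSync(); err != nil { // flush the final partial batch
		batch.Close()
		return err
	}
	return batch.Close()
}
```

The point of the sharding is that each `Compact` call touches a bounded key range, so the engine never has to rewrite the whole database in a single pass, which is what produced the 50 GB spike.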
P.S.: The only spike we can see is the one at start, which is usual and happens whether pruning is enabled or not.
P.S.2: Note that memory usage stays almost the same as heimdall's.
And finally, here is what pruning and deletion look like on a heimdall mainnet node:
PR checklist
- Added a changelog entry in `.changelog` (we use unclog to manage our changelog)
- Updated relevant documentation (`docs/` or `spec/`) and code comments