
Optimize ZSTD compression level 6 parameters #4646

Open
BiplabRaut wants to merge 1 commit into facebook:dev from amd:zstd_l6_clevel
Conversation

@BiplabRaut

  • Reduce minMatch from 3 to 2 and searchLog from 4 to 2 for default parameters
  • Reduce searchLog from 4 to 3 for 16KB dictionary parameters
  • Update corresponding test expectations for determinism tests

These parameter adjustments trade a small amount of compression ratio for a sizable compression speed gain at level 6 while keeping the lazy search strategy.
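For anyone who wants to try the effect of such overrides without rebuilding the library, here is a minimal sketch assuming the `zstd` CLI is installed. The `searchLog` part of the tuning can be expressed through zstd's advanced parameter syntax (`--zstd=...`); the `minMatch` change in this PR lives in zstd's internal level table (`lib/compress/clevels.h`) and is not reproducible from the command line, so it is omitted here.

```shell
# Generate a small sample file to compress (any file works).
seq 1 500 > sample.txt

# Level 6 with a searchLog override, via zstd's advanced parameter syntax.
zstd -6 --zstd=searchLog=2 -f sample.txt -o sample.txt.zst

# Decompress and verify the round trip.
zstd -d -f sample.txt.zst -o sample.out
cmp -s sample.txt sample.out && echo "round-trip OK"
```

Parameter overrides like this only affect the current invocation; the point of the PR is to make similar settings the built-in defaults for level 6.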

Performance Results

Test Setup: AMD Genoa (Zen4), SMT OFF, GCC 14.2, single-threaded. ~2% run-to-run variation expected.

Level 6 (optimized)

CS = compression speed, DS = decompression speed.

| Dataset | Baseline CS (MB/s) | Optimized CS (MB/s) | CS Speedup | Baseline Ratio | Optimized Ratio | Ratio Change | DS Change |
|---|---|---|---|---|---|---|---|
| silesia.tar | 111.47 | 122.22 | +9.6% | 3.462 | 3.419 | -1.2% | -2.6% |
| calgary.tar | 94.31 | 104.77 | +11.1% | 3.309 | 3.270 | -1.2% | -3.5% |
| canterbury.tar | 111.16 | 124.73 | +12.2% | 4.708 | 4.650 | -1.2% | -2.7% |
| freeBSD-13.2 | 421.26 | 443.69 | +5.3% | 1.610 | 1.608 | -0.1% | -0.8% |
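Numbers like these are typically gathered with zstd's built-in benchmark mode, which compresses and decompresses entirely in memory and reports compression speed, decompression speed, and ratio. A sketch, assuming the `zstd` CLI is installed (the table's datasets such as `silesia.tar` would be substituted for the generated sample file):

```shell
# Create a stand-in dataset; replace with silesia.tar etc. for real runs.
seq 1 5000 > sample.dat

# -b6 benchmarks compression level 6; -i1 sets a minimum of 1 second
# per measurement to reduce run-to-run noise.
zstd -b6 -i1 sample.dat
```

Pinning the process to a single core and repeating runs helps keep the reported ~2% variation in check.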

Summary

  • ~10% compression speed improvement at level 6 across all datasets
  • ~1.2% ratio cost on text-heavy datasets, negligible on binary data
  • ~2.5–3.5% decompression speed regression on text datasets, minimal on binary data
  • Level 5 and all other levels are completely unaffected
  • Best suited for workloads where level 6 compression speed is a bottleneck and the slight ratio and decompression tradeoffs are acceptable

@meta-cla (bot) added the CLA Signed label on Apr 17, 2026
