Commit 719b719

Add blog post for CDC
1 parent 1aa827c commit 719b719

13 files changed

Lines changed: 638 additions & 5 deletions

website/blog/content-defined-chunking.md

Lines changed: 257 additions & 0 deletions

website/blog/why-was-my-bazel-build-so-slow.md

Lines changed: 2 additions & 0 deletions
@@ -42,6 +42,8 @@ Once you’ve added these flags, you’ll see a line like `Streaming build resul
   - This flag can reduce uploads by not uploading locally executed action outputs to the remote cache. This will reduce the cache hit rate for future runs, but can be desirable if upload speed is constrained (due to a poor network connection, for example).
 - `--remote_cache_compression`
   - This flag can improve remote cache throughput by compressing cache blobs.
+- <code className="flag">--experimental_remote_cache_chunking</code>
+  - This flag can improve remote cache throughput for large outputs by uploading and downloading them in content-defined chunks, when supported by the remote cache. This is available in Bazel 8.7 and 9.1+.
 - `--digest_function=BLAKE3`
   - This flag can improve the performance of digest calculation of large files by using a faster hashing algorithm. This is available for Bazel 6.4+.
 - `--jobs`
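Taken together, the cache-related flags in the list above could be combined in a `.bazelrc` along these lines. This is an illustrative sketch, not part of this commit; the remote cache endpoint is a placeholder, and whether each flag helps depends on your network and cache setup.

```text
# Illustrative .bazelrc combining the flags discussed above (not from this
# commit; the cache endpoint below is a placeholder).
common --remote_cache=grpcs://remote.buildbuddy.io
common --remote_cache_compression
common --experimental_remote_cache_chunking  # Bazel 8.7 and 9.1+
common --digest_function=BLAKE3              # Bazel 6.4+
```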

website/changelog/bazel-remote-cache-cdc.md

Lines changed: 4 additions & 5 deletions
Original file line numberDiff line numberDiff line change
@@ -7,19 +7,18 @@ tags: [bazel, featured]
77

88
We're excited to announce end-to-end content-defined chunking (CDC) support for Bazel remote caching in BuildBuddy.
99

10-
With Bazel 9.1's new `--experimental_remote_cache_chunking` support, large outputs like linker artifacts can be uploaded and downloaded in content-defined chunks instead of as monolithic blobs. That lets BuildBuddy deduplicate similar artifacts across builds, reducing upload bandwidth and storage usage. In one benchmark on the BuildBuddy repo, this showed roughly 40% less uploaded data and a roughly 40% smaller disk cache. Breaking large blobs into smaller reusable pieces also means fewer long-running RPCs and more granular retries.
10+
With Bazel 9.1's new <code className="flag">--experimental_remote_cache_chunking</code> support, large outputs like linker artifacts can be uploaded and downloaded in content-defined chunks instead of as monolithic blobs. That lets BuildBuddy deduplicate similar artifacts across builds, reducing upload bandwidth and storage usage. In one benchmark on the BuildBuddy repo, this showed roughly 40% less uploaded data and a roughly 40% smaller disk cache. Breaking large blobs into smaller reusable pieces also means fewer long-running RPCs and more granular retries.
1111

12-
To enable it, add these flags to your `.bazelrc`:
12+
To enable it, add this flag to your `.bazelrc`:
1313

1414
```text
1515
common --experimental_remote_cache_chunking
16-
common --remote_header=x-buildbuddy-cdc-enabled=true
1716
```
1817

1918
To see the download-side savings, you should also set `--disk_cache`, since the downloaded chunks need to be stored somewhere in order to be reused locally. We also recommend setting <code className="flag">--experimental_disk_cache_gc_max_age</code> to a value below your remote cache TTL—for example, `3h`, or `1d` if your remote TTL is longer.
2019

21-
Bazel 9.1 is required today, and this support will also be backported to Bazel 8.7.
20+
Bazel 9.1 and 8.7 support this flag.
2221

2322
For more background, see [bazelbuild/bazel#28437](https://github.com/bazelbuild/bazel/pull/28437).
2423

25-
We'll share a longer blog post soon with more details on the technical journey behind this work.
24+
For a deeper dive, see [No More Large Outputs: Introducing Remote Cache CDC](/blog/content-defined-chunking).
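The changelog above says that chunking lets similar artifacts deduplicate across builds, but not why. The core idea of content-defined chunking is that chunk boundaries are chosen from the bytes themselves rather than at fixed offsets, so an insertion near the start of a blob only disturbs nearby chunks. The following toy sketch (a Gear-style rolling hash, not Bazel's or BuildBuddy's actual implementation; all names and parameters here are illustrative) shows this resynchronization:

```python
# Toy sketch of content-defined chunking (CDC); NOT Bazel's actual algorithm.
# A Gear-style rolling hash marks a chunk boundary whenever the hash's low
# bits are all zero, so boundaries depend only on nearby bytes.
import hashlib
import random

random.seed(42)
GEAR = [random.getrandbits(64) for _ in range(256)]  # per-byte-value random table
MASK = (1 << 13) - 1  # boundary when low 13 bits are 0: ~8 KiB average chunks

def chunk(data: bytes) -> list[str]:
    """Split data at content-defined boundaries; return per-chunk digests."""
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) + GEAR[b]) & ((1 << 64) - 1)
        if (h & MASK) == 0:  # content-defined boundary
            chunks.append(hashlib.sha256(data[start:i + 1]).hexdigest())
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(hashlib.sha256(data[start:]).hexdigest())
    return chunks

a = bytes(random.randrange(256) for _ in range(64 * 1024))
b = b"inserted!" + a  # same content, shifted by a small insertion at the front
ca, cb = chunk(a), chunk(b)
shared = len(set(ca) & set(cb))
print(f"{shared}/{len(ca)} chunks of the original blob are reused")
```

With fixed-size chunks, the nine-byte insertion would shift every boundary and invalidate every chunk; here only the chunk containing the insertion changes, which is what makes chunk-level deduplication effective for relinked binaries and similar artifacts.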
179 KB

Lines changed: 39 additions & 0 deletions
0 commit comments
