
Commit 1945d9d

committed

fix click

1 parent d3ed0dc commit 1945d9d

File tree: 1 file changed, +15 −1 lines changed


packages/2025-06-11-kubecon-hk/slides.md

Lines changed: 15 additions & 1 deletion
@@ -1381,10 +1381,11 @@ glowSeed: 150
 
 <div mt-6 grid grid-cols-3 gap-4>
   <div
+    v-click
     border="2 solid indigo-800" bg="indigo-800/20"
     rounded-lg overflow-hidden
   >
-    <div v-click bg="indigo-800/40" px-4 py-2 flex items-center justify-center>
+    <div bg="indigo-800/40" px-4 py-2 flex items-center justify-center>
       <div i-carbon:archive text-indigo-300 text-xl mr-2 />
       <span font-bold>1: Fetching</span>
     </div>
@@ -1485,6 +1486,19 @@ glowSeed: 150
 </div>
 </div>
 
+<!--
+Now let's talk about our intelligent cache strategy. Building these environments can be heavy - I mean, compiling PyTorch with CUDA? That's not trivial!
+
+We use a three-layer caching approach.
+[click] First, we cache downloads - all those source packages, with SHA verification and mirror fallback for reliability.
+
+[click] Second, we cache builds - the compiled binaries and wheels. This is huge because compilation is where most time is spent. We deduplicate at the file level, so if two environments share libraries, we only store them once.
+
+[click] Third, we cache metadata - environment configs, dependency resolution results. This makes environment creation lightning fast.
+
+[click] Look at the time difference! Traditional CUDA setup takes 45-60 minutes. PyTorch another 20-30. With our caching? First setup is 10-15 minutes, and after that? Seconds! Just seconds to spin up a complete ML environment. That's the power of intelligent caching!
+-->
+
 ---
 class: py-4
 glowSeed: 275
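The speaker notes describe two mechanisms worth making concrete: SHA verification of downloads (layer 1) and file-level deduplication of build artifacts (layer 2, via content addressing). Below is a rough Python sketch of those two ideas only; every name in it (`CACHE_DIR`, `verify_download`, `store_dedup`) is hypothetical and not taken from the project the slides describe.

```python
import hashlib
import shutil
from pathlib import Path

# Hypothetical cache root; not a path used by the actual project.
CACHE_DIR = Path("/tmp/env-cache")

def verify_download(path: Path, expected_sha256: str) -> bool:
    """Layer 1 idea: check a fetched package against its known SHA-256
    before trusting it (a failed check would trigger a mirror retry)."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest == expected_sha256

def store_dedup(src: Path) -> Path:
    """Layer 2 idea: content-addressed storage. Two environments that
    share an identical library file hash to the same cache entry, so
    the bytes are stored exactly once."""
    digest = hashlib.sha256(src.read_bytes()).hexdigest()
    dest = CACHE_DIR / digest[:2] / digest  # shard by hash prefix
    if not dest.exists():  # already cached: nothing to copy
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dest)
    return dest
```

Content addressing is what makes the dedup claim in the notes work: the cache key is the file's hash rather than its path, so sharing across environments falls out for free.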

0 commit comments