Content-address distributions in the archive #16816
Conversation
Force-pushed d42bbf0 to 0bf8b5c
Still a few things I want to improve here.
Force-pushed 0bf8b5c to 3bf79e2
        Ok(temp_dir)
    }
})
.await??;
I'm a little worried about this change, because it's a regression to move away from our parallel synchronous zip reader (https://github.com/GoogleChrome/ripunzip) to streaming. On the other hand, it means we'll no longer have two zip implementations.
Unfortunately I probably need to benchmark this.
A huge performance degradation for large files (300ms to 1.3s):
unzip_sync_small time: [5.4237 ms 5.5621 ms 5.7425 ms]
change: [-13.499% -7.9934% -2.6150%] (p = 0.00 < 0.05)
Performance has improved.
Found 6 outliers among 100 measurements (6.00%)
1 (1.00%) high mild
5 (5.00%) high severe
unzip_sync_medium time: [12.587 ms 13.109 ms 13.788 ms]
change: [+3.3102% +8.3958% +15.174%] (p = 0.00 < 0.05)
Performance has regressed.
Found 8 outliers among 100 measurements (8.00%)
2 (2.00%) high mild
6 (6.00%) high severe
Benchmarking unzip_sync_large: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 34.9s, or reduce sample count to 10.
unzip_sync_large time: [328.01 ms 331.20 ms 334.39 ms]
change: [-2.9970% +0.4267% +3.5014%] (p = 0.80 > 0.05)
No change in performance detected.
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high mild
unzip_stream_small time: [5.5436 ms 5.6239 ms 5.7148 ms]
change: [-9.3797% -6.8046% -4.0946%] (p = 0.00 < 0.05)
Performance has improved.
Found 2 outliers among 100 measurements (2.00%)
1 (1.00%) high mild
1 (1.00%) high severe
unzip_stream_medium time: [16.696 ms 17.381 ms 18.161 ms]
change: [+16.309% +21.820% +27.116%] (p = 0.00 < 0.05)
Performance has regressed.
Found 14 outliers among 100 measurements (14.00%)
4 (4.00%) high mild
10 (10.00%) high severe
Benchmarking unzip_stream_large: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 118.7s, or reduce sample count to 10.
unzip_stream_large time: [1.2877 s 1.3015 s 1.3146 s]
change: [+12.395% +14.194% +15.823%] (p = 0.00 < 0.05)
Performance has regressed.
What we could do instead is: keep our parallel unzip, then use blake3's parallelized mmap hash for files that we have on-disk (at least for wheels we build ourselves, since we never validate hashes for those).
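For illustration, a minimal sketch of that "parallelized mmap hash" for an on-disk file, assuming the `blake3` crate with its `mmap` and `rayon` features enabled (the function name and error handling are placeholders, not uv's code):

```rust
use std::path::Path;

/// Hash an on-disk file with blake3's memory-mapped, multi-threaded hasher
/// (requires the `mmap` and `rayon` cargo features of the `blake3` crate).
fn blake3_mmap_hash(path: &Path) -> std::io::Result<blake3::Hash> {
    let mut hasher = blake3::Hasher::new();
    hasher.update_mmap_rayon(path)?;
    Ok(hasher.finalize())
}
```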
I guess this wouldn't work for path-based wheels that are provided in uv add though. (Although in that case, we already do hash them for uv add even if not in uv pip install, so the regression only affects uv pip.)
We could consider always computing the blake3 instead of always computing the sha256 (so, compute both the blake3 and the sha256 if the sha256 is needed).
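As a sketch of that idea, computing both digests in a single pass over a stream might look roughly like this, assuming the `sha2` and `blake3` crates (illustrative only, not uv's hashing code):

```rust
use std::io::Read;

use sha2::{Digest, Sha256};

/// Compute a BLAKE3 digest unconditionally, and a SHA-256 digest alongside
/// it, while reading the input only once.
fn hash_blake3_and_sha256(mut reader: impl Read) -> std::io::Result<(blake3::Hash, [u8; 32])> {
    let mut b3 = blake3::Hasher::new();
    let mut sha = Sha256::new();
    let mut buf = [0u8; 64 * 1024];
    loop {
        let n = reader.read(&mut buf)?;
        if n == 0 {
            break;
        }
        b3.update(&buf[..n]);
        sha.update(&buf[..n]);
    }
    Ok((b3.finalize(), sha.finalize().into()))
}
```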
Ah, that's pretty unfortunate. I'm curious about content-addressing by blake3 and just computing the sha256 where needed; that idea sounds compelling.
We'd still have to compute the SHA256 for any local wheels used in uv add or similar (unless we changed the lockfile to also use Blake3, which could be a good idea?). The only benefit would be for wheels we build ourselves (since we'd no longer need to hash those).
Force-pushed cfa9f83 to 0b8c764
Force-pushed 48ca3d5 to 084d601
Force-pushed 084d601 to d111e5c
    raise BackendUnavailable(data.get('traceback', ''))
pip._vendor.pyproject_hooks._impl.BackendUnavailable: Traceback (most recent call last):
  File "/Users/example/.cache/uv/archive-v0/3783IbOdglemN3ieOULx2/lib/python3.13/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 77, in _build_backend
  File "/Users/example/.cache/uv/archive-v0/97de8790030bbd5c2d96b7ec782fc2f7820ef8dba6db909ccf95449f2d062d4b/lib/python3.13/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 77, in _build_backend
Another risk here is that this is significantly longer, which hurts path length.
We could `base64.urlsafe_b64encode` it, which would be ~43 characters (less than the 64 here, but more than the 21 we used before).
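For reference, a tiny sketch of that encoding in Rust using the `base64` crate (the unpadded URL-safe alphabet turns a 32-byte digest into 43 characters; names here are illustrative):

```rust
use base64::{engine::general_purpose::URL_SAFE_NO_PAD, Engine as _};

/// Shorten a 32-byte digest from 64 hex characters to 43 base64 characters.
fn short_digest_name(digest: &[u8; 32]) -> String {
    let name = URL_SAFE_NO_PAD.encode(digest);
    debug_assert_eq!(name.len(), 43);
    name
}
```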
A few ideas...

- base64 encoding seems reasonable
- we might want to store it as `{:2}/{2:}`? git and npm do this to shard directories. I guess we don't have that problem today, but if we're changing it maybe we should consider it? It looks like you did in 3bf79e2?
- We could do a truncated hash with a package id for collisions? `{:8}/{package-id}` (I guess the `package-id` could come first?). We could persist the full hash to a file for a safety check too.
Yes. I did it as `{:2}/{2:4}/{4:}` in an earlier commit, then rolled it back because it makes various things more complicated (e.g., for cache prune we have to figure out if we can prune the directories recursively). I can re-add it if it seems compelling.
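A sketch of that `{:2}/{2:4}/{4:}` layout for a lowercase hex digest (illustrative only, not the scheme this PR ended up using):

```rust
use std::path::{Path, PathBuf};

/// Shard a hex digest as `{:2}/{2:4}/{4:}`, e.g. `97de8790...` -> `97/de/8790...`.
fn sharded_entry(root: &Path, hex_digest: &str) -> PathBuf {
    debug_assert!(hex_digest.len() > 4 && hex_digest.is_ascii());
    root.join(&hex_digest[..2])
        .join(&hex_digest[2..4])
        .join(&hex_digest[4..])
}
```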
> We could do a truncated hash with a package id for collisions?
I'd prefer not to couple the content-addressed storage to a concept like "package names" if possible. It's meant to be more general (e.g., we also use it for cached environments).
(`{:2}/{2:4}/{4:}` is what PyPI uses; it looks like pip does `{:2}/{2:4}/{4:6}/{6:}`?)
> then rolled it back because it makes various things more complicated
Fair enough. I think people do it to avoid directory size limits (i.e., the number of items allowed in a single directory). I think we'd have had this problem already though if it was a concern for us? It seems fairly trivial to check both locations in the future if we determine we need it.
> I'd prefer not to couple the content-addressed storage to a concept like "package names" if possible.
I think the idea that there's a "disambiguating" component for collisions if we truncate the hash doesn't need to be tied to "package names" specifically. The most generic way to do it would be to have /0, /1, ... directories with /{id}/HASH files and iterate over them? I sort of don't like that though :)
It's broadly unclear to me how much engineering we should do to avoid a long path length.
It may not really matter. I can't remember the specifics but what ends up happening here is: we create a temp dir, unzip it, then we move the temp dir into this location and hardlink from this location. So I don't think we end up referencing paths within these archives?
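A minimal sketch of the move-then-hardlink part of that flow (the unzip into the temp directory is elided, and every path name here is hypothetical):

```rust
use std::fs;
use std::io;
use std::path::Path;

/// Promote a fully unpacked temp directory into the archive, then hardlink a
/// file from the archive into a target environment, so the long
/// content-addressed path is not what the environment references directly.
fn promote_and_link(
    unpacked_temp: &Path,
    archive_entry: &Path,
    link_src_rel: &Path,
    link_dst: &Path,
) -> io::Result<()> {
    // Move the whole unpacked tree into its content-addressed location.
    fs::rename(unpacked_temp, archive_entry)?;
    // Hardlink individual files from the archive into the environment.
    fs::hard_link(archive_entry.join(link_src_rel), link_dst)?;
    Ok(())
}
```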
Summary
The idea here is to always compute at least a SHA256 hash for all wheels, then store unzipped wheels in a content-addressed location in the archive directory. This will help with disk space (since we'll avoid storing multiple copies of the same wheel contents) and cache reuse, since we can now reuse unzipped distributions from `uv pip install` in `uv sync` commands (which always require hashes already).

Closes #1061.
Closes #13995.
Closes #16786.
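As a rough sketch of the content-addressing described in the summary, assuming the `sha2` and `hex` crates (the `archive-v0/<sha256-hex>` layout mirrors the path visible in the traceback above, but the code is illustrative rather than uv's implementation):

```rust
use std::path::{Path, PathBuf};

use sha2::{Digest, Sha256};

/// Derive the archive entry for a wheel from the SHA-256 of its bytes.
fn archive_entry(archive_root: &Path, wheel_bytes: &[u8]) -> PathBuf {
    let digest = Sha256::digest(wheel_bytes);
    // 64 lowercase hex characters, so identical wheel contents map to the
    // same unpacked directory regardless of how they were installed.
    archive_root.join(hex::encode(digest))
}
```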