Efficiently cache crate dependencies.
I'd like to discuss the future of `cargo-wharf` and, to that end, share some ideas I'd like to collaborate on.
## Cache integration and the case for a community-backed global cache
Recent versions of `docker build` support `--output=PATH`, which copies files out of an image. This allows writing the compilation results of each dependency to the filesystem of the local machine or of a CI cache.

`cargo` has a way of specifying where to look for build artifacts other than the sometimes-empty `./target/` dir: `CARGO_TARGET_DIR`.
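Together, these two features suggest a round trip for build artifacts. A minimal command-line sketch (the `./dep-cache` path is illustrative, not part of cargo-wharf):

```shell
# Build the image and export its filesystem to the host
# (requires BuildKit; writes the files under ./dep-cache).
DOCKER_BUILDKIT=1 docker build --output=./dep-cache .

# Point cargo at the exported artifacts instead of ./target/.
CARGO_TARGET_DIR=./dep-cache cargo build
```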
### More on `CARGO_TARGET_DIR`
Per https://stackoverflow.com/a/37472558/1418165, it turns out that a shared `CARGO_TARGET_DIR` (or `CARGO_BUILD_TARGET_DIR`):

- is thread-safe, since "Concurrent usage of cargo will generally result in badness" (rust-lang/cargo#354) is closed
- names that folder `"target"`, per rust-lang/cargo#1657 (comment)

However, `cargo` provides no hermeticity guarantees with respect to:

- feature flags
- compiler version
- platform triple

These would all have to be part of the hashed name of each dependency being built (the dependency path or the Docker tag).
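For illustration, such a hashed recipe name could be derived like this (the field names, separator, and hash truncation are assumptions, not cargo-wharf's actual scheme):

```shell
# Everything that affects the produced artifact goes into the recipe string.
crate="serde 1.0.188"                  # hypothetical dependency
features="derive"                      # enabled feature flags
compiler="rustc 1.72.0"                # compiler version
triple="x86_64-unknown-linux-gnu"      # platform triple

recipe="${crate}|features=${features}|${compiler}|${triple}"
# Hash the recipe into a name usable as a Dockerfile stage or image tag.
tag="dep-$(printf '%s' "$recipe" | sha256sum | cut -c1-16)"
echo "$tag"
```

Two builds may share a cache entry exactly when every one of these inputs matches.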
## To solve hermeticity issues, see `cross`

`cross` already does a good job of building Rust projects (for various platform triples) using Docker (`docker run`) and QEMU. This work should be adapted, in the most maintainable way possible, to use BuildKit: its QEMU integration, its rootless capabilities, and its ability to run the compute graph with maximum parallelism.
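BuildKit's QEMU integration is already reachable from the CLI today; a hypothetical cross-platform invocation (image name and target platform are illustrative) looks like:

```shell
# Register binfmt/QEMU handlers for foreign architectures, then
# build the same Dockerfile for a non-native platform.
docker run --privileged --rm tonistiigi/binfmt --install arm64
docker buildx build --platform linux/arm64 -t myapp:arm64 .
```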
## Conclusion

So if `cargo-wharf` were to create hermetic BuildKit targets for each dependency, leveraging the work on `cross`, I think there'd be a seamless way to integrate both local and global caches for dependencies. This global cache (basically a Docker registry) could then be paid for by the community and benefit the community.
To get there, I see these development steps:

- Get the list of dependencies from `cargo-wharf`, hashed and hermetic.
- "Generate" a Dockerfile with these, based on `cross`'s:
  - each dependency is a stage in this file; stage name = hashed recipe
  - when linking, dependencies are bind-mounted (`--mount=from=HASHEDDEP,source=...,target=...`) as read-only
- `docker build` this Dockerfile as the `cargo build` equivalent. Same for `cargo test`.
- In a local cache setting, each hashed dependency's build results would live in a centralized folder, ready for reuse by another project, thus lowering initial build times.
- If using the global cache, each hashed dependency's build results would live as a single-layer Docker image, holding files, in the local Docker registry as well as the global networked one.
  - New builds should be received by the global registry and checked for hermeticity before being added to its cache.
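The generated Dockerfile from the steps above might look roughly like this (stage names, paths, and the base image are illustrative; under BuildKit, `--mount` defaults to a read-only bind mount):

```dockerfile
# syntax=docker/dockerfile:1
FROM rust:1.72 AS base

# One stage per dependency; the stage name is the hashed recipe.
FROM base AS dep-3f2a9c1e8b7d6f40
# ... build exactly one crate here, writing artifacts under /out ...

FROM base AS app
COPY . .
# Link against the prebuilt dependency, bind-mounted read-only
# from its hashed stage.
RUN --mount=from=dep-3f2a9c1e8b7d6f40,source=/out,target=/deps \
    cargo build --release
```

Because each stage depends only on its own recipe, BuildKit can build independent stages in parallel and reuse cached ones untouched.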
Note that this global Docker registry:

- can easily be switched to a private instance
- could be used to directly build dependencies and/or profit from cache locality, by setting it as the Docker host roughly like this: `DOCKER_HOST=ssh://lotsa.oompf.machine.com cargo build`; only the final build results would then be transferred over the network.
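A hypothetical remote-cache session could then be as simple as:

```shell
# All build steps and the layer cache stay on the remote machine;
# only the exported artifacts cross the network.
export DOCKER_HOST=ssh://lotsa.oompf.machine.com
docker build --output=./target .
```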
Ideas, thoughts, notes, criticism? Please shoot.