
Conversation

Contributor

@abussy abussy commented Jan 13, 2026

Upgraded the build_tarball.jl script to accommodate CUDA v13.

From v13 on, NVIDIA no longer ships NVVM as part of the NVCC redist, so for a successful build on aarch64, NVVM must now be downloaded separately. The platforms/cuda.jl file was modified to make these changes available to other packages.
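To make the change concrete, a sketch of what fetching NVVM as its own redist source might look like in a BinaryBuilder recipe; the component name `libnvvm`, the version, and the checksums below are illustrative placeholders, not the actual values from this PR:

```julia
# Sketch of a BinaryBuilder recipe fragment; everything below the redist
# root URL is a placeholder assumption, not this PR's actual values.
using BinaryBuilder

sources = [
    # nvcc redist: from CUDA v13 on, this archive no longer bundles NVVM
    ArchiveSource("https://developer.download.nvidia.com/compute/cuda/redist/" *
                  "cuda_nvcc/linux-sbsa/cuda_nvcc-linux-sbsa-13.0.0-archive.tar.xz",
                  "0000000000000000000000000000000000000000000000000000000000000000"),
    # NVVM now has to be fetched as a separate redist component
    ArchiveSource("https://developer.download.nvidia.com/compute/cuda/redist/" *
                  "libnvvm/linux-sbsa/libnvvm-linux-sbsa-13.0.0-archive.tar.xz",
                  "0000000000000000000000000000000000000000000000000000000000000000"),
]
```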

@fingolfin fingolfin requested a review from ejmeitz January 16, 2026 20:35
Collaborator

ejmeitz commented Jan 17, 2026

I mean, it looks fine to me. It follows what I've done in the past to get things working on ARM. I do know @maleadt said using the redistributed sources was not the "right" way; that said, I know what was done here works.

Member

maleadt commented Jan 19, 2026

All this code is bad and should go: https://github.com/abussy/Yggdrasil/blob/7426d9476148211fa13046f2670596fe00b9daa9/platforms/cuda.jl#L241-L326

Can you explain what the missing pieces are? For a downstream recipe like this, you should rely on any of our JLLs to provide the necessary build-time or run-time artifacts. Worst case, you can use the get_sources functionality to fetch things that aren't packaged by JLLs. But old-style manually maintained links to redist sources are not OK.
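The preference order described here (a JLL dependency first, `get_sources` as a fallback, hand-maintained redist links never) could be sketched as follows; whether CUDA_SDK_jll actually provides the pieces this recipe needs is an assumption, and the thread later shows this route fails for musl hosts:

```julia
# Sketch only: the preferred JLL-based route. Whether this JLL covers the
# needed build-time artifacts here is an assumption, not confirmed by the PR.
using BinaryBuilder

dependencies = [
    # Build-time toolchain provided by a JLL, resolved for the host platform
    HostBuildDependency("CUDA_SDK_jll"),
]
```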

Contributor Author

abussy commented Jan 19, 2026

Originally, the code you highlighted (https://github.com/abussy/Yggdrasil/blob/7426d9476148211fa13046f2670596fe00b9daa9/platforms/cuda.jl#L241-L326) comes from the need to build CUDA code for the aarch64 architecture from BinaryBuilder (which runs on x86_64). This was introduced in #9910 and generalized from the Reactant recipe by @giordano. At that point, this was the accepted way to cross-compile CUDA code, and it has been adopted by other packages: Faiss, legate, MAGMA, and a few others.

I am happy to try and get a working nvcc compiler through other means than an explicit redist download, but if I don't manage, I think this solution needs to stay.

Member

maleadt commented Jan 19, 2026

> I am happy to try and get a working nvcc compiler through other means than an explicit redist download, but if I don't manage, I think this solution needs to stay.

In any case, you should use get_sources to automatically fetch stuff from the redist servers instead of hardcoding. A JLL-based solution is even better, of course, but may be tricky because of the platform mismatches.

Contributor Author

abussy commented Jan 19, 2026

I tried building with JLLs but did not manage: the host architecture is x86_64-linux-musl, and CUDA_SDK_jll is not distributed for musl, so using a HostBuildDependency is not possible.

Switched to get_sources from C/CUDA/common.jl to download the required redist. Other packages still use cuda_nvcc_redist_source, but I think that refactoring should be its own PR.
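For reference, a rough sketch of the get_sources-based version; the include path and the exact signature of get_sources below are assumptions based on this thread, not checked against the current C/CUDA/common.jl:

```julia
# Sketch only: the path and signature are assumptions, not verified
# against the current Yggdrasil sources.
include(joinpath(@__DIR__, "..", "common.jl"))  # C/CUDA/common.jl (assumed path)

# get_sources resolves URLs and checksums from NVIDIA's redist manifest,
# instead of hand-maintained links to specific archives.
sources = get_sources("cuda", ["cuda_nvcc", "libnvvm"];
                      platform = Platform("aarch64", "linux"))
```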

