* Updated Torch recipe
* Changed to provide lazy artifacts.
* Replaced CUDA_full sources with CUDA module dependencies (and similarly for CUDA 11.3).
* Updated the CUDA part of the script accordingly.
* Added an `xcrun` executable for macOS that can handle `--show-sdk-path`.
* Switched to LLVM v13 (on macOS, v17 has issues).
* Added a `cuda="none"` tag to non-CUDA platforms that have matching CUDA platforms.
* Added a comment to `should_build_platform` in fancy_toys.jl: it returns true, e.g. for "x86_64-linux-gnu-cxx11" when ARGS = ["x86_64-linux-gnu-cxx11-cuda+10.2"].
* Added an NVTX RuntimeDependency.
* Used `CUDA.is_supported` to tag CUDA-capable non-CUDA platforms with `cuda="none"`.
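The macOS `xcrun` executable mentioned above can be sketched as a small shell shim (a hypothetical minimal version for illustration only — the recipe's actual wrapper may differ, and the default SDK path below is an assumption):

```shell
#!/bin/sh
# Hypothetical minimal xcrun shim (sketch; the recipe's real wrapper may differ).
# The macOS toolchain expects an `xcrun` that can answer --show-sdk-path;
# the fallback SDK path here is an assumption for illustration.
xcrun_shim() {
  case "$1" in
    --show-sdk-path)
      # Report the SDK root the cross-compilation toolchain should use
      echo "${SDKROOT:-/opt/MacOSX.sdk}"
      ;;
    *)
      # Any other invocation is passed through unchanged
      "$@"
      ;;
  esac
}

SDKROOT=/opt/MacOSX.sdk
xcrun_shim --show-sdk-path   # prints "/opt/MacOSX.sdk"
```

Responding to `--show-sdk-path` is the minimum needed for build systems that shell out to `xcrun` to locate the SDK; everything else is delegated.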
```diff
 if [[ $bb_full_target == *cuda* ]] && [[ $cuda_version != none ]]; then
+    configure || configure
 else
     configure
-    configure || configure
 fi
 cmake --build . -- -j $nproc
 make install
 install_license ../LICENSE
 """
```
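The hunk above moves the `configure || configure` retry onto the CUDA path. In shell, `a || b` runs `b` only when `a` fails, so a flaky step gets exactly one retry. A self-contained sketch, where `flaky` is a hypothetical stand-in for the recipe's `configure` step:

```shell
#!/bin/sh
# Sketch of the `cmd || cmd` retry idiom used for `configure` above.
# `flaky` is a hypothetical stand-in that fails on its first call only.
ATTEMPTS=0
flaky() {
  ATTEMPTS=$((ATTEMPTS + 1))
  # return nonzero on the first attempt, zero afterwards
  [ "$ATTEMPTS" -ge 2 ]
}

flaky || flaky               # first call fails, second call succeeds
echo "attempts: $ATTEMPTS"   # prints "attempts: 2"
```

The surrounding `[[ $bb_full_target == *cuda* ]]` test uses bash pattern matching, so the retry only applies when the target triplet carries a CUDA tag.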
```diff
-# These are the platforms we will build for by default, unless further
-# platforms are passed in on the command line
 platforms = supported_platforms()
 filter!(p -> !(Sys.islinux(p) && libc(p) == "musl"), platforms) # musl fails due to a conflicting declaration of C function 'void __assert_fail(const char*, const char*, int, const char*)' between /opt/x86_64-linux-musl/x86_64-linux-musl/include/c++/8.1.0/cassert:44 and /opt/x86_64-linux-musl/x86_64-linux-musl/sys-root/usr/include/assert.h
 filter!(!Sys.iswindows, platforms) # ONNX does not support cross-compiling for w64-mingw32 on linux
@@ -202,6 +197,19 @@ filter!(p -> arch(p) != "armv7l", platforms) # armv7l is not supported by XNNPACK
 filter!(p -> arch(p) != "powerpc64le", platforms) # PowerPC64LE is not supported by XNNPACK
```
```diff
     # Dependency("MKL_jll"; platforms = mkl_platforms), # MKL is avoided for all platforms
     # BuildDependency("MKL_Headers_jll"; platforms = mkl_platforms), # MKL is avoided for all platforms
+    # libtorch, libtorch_cuda, and libtorch_global_deps all link with `libnvToolsExt`.
+    # maleadt: `libnvToolsExt` is not shipped by CUDA anymore, so the best solution is definitely static linking. CUDA 10.2 shipped it; later it became a header-only library, which we do compile into a dynamic one for use with NVTX.jl, but there are no guarantees that the library we build has the same symbols as the "old" `libnvToolsExt` shipped by CUDA 10.2.
+    RuntimeDependency("NVTX_jll"), # TODO: Replace RuntimeDependency with static linking.
+    Dependency("CUDA_Runtime_jll", v"0.7.0"), # Using v"0.7.0" to get support for cuda = "11.3"; using Dependency rather than RuntimeDependency to be sure to pass audit.
```