Add ggml-backend-meta.cpp to GPU runtime build scripts
Upstream's ggml-backend.cpp now references ggml_backend_buffer_is_meta
(lines 133 and 2006), and ggml-alloc.c references ggml_backend_buft_is_meta
(line 1240). Both functions are defined in the new ggml-backend-meta.cpp,
which upstream made part of ggml-base.
Without this change, the runtime-built GPU DSOs (ggml-cuda.so/.dll,
ggml-rocm.dll, ggml-vulkan.dll) and the on-the-fly Metal dylib build would
be left with undefined references to those symbols.
Updated:
- llamafile/build-functions.sh (Linux CUDA + ROCm via cuda.sh / rocm.sh)
- llamafile/cuda.bat, llamafile/cuda_parallel.bat
- llamafile/rocm.bat, llamafile/rocm_parallel.bat
- llamafile/vulkan.bat
- llamafile/metal.c (yoink + extracted-files map + compile list)
- llamafile/BUILD.mk (add ggml-backend-meta.cpp.zip.o to LLAMAFILE_METAL_SOURCES)
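For the shell-based builds, the change amounts to compiling one more
ggml-base source into each DSO. The sketch below is hypothetical: the
compiler invocation, flags, and file list are assumptions and do not
reproduce the actual contents of llamafile/cuda.sh; only the addition of
ggml-backend-meta.cpp next to ggml-backend.cpp is what this commit does.

```shell
# Hypothetical shape of the cuda.sh compile list (flags/files assumed):
nvcc --shared -o ggml-cuda.so \
  ggml-cuda.cu ggml.c ggml-alloc.c ggml-backend.cpp \
  ggml-backend-meta.cpp   # new: provides the *_is_meta definitions
```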