This repository provides a standalone bundle of ROCm 7.2.1 runtime libraries, enabling you to run llama.cpp binaries with AMD GPU (HIP) support without requiring a full ROCm installation on your system.
The llama.cpp project distributes pre-built binaries for ROCm 7.2.1, but these binaries require ROCm runtime libraries to function. Installing the full ROCm stack (~5GB) can be cumbersome and may conflict with existing system configurations.
This project solves that problem by:
- Extracting only the essential ROCm 7.2.1 runtime libraries from the official ROCm container
- Packaging them as a portable, self-contained bundle (~500MB-1GB)
- Providing simple environment setup scripts for immediate use
Download the latest release from the Releases page:
```shell
# Download ROCm runtime bundle
wget https://github.com/lemonade-sdk/rocm-stable/releases/latest/download/rocm-7.2.1-runtime-libs.tar.gz

# Download llama.cpp ROCm binaries (example)
wget https://github.com/ggml-org/llama.cpp/releases/download/b8192/llama-b8192-bin-ubuntu-rocm-7.2.1-x64.tar.gz

# Extract both
tar -xzf llama-b8192-bin-ubuntu-rocm-7.2.1-x64.tar.gz
tar -xzf rocm-7.2.1-runtime-libs.tar.gz

# Run with ROCm support
cd llama-b8192-bin-ubuntu-rocm-7.2.1-x64
source ../rocm-7.2.1-runtime/setup-env.sh
./llama-cli --version
```

Check that the ROCm libraries are properly loaded:
```shell
# Check binary dependencies
ldd ./llama-cli | grep -i rocm

# Test GPU detection
./llama-cli --version
```

The runtime bundle contains:
- `libamdhip64.so*` - HIP runtime for AMD GPUs
- `libhsa-runtime64.so*` - Heterogeneous System Architecture (HSA) runtime
- `libhipblas.so*`, `libhipblaslt.so*` - HIP BLAS libraries
- `librocblas.so*` - ROCm BLAS implementation
- `librocsparse.so*` - ROCm sparse linear algebra
- `libamd_comgr.so*` - AMD Code Object Manager
- `libhsakmt.so*` - HSA Kernel Mode Thunk
- `libdrm.so*`, `libdrm_amdgpu.so*` - Direct Rendering Manager
- `rocblas/` - Pre-compiled GPU kernels for BLAS operations
- `hipblaslt/` - Pre-compiled GPU kernels for BLAS LT operations
- `setup-env.sh` - Automatic environment configuration
- `README.md` - Detailed usage instructions
- GPU: AMD GPU with ROCm support (see compatibility list)
- OS: Linux
- Kernel: Recent Linux kernel (5.15+) with the AMDGPU driver loaded
- No ROCm installation required on the host system
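The kernel requirement above can be checked with a short snippet (a rough sketch; distro backports can blur exact version semantics, so treat this as a guideline rather than a hard gate):

```shell
# Rough check that the running kernel meets the 5.15+ guideline.
kver=$(uname -r)
major=${kver%%.*}       # e.g. "6" from "6.8.0-45-generic"
rest=${kver#*.}
minor=${rest%%.*}       # e.g. "8"
if [ "$major" -gt 5 ] || { [ "$major" -eq 5 ] && [ "$minor" -ge 15 ]; }; then
  echo "kernel $kver: OK"
else
  echo "kernel $kver: older than 5.15"
fi
```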
Check if your GPU is detected:
```shell
# Check if the AMDGPU driver is loaded
lsmod | grep amdgpu

# List GPU devices
ls -la /dev/dri/

# Get GPU information (if rocminfo is installed)
rocminfo | grep gfx
```

The bundle includes a `setup-env.sh` script that configures your environment:
```shell
source rocm-7.2.1-runtime/setup-env.sh
```

This script:

- Sets `LD_LIBRARY_PATH` to include the bundled libraries
- Sets `ROCM_PATH` to point to the bundle directory
- Displays configuration information
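The shipped script is authoritative, but a minimal sketch of the logic just described could look like this (variable names and echoed output are illustrative assumptions, not the actual script contents):

```shell
# Hypothetical sketch of setup-env.sh's logic (not the shipped script).
# Resolve the bundle directory from this file's own location, so sourcing
# works regardless of the caller's working directory.
ROCM_BUNDLE_DIR="$(cd "$(dirname "${BASH_SOURCE[0]:-$0}")" && pwd)"

# Prepend the bundled libraries so they take priority over any system ROCm.
export LD_LIBRARY_PATH="${ROCM_BUNDLE_DIR}${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}"
export ROCM_PATH="${ROCM_BUNDLE_DIR}"

# Display the resulting configuration.
echo "ROCM_PATH=${ROCM_PATH}"
echo "LD_LIBRARY_PATH=${LD_LIBRARY_PATH}"
```

Prepending (rather than appending) to `LD_LIBRARY_PATH` matters: it ensures the bundled libraries win over any partially installed system ROCm.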
If you prefer manual setup:
```shell
export LD_LIBRARY_PATH=/path/to/rocm-7.2.1-runtime:${LD_LIBRARY_PATH}
export ROCM_PATH=/path/to/rocm-7.2.1-runtime
```

This bundle is compatible with llama.cpp binaries built for ROCm 7.2 and 7.2.1. Check the llama.cpp release notes to ensure you're downloading the correct binaries.
Error:

```
error while loading shared libraries: libamdhip64.so.6: cannot open shared object file
```
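A quick way to narrow this down is to check whether every directory on `LD_LIBRARY_PATH` actually exists, since a typo in the bundle path is a common cause (the helper below is an illustrative sketch, not part of the bundle):

```shell
# List LD_LIBRARY_PATH entries that do not exist on disk (illustrative helper).
check_ld_path() {
  local IFS=':'
  for d in $LD_LIBRARY_PATH; do
    [ -n "$d" ] && [ ! -d "$d" ] && echo "missing: $d"
  done
  return 0
}
check_ld_path
```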
Solution: Ensure you've sourced the setup script:

```shell
source rocm-7.2.1-runtime/setup-env.sh
```

Or manually set `LD_LIBRARY_PATH`:

```shell
export LD_LIBRARY_PATH=/path/to/rocm-7.2.1-runtime:${LD_LIBRARY_PATH}
```

This repository's build scripts and documentation are released under the MIT License.
The ROCm libraries themselves are licensed under MIT and Apache 2.0 licenses by AMD. See the ROCm repository for details.
This is an unofficial redistribution of ROCm runtime libraries for convenience. For official ROCm releases and support, see AMD's ROCm project. Thanks to:
- AMD for developing and maintaining ROCm
- llama.cpp team for providing excellent AMD GPU support
- The open-source community for testing and feedback
- Issues with this bundle: Open an issue
- llama.cpp questions: Visit llama.cpp repository
- ROCm support: Visit ROCm documentation