hipBlas and ROCm #1214
Replies: 4 comments 8 replies
-
It would be great to have AMD support on Ubuntu 22.04 as well, for us less fortunate. I was trying unsuccessfully with:
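(The exact command was cut off above. For reference, a hedged sketch of the hipBLAS build invocation that llama-cpp-python documented at the time — not necessarily the poster's actual command; the ROCm path and GPU target are examples:)

```shell
# Sketch: build llama-cpp-python against hipBLAS on Ubuntu 22.04.
# Assumes ROCm is installed under /opt/rocm; adjust paths for your setup.
export ROCM_PATH=/opt/rocm
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" \
  pip install --force-reinstall --no-cache-dir llama-cpp-python
```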
-
OK, I've decided to waste another day and managed to run PrivateGPT on Windows 10 using HIPBLAS=1. I used the MINGW64 command-line interface that ships with the Git installation ("Git Bash").
The sad thing is that it runs really slowly in the web interface. Here is the log after Ctrl+C when I load the same model using https://github.com/YellowRoseCx/koboldcpp-rocm/releases/download/1.48.1.yr2-ROCm/koboldcpp_rocm_only.exe
-
It is great.
-
Ubuntu successful, take 2... work in progress, suggestions welcome :)

**Install**

Environment variables in `.bashrc`:

```shell
export CMAKE_PREFIX_PATH=/opt/rocm
export ROCM_PATH=/opt/rocm-5.7.1
export CC=/usr/bin/amdclang
```

**Build**

```shell
CMAKE_ARGS='-G Ninja -DLLAMA_HIPBLAS=on -DCMAKE_C_COMPILER=amdclang -DCMAKE_CXX_COMPILER=amdclang++ -DAMDGPU_TARGETS=gfx1030' \
  poetry run pip install --force-reinstall --no-cache-dir llama-cpp-python
```

Note: `-DAMDGPU_TARGETS=gfx1030` targets RDNA2 (e.g. the Radeon 6000 series).

**Result**

```
AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |
```
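That result line packs all the build flags into one string; the key entry is `BLAS = 1`, which confirms the hipBLAS backend was actually compiled in. A small helper (a sketch, function name is mine) can parse it into a dict for a quick check:

```python
def parse_system_info(line: str) -> dict:
    """Parse a llama.cpp system-info flags line like 'AVX = 1 | BLAS = 1 |'
    into a {flag_name: 0-or-1} dictionary."""
    flags = {}
    for part in line.split("|"):
        part = part.strip()
        if not part:
            continue  # skip the empty piece after a trailing '|'
        name, _, value = part.partition("=")
        flags[name.strip()] = int(value.strip())
    return flags

info = "AVX = 1 | AVX2 = 1 | BLAS = 1 | VSX = 0 |"
flags = parse_system_info(info)
print(flags["BLAS"])  # 1 means GPU-capable BLAS (here: hipBLAS) is built in
```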
-
Has anybody managed to launch PrivateGPT on Windows with AMD ROCm? I wasted a day trying.