This guide explains how to set up Ollama to use an AMD discrete GPU (such as the RX 6700S, `gfx1032`, in the 2022 G14).
- Open `%localappdata%\Ollama\server.log`.
- Look for the `gpu_type` field; you want a `gfx10xx` value (example: `gfx1032` for the 2022 G14).
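If you'd rather not scan the log by hand, a small Python sketch can pull out any `gfx` identifiers it mentions. The helper name `find_gfx_versions` is mine, not part of Ollama; the log path is the one from the step above.

```python
import os
import re

def find_gfx_versions(text: str) -> list[str]:
    """Return all gfx identifiers (e.g. 'gfx1032') found in the log text."""
    return sorted(set(re.findall(r"gfx\d{3,4}\w*", text)))

# Log path from the step above (Windows); adjust if your install differs.
log_path = os.path.expandvars(r"%LOCALAPPDATA%\Ollama\server.log")
if os.path.exists(log_path):
    with open(log_path, encoding="utf-8", errors="replace") as f:
        print(find_gfx_versions(f.read()))
```

If this prints something like `['gfx1032']`, that is the value to match against the ROCmLibs release names later.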
Important:
- There are two GPUs, integrated and discrete; GPU 0 was the discrete GPU for me (you want to run Ollama on your dGPU).
- Open Task Manager (`Ctrl + Shift + Esc`) and check the GPU names on the Performance tab if you aren't sure.
- Go to `C:\Users\%username%\.ollama`.
- Make a copy of the `.ollama` folder and rename the copy with `_backup` at the end.
- (Optional but recommended): Create a Windows System Restore Point.
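The copy-and-rename step above can also be scripted. This is an optional Python sketch (the helper name `backup_folder` is mine); doing it in Explorer works just as well.

```python
import os
import shutil

def backup_folder(src: str) -> str:
    """Copy src to src + '_backup' and return the backup path."""
    dst = src.rstrip("\\/") + "_backup"
    shutil.copytree(src, dst, dirs_exist_ok=True)
    return dst

# The .ollama folder lives in your user profile on Windows.
ollama_dir = os.path.expandvars(r"%USERPROFILE%\.ollama")
if os.path.isdir(ollama_dir):
    print("Backed up to", backup_folder(ollama_dir))
```

`dirs_exist_ok=True` lets you re-run the backup and refresh an existing `_backup` copy.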
- Download from: Ollama for AMD Releases
- Close Ollama if it’s running (system tray → Task Manager).
- Run the installer.
  (It's unsigned; click "More Info" → "Run Anyway".)
- After installation, check the hidden system tray to close the Ollama service that is running.
- Go to: ROCmLibs Latest Release
- Download the latest version (e.g., 0.6.2.4 currently).
- Find your gfx version's zip file and download it.
- Some GPUs have multiple versions — read carefully before downloading.
If the link isn't working:
- Visit the Ollama for AMD Wiki.
- Navigate to the Demo Release Version section.
- Find the ROCm libraries for your version (e.g., `rocm.gfx1035.for.hip.sdk.6.1.2.7z`).
- Extract the `.7z` file. You should find:
  - A `rocblas.dll` file.
  - A `library` folder.
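As a quick sanity check after extracting, you can confirm both expected items are present. This helper (`check_extracted` is my name for it, not from the guide) just tests for the two paths:

```python
import os

def check_extracted(folder: str) -> bool:
    """Return True if the extracted ROCmLibs folder contains both expected items."""
    return (os.path.isfile(os.path.join(folder, "rocblas.dll"))
            and os.path.isdir(os.path.join(folder, "library")))
```

If this returns `False`, you likely grabbed the wrong archive or extracted into a nested subfolder.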
- Navigate to `%localappdata%\Programs\Ollama\lib\ollama`.
- Locate the current `rocblas.dll`. It may be in the main folder or inside a subfolder like `rocm`.
- Backup the original files (rename them with `.old` or `_old` if you wish).
- Copy in the new `rocblas.dll`, and replace the `library` folder (usually inside `rocm\rocblas\`).
- Folder structure may vary slightly from guide references.
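The backup-and-replace steps above can be sketched in Python. This is an illustration under the folder layout described above (`rocblas.dll` in the lib folder, the `library` folder under `rocm\rocblas\`); `swap_in_rocblas` is my name, and since the structure may vary, check your own paths before running anything like this.

```python
import os
import shutil

def swap_in_rocblas(ollama_lib: str, extracted: str) -> None:
    # ollama_lib: the Ollama lib folder from the step above
    # extracted:  the folder you extracted the ROCmLibs .7z into

    # Replace rocblas.dll, keeping the original as rocblas.dll.old.
    old_dll = os.path.join(ollama_lib, "rocblas.dll")
    if os.path.exists(old_dll):
        shutil.move(old_dll, old_dll + ".old")
    shutil.copy2(os.path.join(extracted, "rocblas.dll"), old_dll)

    # Replace the library folder, usually under rocm\rocblas\.
    lib_dir = os.path.join(ollama_lib, "rocm", "rocblas", "library")
    if os.path.isdir(lib_dir):
        shutil.move(lib_dir, lib_dir + "_old")
    shutil.copytree(os.path.join(extracted, "library"), lib_dir)
```

Keeping the `.old` / `_old` copies means you can undo the swap by renaming them back.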
- This is a brief overview; it's recommended to read the Wiki for any changes or updates.
- Important:
  - Do not update Ollama normally via the app.
  - Always re-download and reinstall manually instead.
  - An automatic updater is available in byronleeee's releases.
- Issues?
  - Open an issue on the likelovewant GitHub page.
- Restoring Models:
  - If you backed up `.ollama`, you can restore previous models by copying back the `models` folder.
- Test GPU Usage:
  - Start with a small model.
  - Monitor GPU usage via Task Manager.
  - Note: If the model is too large, some of the load may fall back to the CPU.
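Restoring the `models` folder from the backup can be scripted the same way as the earlier backup step. A sketch (the name `restore_models` and the `_backup` path are assumptions from the naming used earlier in this guide):

```python
import os
import shutil

def restore_models(backup_dir: str, live_dir: str) -> None:
    """Copy the models folder from a .ollama backup into the live .ollama folder."""
    shutil.copytree(os.path.join(backup_dir, "models"),
                    os.path.join(live_dir, "models"),
                    dirs_exist_ok=True)

# Example, using the default paths from earlier steps (adjust if yours differ):
# restore_models(os.path.expandvars(r"%USERPROFILE%\.ollama_backup"),
#                os.path.expandvars(r"%USERPROFILE%\.ollama"))
```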