
Flatpak: Ollama doesn't use AMD plugin #1046

@gk-abosserhoff

Description


Describe the bug

Ollama doesn't detect the AMD plugin even though it is installed, so the GPU is not used.

Workaround

If I manually copy the folder
/var/lib/flatpak/runtime/com.jeffser.Alpaca.Plugins.AMD/x86_64/stable/active/files/lib/ollama/rocm
into
/var/lib/flatpak/runtime/com.jeffser.Alpaca.Plugins.Ollama/x86_64/stable/active/files/lib/ollama/
Ollama finds the GPU again.
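
For reference, a minimal Python sketch of that manual copy (my own hypothetical helper, not part of Alpaca; it assumes the runtime paths above, needs root privileges, and a plugin update will likely overwrite the copied files):

#!/usr/bin/env python3
# Sketch of the manual workaround: copy the ROCm libraries from the
# AMD plugin runtime into the Ollama plugin runtime so Ollama can find them.
import shutil
from pathlib import Path

SRC = Path("/var/lib/flatpak/runtime/com.jeffser.Alpaca.Plugins.AMD/"
           "x86_64/stable/active/files/lib/ollama/rocm")
DST = Path("/var/lib/flatpak/runtime/com.jeffser.Alpaca.Plugins.Ollama/"
           "x86_64/stable/active/files/lib/ollama/rocm")

# dirs_exist_ok=True lets the copy be re-applied after a runtime update.
shutil.copytree(SRC, DST, dirs_exist_ok=True)
print(f"copied {SRC} -> {DST}")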

There also seems to be a separate issue with the ROCm libraries: with large models that should overflow into system RAM, Ollama crashes:

rocblaslt error: Could not load /app/plugins/Ollama/lib/ollama/rocm/hipblaslt/library/TensileLibrary_lazy_gfx1201.dat
hipBLASLt error: Heuristic Fetch Failed!
This message will be only be displayed once, unless the ROCBLAS_VERBOSE_HIPBLASLT_ERROR environment variable is set.

rocBLAS warning: hipBlasLT failed, falling back to tensile. 
This message will be only be displayed once, unless the ROCBLAS_VERBOSE_TENSILE_ERROR environment variable is set.
graph_reserve: failed to allocate compute buffers
SIGSEGV: segmentation violation
PC=0x7f5813988faa m=9 sigcode=1 addr=0x7f5a02823ce8
signal arrived during cgo execution
