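# GitHub issue form for llamacpp-rocm bug reports.
# (Issue forms conventionally live at .github/ISSUE_TEMPLATE/<name>.yml; the exact
# filename is assumed here, not confirmed by this file.)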
name: Bug Report
description: Report an issue with llamacpp-rocm
labels: ["bug"]
body:
  - type: markdown
    attributes:
      value: |
        Thanks for taking the time to report an issue! Please fill in the details below so we can help you as quickly as possible.

  - type: dropdown
    id: os
    attributes:
      label: Operating System
      description: Which OS are you running?
      options:
        - Windows 10
        - Windows 11
        - Ubuntu 22.04
        - Ubuntu 24.04
        - Fedora
        - Arch Linux
        - Other Linux (specify below)
    validations:
      required: true

  - type: input
    id: os-other
    attributes:
      label: OS Details (if "Other" or additional info)
      description: Kernel version, distro version, or any other relevant OS details.
      placeholder: e.g. Debian 12, kernel 6.1

  - type: input
    id: gpu
    attributes:
      label: GPU Model
      description: Which AMD GPU are you using?
      placeholder: e.g. RX 7900 XTX, RX 6800 XT, MI300X
    validations:
      required: true

  - type: input
    id: build-version
    attributes:
      label: llamacpp-rocm Build Version
      description: Which release tag or build did you use? Check the [releases page](https://github.com/lemonade-sdk/llamacpp-rocm/releases).
      placeholder: e.g. b1223
    validations:
      required: true

  - type: input
    id: model
    attributes:
      label: Model Used
      description: Provide the Hugging Face link or checkpoint/GGUF filename.
      placeholder: e.g. https://huggingface.co/TheBloke/Llama-2-7B-GGUF or llama-2-7b.Q4_K_M.gguf
    validations:
      required: true
  - type: dropdown
    id: vulkan-repro
    attributes:
      label: Does this issue also occur with Vulkan on upstream llama.cpp?
      description: |
        Try reproducing with the official [ggml-org/llama.cpp](https://github.com/ggml-org/llama.cpp) using the Vulkan backend.
        This helps us determine whether the issue is ROCm-specific.
      options:
        - "Yes — also occurs with Vulkan on upstream llama.cpp"
        - "No — only happens with this ROCm build"
        - "Not tested"
    validations:
      required: true
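  # A minimal sketch of how a reporter might test the upstream Vulkan backend.
  # Assumed commands: GGML_VULKAN and llama-cli exist in current upstream
  # llama.cpp, but exact flags can vary between releases:
  #   cmake -B build -DGGML_VULKAN=ON
  #   cmake --build build --config Release
  #   ./build/bin/llama-cli -m llama-2-7b.Q4_K_M.gguf -p "Hello"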

  - type: textarea
    id: description
    attributes:
      label: Issue Description
      description: Describe what happened and what you expected to happen.
      placeholder: When I run ... I get ...
    validations:
      required: true

  - type: textarea
    id: additional-info
    attributes:
      label: Additional Information
      description: |
        Any other context that might help — logs, screenshots, command-line flags, quantisation type, etc.
      render: text
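
# Note: GitHub only renders issue forms committed to the repository's default
# branch; if the YAML is invalid, the form will not appear in the issue chooser.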