
Vulkan/AMDVLK: Device memory allocation fails when single compute buffer > ~2 GiB (same model works on RADV) #413

@kyuz0

Description


Problem description & steps to reproduce

There is an open issue on llama.cpp (ggml-org/llama.cpp#15054) for a bug affecting systems that use AMDVLK. Some GGUF models that load and run fine under RADV fail during model/context initialization in llama.cpp's Vulkan backend with VK_ERROR_OUT_OF_DEVICE_MEMORY:

ggml_vulkan: Device memory allocation of size 2819260416 failed.
ggml_vulkan: Requested buffer size exceeds device memory allocation limit: ErrorOutOfDeviceMemory
alloc_tensor_range: failed to allocate Vulkan0 buffer of size 2819260416

The failures correlate with a single Vulkan allocation (the ggml_vulkan compute buffer) exceeding the driver's per-allocation cap, VkPhysicalDeviceMaintenance3Properties::maxMemoryAllocationSize. On the affected machine this limit is 0x80000000 (2 GiB). The backend sometimes needs to request ~2.0–2.6 GiB for this buffer; the request succeeds on RADV but is rejected by AMDVLK, indicating the issue is the per-allocation limit, not total VRAM.
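
The cap can also be confirmed programmatically. Below is a minimal sketch (not part of llama.cpp, written only for illustration; the file name and exact output format are my own) that reads maxMemoryAllocationSize for every enumerated physical device via VkPhysicalDeviceMaintenance3Properties, which is core since Vulkan 1.1:

#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

int main() {
    // Create a Vulkan 1.1 instance so vkGetPhysicalDeviceProperties2 is available in core.
    VkApplicationInfo app{VK_STRUCTURE_TYPE_APPLICATION_INFO};
    app.apiVersion = VK_API_VERSION_1_1;
    VkInstanceCreateInfo ici{VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO};
    ici.pApplicationInfo = &app;
    VkInstance instance;
    if (vkCreateInstance(&ici, nullptr, &instance) != VK_SUCCESS) return 1;

    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, nullptr);
    std::vector<VkPhysicalDevice> devices(count);
    vkEnumeratePhysicalDevices(instance, &count, devices.data());

    for (VkPhysicalDevice dev : devices) {
        // The per-allocation cap is reported in VkPhysicalDeviceMaintenance3Properties,
        // chained into VkPhysicalDeviceProperties2.
        VkPhysicalDeviceMaintenance3Properties maint3{VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_MAINTENANCE_3_PROPERTIES};
        VkPhysicalDeviceProperties2 props2{VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_PROPERTIES_2};
        props2.pNext = &maint3;
        vkGetPhysicalDeviceProperties2(dev, &props2);

        std::printf("%s: maxMemoryAllocationSize = %llu bytes (%.2f GiB)\n",
                    props2.properties.deviceName,
                    (unsigned long long) maint3.maxMemoryAllocationSize,
                    maint3.maxMemoryAllocationSize / (1024.0 * 1024.0 * 1024.0));
    }

    vkDestroyInstance(instance, nullptr);
    return 0;
}

Compiled against the Vulkan headers and linked with -lvulkan, running this under each ICD should print the same 2 GiB / ~4 GiB values that vulkaninfo reports further below.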

The following response on the original llama.cpp issue made me think it was worth raising this here:

Yes, the driver sets a maximum allocation size and a maximum buffer size limit. This is 2GB on amdvlk and the proprietary AMD drivers, and 4GB on RADV. We have no control over this, I don't know why AMD keeps the limit below the theoretical maximum that all the other major Vulkan drivers use.
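
For reference, the failing allocation of 2,819,260,416 bytes is roughly 2.63 GiB: larger than AMDVLK's 2 GiB cap (0x80000000 = 2,147,483,648 bytes) but well within RADV's ~4 GiB limit, which matches the observed behaviour.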

Example (Gemma 3 27B BF16)

For example, this happens with unsloth's gemma-3-27b-it-BF16 GGUF (https://huggingface.co/unsloth/gemma-3-27b-it-GGUF):

$ llama-cli -ngl 99 -fa -m models/gemma-3-27b-it-BF16/gemma-3-27b-it-BF16-00001-of-00002.gguf 
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = Radeon 8060S Graphics (AMD open-source driver) | uma: 1 | fp16: 1 | bf16: 0 | warp size: 64 | shared memory: 32768 | int dot: 1 | matrix cores: KHR_coopmat
build: 6060 (9c35706b) with cc (GCC) 15.1.1 20250719 (Red Hat 15.1.1-5) for x86_64-redhat-linux
main: llama backend init
main: load the model and apply lora adapter, if any
llama_model_load_from_file_impl: using device Vulkan0 (Radeon 8060S Graphics) - 85720 MiB free
llama_model_loader: additional 1 GGUFs metadata loaded.
llama_model_loader: loaded meta data with 39 key-value pairs and 808 tensors from models/gemma-3-27b-it-BF16/gemma-3-27b-it-BF16-00001-of-00002.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = gemma3
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Gemma-3-27B-It
llama_model_loader: - kv   3:                           general.finetune str              = it
llama_model_loader: - kv   4:                           general.basename str              = Gemma-3-27B-It
llama_model_loader: - kv   5:                       general.quantized_by str              = Unsloth
llama_model_loader: - kv   6:                         general.size_label str              = 27B
llama_model_loader: - kv   7:                           general.repo_url str              = https://huggingface.co/unsloth
llama_model_loader: - kv   8:                      gemma3.context_length u32              = 131072
llama_model_loader: - kv   9:                    gemma3.embedding_length u32              = 5376
llama_model_loader: - kv  10:                         gemma3.block_count u32              = 62
llama_model_loader: - kv  11:                 gemma3.feed_forward_length u32              = 21504
llama_model_loader: - kv  12:                gemma3.attention.head_count u32              = 32
llama_model_loader: - kv  13:    gemma3.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  14:                gemma3.attention.key_length u32              = 128
llama_model_loader: - kv  15:              gemma3.attention.value_length u32              = 128
llama_model_loader: - kv  16:                          general.file_type u32              = 32
llama_model_loader: - kv  17:                      gemma3.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  18:            gemma3.attention.sliding_window u32              = 1024
llama_model_loader: - kv  19:             gemma3.attention.head_count_kv u32              = 16
llama_model_loader: - kv  20:                   gemma3.rope.scaling.type str              = linear
llama_model_loader: - kv  21:                 gemma3.rope.scaling.factor f32              = 8.000000
llama_model_loader: - kv  22:               general.quantization_version u32              = 2
llama_model_loader: - kv  23:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  24:                         tokenizer.ggml.pre str              = default
llama_model_loader: - kv  25:                      tokenizer.ggml.tokens arr[str,262208]  = ["<pad>", "<eos>", "<bos>", "<unk>", ...
llama_model_loader: - kv  26:                      tokenizer.ggml.scores arr[f32,262208]  = [-1000.000000, -1000.000000, -1000.00...
llama_model_loader: - kv  27:                  tokenizer.ggml.token_type arr[i32,262208]  = [3, 3, 3, 3, 3, 4, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv  28:                tokenizer.ggml.bos_token_id u32              = 2
llama_model_loader: - kv  29:                tokenizer.ggml.eos_token_id u32              = 106
llama_model_loader: - kv  30:            tokenizer.ggml.unknown_token_id u32              = 3
llama_model_loader: - kv  31:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  32:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  33:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  34:                    tokenizer.chat_template str              = {{ bos_token }}\n{%- if messages[0]['r...
llama_model_loader: - kv  35:            tokenizer.ggml.add_space_prefix bool             = false
llama_model_loader: - kv  36:                                   split.no u16              = 0
llama_model_loader: - kv  37:                                split.count u16              = 2
llama_model_loader: - kv  38:                        split.tensors.count i32              = 808
llama_model_loader: - type  f32:  373 tensors
llama_model_loader: - type bf16:  435 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = BF16
print_info: file size   = 50.31 GiB (16.00 BPW) 
load: special tokens cache size = 6415
load: token to piece cache size = 1.9446 MB
print_info: arch             = gemma3
print_info: vocab_only       = 0
print_info: n_ctx_train      = 131072
print_info: n_embd           = 5376
print_info: n_layer          = 62
print_info: n_head           = 32
print_info: n_head_kv        = 16
print_info: n_rot            = 128
print_info: n_swa            = 1024
print_info: is_swa_any       = 1
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 2
print_info: n_embd_k_gqa     = 2048
print_info: n_embd_v_gqa     = 2048
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-06
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 7.7e-02
print_info: n_ff             = 21504
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 2
print_info: rope scaling     = linear
print_info: freq_base_train  = 1000000.0
print_info: freq_scale_train = 0.125
print_info: n_ctx_orig_yarn  = 131072
print_info: rope_finetuned   = unknown
print_info: model type       = 27B
print_info: model params     = 27.01 B
print_info: general.name     = Gemma-3-27B-It
print_info: vocab type       = SPM
print_info: n_vocab          = 262208
print_info: n_merges         = 0
print_info: BOS token        = 2 '<bos>'
print_info: EOS token        = 106 '<end_of_turn>'
print_info: EOT token        = 106 '<end_of_turn>'
print_info: UNK token        = 3 '<unk>'
print_info: PAD token        = 0 '<pad>'
print_info: LF token         = 248 '<0x0A>'
print_info: EOG token        = 106 '<end_of_turn>'
print_info: max token length = 48
load_tensors: loading model tensors, this can take a while... (mmap = true)
ggml_vulkan: Device memory allocation of size 2819260416 failed.
ggml_vulkan: Requested buffer size exceeds device memory allocation limit: ErrorOutOfDeviceMemory
alloc_tensor_range: failed to allocate Vulkan0 buffer of size 2819260416
llama_model_load: error loading model: unable to allocate Vulkan0 buffer
llama_model_load_from_file_impl: failed to load model
common_init_from_params: failed to load model 'models/gemma-3-27b-it-BF16/gemma-3-27b-it-BF16-00001-of-00002.gguf'
main: error: unable to load model

However, with RADV the model loads just fine:

toolbox enter llama-vulkan-radv 
⬢ [kyuz0@toolbx ~]$ llama-cli -ngl 99 -fa -m models/gemma-3-27b-it-BF16/gemma-3-27b-it-BF16-00001-of-00002.gguf 
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = Radeon 8060S Graphics (RADV GFX1151) (radv) | uma: 1 | fp16: 1 | bf16: 0 | warp size: 64 | shared memory: 65536 | int dot: 1 | matrix cores: KHR_coopmat
build: 6040 (66625a59) with cc (GCC) 15.1.1 20250719 (Red Hat 15.1.1-5) for x86_64-redhat-linux
main: llama backend init
main: load the model and apply lora adapter, if any
llama_model_load_from_file_impl: using device Vulkan0 (Radeon 8060S Graphics (RADV GFX1151)) - 87722 MiB free
llama_model_loader: additional 1 GGUFs metadata loaded.
llama_model_loader: loaded meta data with 39 key-value pairs and 808 tensors from models/gemma-3-27b-it-BF16/gemma-3-27b-it-BF16-00001-of-00002.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = gemma3
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Gemma-3-27B-It
llama_model_loader: - kv   3:                           general.finetune str              = it
llama_model_loader: - kv   4:                           general.basename str              = Gemma-3-27B-It
llama_model_loader: - kv   5:                       general.quantized_by str              = Unsloth
llama_model_loader: - kv   6:                         general.size_label str              = 27B
llama_model_loader: - kv   7:                           general.repo_url str              = https://huggingface.co/unsloth
llama_model_loader: - kv   8:                      gemma3.context_length u32              = 131072
llama_model_loader: - kv   9:                    gemma3.embedding_length u32              = 5376
llama_model_loader: - kv  10:                         gemma3.block_count u32              = 62
llama_model_loader: - kv  11:                 gemma3.feed_forward_length u32              = 21504
llama_model_loader: - kv  12:                gemma3.attention.head_count u32              = 32
llama_model_loader: - kv  13:    gemma3.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  14:                gemma3.attention.key_length u32              = 128
llama_model_loader: - kv  15:              gemma3.attention.value_length u32              = 128
llama_model_loader: - kv  16:                          general.file_type u32              = 32
llama_model_loader: - kv  17:                      gemma3.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  18:            gemma3.attention.sliding_window u32              = 1024
llama_model_loader: - kv  19:             gemma3.attention.head_count_kv u32              = 16
llama_model_loader: - kv  20:                   gemma3.rope.scaling.type str              = linear
llama_model_loader: - kv  21:                 gemma3.rope.scaling.factor f32              = 8.000000
llama_model_loader: - kv  22:               general.quantization_version u32              = 2
llama_model_loader: - kv  23:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  24:                         tokenizer.ggml.pre str              = default
llama_model_loader: - kv  25:                      tokenizer.ggml.tokens arr[str,262208]  = ["<pad>", "<eos>", "<bos>", "<unk>", ...
llama_model_loader: - kv  26:                      tokenizer.ggml.scores arr[f32,262208]  = [-1000.000000, -1000.000000, -1000.00...
llama_model_loader: - kv  27:                  tokenizer.ggml.token_type arr[i32,262208]  = [3, 3, 3, 3, 3, 4, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv  28:                tokenizer.ggml.bos_token_id u32              = 2
llama_model_loader: - kv  29:                tokenizer.ggml.eos_token_id u32              = 106
llama_model_loader: - kv  30:            tokenizer.ggml.unknown_token_id u32              = 3
llama_model_loader: - kv  31:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  32:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  33:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  34:                    tokenizer.chat_template str              = {{ bos_token }}\n{%- if messages[0]['r...
llama_model_loader: - kv  35:            tokenizer.ggml.add_space_prefix bool             = false
llama_model_loader: - kv  36:                                   split.no u16              = 0
llama_model_loader: - kv  37:                                split.count u16              = 2
llama_model_loader: - kv  38:                        split.tensors.count i32              = 808
llama_model_loader: - type  f32:  373 tensors
llama_model_loader: - type bf16:  435 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = BF16
print_info: file size   = 50.31 GiB (16.00 BPW) 
load: special tokens cache size = 6415
load: token to piece cache size = 1.9446 MB
print_info: arch             = gemma3
print_info: vocab_only       = 0
print_info: n_ctx_train      = 131072
print_info: n_embd           = 5376
print_info: n_layer          = 62
print_info: n_head           = 32
print_info: n_head_kv        = 16
print_info: n_rot            = 128
print_info: n_swa            = 1024
print_info: is_swa_any       = 1
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 2
print_info: n_embd_k_gqa     = 2048
print_info: n_embd_v_gqa     = 2048
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-06
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 7.7e-02
print_info: n_ff             = 21504
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 2
print_info: rope scaling     = linear
print_info: freq_base_train  = 1000000.0
print_info: freq_scale_train = 0.125
print_info: n_ctx_orig_yarn  = 131072
print_info: rope_finetuned   = unknown
print_info: model type       = 27B
print_info: model params     = 27.01 B
print_info: general.name     = Gemma-3-27B-It
print_info: vocab type       = SPM
print_info: n_vocab          = 262208
print_info: n_merges         = 0
print_info: BOS token        = 2 '<bos>'
print_info: EOS token        = 106 '<end_of_turn>'
print_info: EOT token        = 106 '<end_of_turn>'
print_info: UNK token        = 3 '<unk>'
print_info: PAD token        = 0 '<pad>'
print_info: LF token         = 248 '<0x0A>'
print_info: EOG token        = 106 '<end_of_turn>'
print_info: max token length = 48
load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors: offloading 62 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 63/63 layers to GPU
load_tensors:      Vulkan0 model buffer size = 51518.82 MiB
load_tensors:   CPU_Mapped model buffer size =  2688.66 MiB
.............................................................................................
llama_context: constructing llama_context
llama_context: non-unified KV cache requires ggml_set_rows() - forcing unified KV cache
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 4096
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch       = 2048
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = 1
llama_context: kv_unified    = true
llama_context: freq_base     = 1000000.0
llama_context: freq_scale    = 0.125
llama_context: n_ctx_per_seq (4096) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_context: Vulkan_Host  output buffer size =     1.00 MiB
llama_kv_cache_unified_iswa: creating non-SWA KV cache, size = 4096 cells
llama_kv_cache_unified:    Vulkan0 KV buffer size =   320.00 MiB
llama_kv_cache_unified: size =  320.00 MiB (  4096 cells,  10 layers,  1/ 1 seqs), K (f16):  160.00 MiB, V (f16):  160.00 MiB
llama_kv_cache_unified: LLAMA_SET_ROWS=0, using old ggml_cpy() method for backwards compatibility
llama_kv_cache_unified_iswa: creating     SWA KV cache, size = 1536 cells
llama_kv_cache_unified:    Vulkan0 KV buffer size =   624.00 MiB
llama_kv_cache_unified: size =  624.00 MiB (  1536 cells,  52 layers,  1/ 1 seqs), K (f16):  312.00 MiB, V (f16):  312.00 MiB
llama_kv_cache_unified: LLAMA_SET_ROWS=0, using old ggml_cpy() method for backwards compatibility
llama_context:    Vulkan0 compute buffer size =   522.62 MiB
llama_context: Vulkan_Host compute buffer size =    21.51 MiB
llama_context: graph nodes  = 2613
llama_context: graph splits = 2
common_init_from_params: KV cache shifting is not supported for this context, disabling KV cache shifting
common_init_from_params: added <end_of_turn> logit bias = -inf
common_init_from_params: setting dry_penalty_last_n to ctx_size = 4096
common_init_from_params: warming up the model with an empty run - please wait ... (--no-warmup to disable)
main: llama threadpool init, n_threads = 16
main: chat template is available, enabling conversation mode (disable it with -no-cnv)
main: chat template example:
<start_of_turn>user
You are a helpful assistant

Hello<end_of_turn>
<start_of_turn>model
Hi there<end_of_turn>
<start_of_turn>user
How are you?<end_of_turn>
<start_of_turn>model


system_info: n_threads = 16 (n_threads_batch = 16) / 32 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | BMI2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | LLAMAFILE = 1 | OPENMP = 1 | REPACK = 1 | 

main: interactive mode on.
sampler seed: 3532905611
sampler params: 
	repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000
	dry_multiplier = 0.000, dry_base = 1.750, dry_allowed_length = 2, dry_penalty_last_n = 4096
	top_k = 40, top_p = 0.950, min_p = 0.050, xtc_probability = 0.000, xtc_threshold = 0.100, typical_p = 1.000, top_n_sigma = -1.000, temp = 0.800
	mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampler chain: logits -> logit-bias -> penalties -> dry -> top-n-sigma -> top-k -> typical -> top-p -> min-p -> xtc -> temp-ext -> dist 
generate: n_ctx = 4096, n_batch = 2048, n_predict = -1, n_keep = 1

== Running in interactive mode. ==
 - Press Ctrl+C to interject at any time.
 - Press Return to return control to the AI.
 - To return control without starting a new line, end your input with '/'.
 - If you want to submit another line, end your input with '\'.
 - Not using system message. To change it, set a different value via -sys PROMPT

Vulkaninfo output

$ vulkaninfo | grep -i maxMemoryAllocationSize

'DISPLAY' environment variable not set... skipping surface info
	maxMemoryAllocationSize           = 0xfffffffc
	maxMemoryAllocationSize           = 0x80000000
$ vulkaninfo | grep -i maxMemoryAllocationSize

'DISPLAY' environment variable not set... skipping surface info
	maxMemoryAllocationSize           = 0x80000000
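
For context, 0x80000000 is exactly 2 GiB (2,147,483,648 bytes) and 0xfffffffc is just under 4 GiB; the first invocation presumably enumerates two devices/ICDs (hence two lines), while the second sees only the device reporting the 2 GiB cap.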

Name and Version

llama-cli --version
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = Radeon 8060S Graphics (AMD open-source driver) | uma: 1 | fp16: 1 | bf16: 0 | warp size: 64 | shared memory: 32768 | int dot: 1 | matrix cores: KHR_coopmat
version: 6060 (9c35706b)
built with cc (GCC) 15.1.1 20250719 (Red Hat 15.1.1-5) for x86_64-redhat-linux

Operating systems

Linux
