Description
Some integrated GPUs (see report 43990) have a memory layout that closely mirrors that of a dGPU, in particular multiple VRAM heaps with the primary VRAM heap not host-visible.
This behavior may change with the driver/OS on an identical system; see report 44007 for the same system as 43990 but with only one DEVICE_LOCAL heap.
When requesting memory with VMA_ALLOCATION_CREATE_HOST_ACCESS_ALLOW_TRANSFER_INSTEAD_BIT, the behavior of VMA changes based on the reported VkPhysicalDeviceType. This seems incongruent with the Vulkan spec's description of that enum:
"The physical device type is advertised for informational purposes only, and does not directly affect the operation of the system. However, the device type may correlate with other advertised properties or capabilities of the system, such as how many memory heaps there are."
In some cases, this behavioral difference can cause a memory allocation to fail simply because the GPU is reported as integrated rather than discrete.
The behavior difference stems from this line, which later decides whether HOST_VISIBLE is a preferred or a required flag for the memory allocation, even when ALLOW_TRANSFER_INSTEAD is set.
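Paraphrased, the decision at that line amounts to roughly the following. This is a simplified sketch, not the actual VMA source; the function and variable names are illustrative, but the key point is that the branch keys off whether the device reports itself as an integrated GPU:

```cpp
#include <vulkan/vulkan.h>

// Paraphrased sketch of the flag selection in question (not the actual VMA
// source). isIntegratedGpu is derived from VkPhysicalDeviceProperties::deviceType.
static void pickHostVisibleRequirement(bool isIntegratedGpu,
                                       bool allowTransferInstead,
                                       VkMemoryPropertyFlags& requiredFlags,
                                       VkMemoryPropertyFlags& preferredFlags)
{
    if (!isIntegratedGpu && allowTransferInstead)
    {
        // Discrete GPU: HOST_VISIBLE is only preferred, so a non-mappable
        // DEVICE_LOCAL type can still be chosen and the caller falls back to
        // a staging transfer, as the documentation describes.
        preferredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
    }
    else
    {
        // Integrated GPU: HOST_VISIBLE becomes required, even though
        // ALLOW_TRANSFER_INSTEAD was set. If the selected heap exposes no
        // HOST_VISIBLE type, no memory type matches and the allocation fails.
        requiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
    }
}
```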
In my specific case, at startup I inspect the heap layout of the device to decide whether it should be treated as UMA for read-back purposes, and I select the heap(s) to be used for allocations (src link), later passing the bits for all memory types of those heaps to VMA allocation functions.
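For reference, the startup scan looks roughly like this. It is a minimal sketch of the approach rather than my exact code; the struct and function names are placeholders:

```cpp
#include <vulkan/vulkan.h>
#include <cstdint>

// Hypothetical helper mirroring the startup heap scan described above:
// decide whether the device should be treated as UMA for read-back, and
// collect the memory-type bits for the heap(s) we intend to allocate from.
struct HeapSelection {
    bool     treatAsUma      = false;
    uint32_t allowedTypeBits = 0;  // later passed as VmaAllocationCreateInfo::memoryTypeBits
};

HeapSelection selectHeaps(VkPhysicalDevice physicalDevice) {
    VkPhysicalDeviceMemoryProperties memProps{};
    vkGetPhysicalDeviceMemoryProperties(physicalDevice, &memProps);

    HeapSelection sel{};
    // Pick the DEVICE_LOCAL heap(s) and note whether any of their types are HOST_VISIBLE.
    for (uint32_t typeIndex = 0; typeIndex < memProps.memoryTypeCount; ++typeIndex) {
        const VkMemoryType& type = memProps.memoryTypes[typeIndex];
        const VkMemoryHeap& heap = memProps.memoryHeaps[type.heapIndex];
        if ((heap.flags & VK_MEMORY_HEAP_DEVICE_LOCAL_BIT) == 0) {
            continue;
        }
        sel.allowedTypeBits |= (1u << typeIndex);
        if (type.propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) {
            sel.treatAsUma = true;  // device-local memory is mappable -> UMA-style read-back
        }
    }
    return sel;
}
```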
When creating a buffer (src link) I always set prefer-mappable with allow-transfer. On dGPUs this works exactly as expected and as the VMA documentation describes. On an iGPU with UMA it also works exactly as expected. On an iGPU without UMA/mappable memory (for the selected device heap), the allocation fails instead of returning a non-mappable allocation. Based on the VMA documentation, I would expect the allocation to still succeed and return a non-mappable allocation.
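A minimal repro sketch of that buffer creation, assuming "prefer mappable, allow transfer" corresponds to the HOST_ACCESS_SEQUENTIAL_WRITE + ALLOW_TRANSFER_INSTEAD flag combination; the buffer size and usage here are placeholders, and allowedTypeBits comes from the heap scan above:

```cpp
#include <vulkan/vulkan.h>
#include "vk_mem_alloc.h"

// Hypothetical repro of the buffer creation described above.
VkResult createBuffer(VmaAllocator allocator, uint32_t allowedTypeBits,
                      VkBuffer* outBuffer, VmaAllocation* outAllocation) {
    VkBufferCreateInfo bufferInfo{VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO};
    bufferInfo.size  = 64 * 1024;  // placeholder size
    bufferInfo.usage = VK_BUFFER_USAGE_TRANSFER_DST_BIT | VK_BUFFER_USAGE_STORAGE_BUFFER_BIT;

    VmaAllocationCreateInfo allocInfo{};
    allocInfo.usage = VMA_MEMORY_USAGE_AUTO_PREFER_DEVICE;
    // "Prefer mappable, allow transfer instead": host access is requested, but a
    // non-mappable allocation plus a staging transfer is explicitly acceptable.
    allocInfo.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT |
                      VMA_ALLOCATION_CREATE_HOST_ACCESS_ALLOW_TRANSFER_INSTEAD_BIT;
    allocInfo.memoryTypeBits = allowedTypeBits;  // restrict to the selected heap(s)

    // On a dGPU (or a UMA iGPU) this succeeds. On an iGPU whose selected heap has no
    // HOST_VISIBLE type, it currently fails instead of falling back to a
    // non-mappable allocation.
    return vmaCreateBuffer(allocator, &bufferInfo, &allocInfo,
                           outBuffer, outAllocation, nullptr);
}
```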
Related issue: RogueLogix/Cinnabar#22