I have code that checks for `VK_ERROR_OUT_OF_POOL_MEMORY` from a descriptor pool allocation and handles it. When a pool is exhausted, it records the current semaphore tick, places the pool in a pending queue so it isn't reused until after that tick, and then either fetches a free pool from the pending queue or creates a new one if needed.
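For reference, here is a minimal sketch of that recycling scheme, assuming a timeline-semaphore "tick" that advances once per frame; all names (`PoolRecycler`, `acquirePool`, the pool sizes, etc.) are illustrative rather than taken from my actual code:

```cpp
#include <vulkan/vulkan.h>
#include <cstdint>
#include <deque>

class PoolRecycler {
public:
    // Allocate descriptor sets, retiring and replacing the active pool
    // whenever the driver reports it is exhausted.
    VkResult allocate(VkDevice device, const VkDescriptorSetLayout* layouts,
                      uint32_t count, VkDescriptorSet* sets) {
        for (;;) {
            if (m_activePool == VK_NULL_HANDLE) {
                m_activePool = acquirePool(device);
            }

            VkDescriptorSetAllocateInfo info{};
            info.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_ALLOCATE_INFO;
            info.descriptorPool = m_activePool;
            info.descriptorSetCount = count;
            info.pSetLayouts = layouts;

            VkResult res = vkAllocateDescriptorSets(device, &info, sets);
            if (res != VK_ERROR_OUT_OF_POOL_MEMORY &&
                res != VK_ERROR_FRAGMENTED_POOL) {
                return res; // success, or a genuine failure
            }

            // Pool exhausted: park it until the GPU has passed the tick
            // that was current when it retired, then retry with a pool
            // recycled from the pending queue (or a brand-new one).
            m_pending.push_back({m_activePool, m_currentTick});
            m_activePool = VK_NULL_HANDLE;
        }
    }

    // Called by the frame loop with the current and last-completed ticks.
    void setCurrentTick(uint64_t tick)   { m_currentTick = tick; }
    void setCompletedTick(uint64_t tick) { m_completedTick = tick; }

private:
    struct PendingPool { VkDescriptorPool pool; uint64_t retireTick; };

    VkDescriptorPool acquirePool(VkDevice device) {
        // Reuse the oldest retired pool if the GPU is done with it.
        if (!m_pending.empty() &&
            m_pending.front().retireTick <= m_completedTick) {
            VkDescriptorPool pool = m_pending.front().pool;
            m_pending.pop_front();
            vkResetDescriptorPool(device, pool, 0);
            return pool;
        }
        return createPool(device);
    }

    VkDescriptorPool createPool(VkDevice device) {
        // Illustrative sizes; real code would size per descriptor type.
        VkDescriptorPoolSize size{VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, 256};
        VkDescriptorPoolCreateInfo ci{};
        ci.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO;
        ci.maxSets = 256;
        ci.poolSizeCount = 1;
        ci.pPoolSizes = &size;
        VkDescriptorPool pool = VK_NULL_HANDLE;
        vkCreateDescriptorPool(device, &ci, nullptr, &pool);
        return pool;
    }

    VkDescriptorPool m_activePool = VK_NULL_HANDLE;
    std::deque<PendingPool> m_pending;  // pools waiting on the GPU
    uint64_t m_currentTick = 0;         // tick recorded when a pool retires
    uint64_t m_completedTick = 0;       // last tick the GPU has finished
};
```

The whole pattern hinges on `vkAllocateDescriptorSets` actually returning `VK_ERROR_OUT_OF_POOL_MEMORY` when the pool is out of space.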
However, it seems that this code can never work as intended, because the allocate calls instead return `VK_SUCCESS` and spam hundreds of instances of the following in my output:

```
[mvk-warn] VK_ERROR_OUT_OF_POOL_MEMORY: VkDescriptorPool exhausted pool of (x) descriptors. Allocating descriptor dynamically.
```
I understand that this behavior was added to accommodate certain applications that rely on looser driver behavior, but can there at least be an option to turn it off? It seems suboptimal to be stuck either with hundreds of dynamic allocations outside the pool, or with having to size a single pool for the largest possible need from the start, when the actual needs can vary depending on load.
Also, as a side note: even with this behavior turned off, getting forced `stderr` output by default every time the pool runs out of descriptors isn't great either.