
Descriptor pools never report when out of memory #2449

Open
squidbus opened this issue Feb 16, 2025 · 0 comments

@squidbus (Contributor) commented Feb 16, 2025

I have code that checks for VK_ERROR_OUT_OF_POOL_MEMORY from a descriptor pool allocation and handles it. When a pool is exhausted, it records the current semaphore tick and places the pool in a pending queue so it is not reused until after that tick completes; it then either fetches a free pool from the pending queue or creates a new one if none is available.

However, this code can never work as intended, because the allocate calls instead return VK_SUCCESS and spam hundreds of instances of the following warning in my output:

[mvk-warn] VK_ERROR_OUT_OF_POOL_MEMORY: VkDescriptorPool exhausted pool of (x) descriptors. Allocating descriptor dynamically.

I understand that this behavior was added to accommodate certain applications that rely on looser driver behavior, but could there at least be an option to turn it off? It seems suboptimal to either be stuck with hundreds of dynamic allocations outside the pool, or to have to size a single pool for the largest possible need up front, when the actual needs vary with load.

As a side note: even if this behavior could be turned off, forcing a warning to stderr by default every time a pool runs out of descriptors isn't great either.
