
WGPU preferring a low power graphics adapter by default #2810

Open
laycookie opened this issue Feb 21, 2025 · 6 comments
Labels
bug Something isn't working

Comments

@laycookie

laycookie commented Feb 21, 2025

Is your issue REALLY a bug?

  • My issue is indeed a bug!
  • I am not crazy! I will not fill out this form just to ask a question or request a feature. Pinky promise.

Is there an existing issue for this?

  • I have searched the existing issues.

Is this issue related to iced?

  • My hardware is compatible and my graphics drivers are up-to-date.

What happened?

In cases where the user has both a discrete and an integrated GPU, iced currently prefers the integrated GPU. This is a huge problem: not only is it the lower-powered option, it is also practically always not the GPU the user's output device is connected to, which results in severe performance issues.

What is the expected behavior?

Currently, iced picks the GPU by first calling wgpu::util::power_preference_from_env() (which was removed in the latest version of wgpu). That function checks for the user's power preference via an environment variable (WGPU_POWER_PREF), which in practice is never set, so it returns None.

        let adapter_options = wgpu::RequestAdapterOptions {
            power_preference: wgpu::util::power_preference_from_env()
                .unwrap_or(if settings.antialiasing.is_none() {
                    wgpu::PowerPreference::LowPower
                } else {
                    wgpu::PowerPreference::HighPerformance
                }),

This means the power preference ends up being decided by whether anti-aliasing is enabled. I'm not exactly sure what the logic behind this decision was at the time, and I don't know much about low-powered devices either, so I digress (this becomes relevant below, where I speculate on when the LowPower preference should be set).
Intuitively, one might think it is checking whether the user has anti-aliasing set as a system default: if it isn't enabled, that most likely means they are on a device that is either low-powered or trying to preserve battery as much as possible, and in both cases the LowPower preference is probably preferred.
In reality, after tracking down where the settings get initialized, you are led to the initialization of the Program or Daemon, which means it is actually just checking whether the app itself has anti-aliasing enabled, which by default it does not.
Sourced from the iced::daemon function:

    Daemon {
        raw: Instance {
            update,
            view,
            _state: PhantomData,
            _message: PhantomData,
            _theme: PhantomData,
            _renderer: PhantomData,
        },
        settings: Settings::default(),
    }

Sourced from wgpu Settings definition:

impl Default for Settings {
    fn default() -> Settings {
        Settings {
            present_mode: wgpu::PresentMode::AutoVsync,
            backends: wgpu::Backends::all(),
            default_font: Font::default(),
            default_text_size: Pixels(16.0),
            antialiasing: None,
        }
    }
}
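
For what it's worth, this coupling means that right now the only in-code way for an application to end up on the high-performance adapter (short of setting the WGPU_POWER_PREF environment variable) is to enable anti-aliasing. A rough sketch of that workaround, assuming the 0.13 application builder and its antialiasing method (names may differ on master):

    // Rough sketch: enabling anti-aliasing flips settings.antialiasing to Some(..),
    // which is what currently selects wgpu::PowerPreference::HighPerformance.
    use iced::widget::text;

    pub fn main() -> iced::Result {
        iced::application("Example", update, view)
            .antialiasing(true) // side effect: the discrete GPU gets picked
            .run()
    }

    fn update(_counter: &mut u64, _message: ()) {}

    fn view(_counter: &u64) -> iced::Element<'_, ()> {
        text("Hopefully running on the discrete GPU").into()
    }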

Depending on the simplicity of the app and the power of the integrated GPU, the performance hit might mostly be noticeable only on window resize. However, in cases such as:

  • Windows being the best operating system in the universe and trashing the performance of old enough integrated GPUs (currently only tested on an i7 13th gen., but I can potentially also test on an i7 8th gen. and an R9 9950X at some point if really needed), the performance hit I measured got the app down to 15 frames per second on a simple app that renders only 5 widgets.
  • Apps using iced that grow more complex over time might experience a reduction in performance disproportionate to the users' computer specifications.
  • Users might prefer the buggier software fallback, as it typically performs better than wgpu in the scenario described in the "What happened?" section and repeated above.
  • Developers trying to debug app performance with a manufacturer's GPU profiler might fail because the iced application isn't running on the expected GPU.

the current behavior might result in a lot of headaches for both users and developers.

Also, as a side note: this issue isn't something everyone using this crate will necessarily be capable of debugging, so fixing it might also help resolve issues like #2750.

TLDR:

Replacing this:

        let adapter_options = wgpu::RequestAdapterOptions {
            power_preference: wgpu::util::power_preference_from_env()
                .unwrap_or(if settings.antialiasing.is_none() {
                    wgpu::PowerPreference::LowPower
                } else {
                    wgpu::PowerPreference::HighPerformance
                }),

To this:

        let adapter_options = wgpu::RequestAdapterOptions {
            power_preference: wgpu::PowerPreference::HighPerformance,

This results in people who have both an integrated and a discrete GPU seeing a 5000% increase in performance on average, while also making the code compatible with newer versions of the wgpu crate.

I can push this change if I get a green light, but if there are wishes to handle this better, I can look into implementing some other solution to this problem. Just note that I don't have a Mac on me right now for writing platform-specific code.
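
One possible middle ground, if making the adapter choice configurable is preferred over a hard-coded default, could look roughly like this (a sketch only: the power_preference field on Settings is hypothetical, and on newer wgpu the env helper would be wgpu::PowerPreference::from_env() instead):

    let adapter_options = wgpu::RequestAdapterOptions {
        // Hypothetical: `settings.power_preference` would be a new
        // Option<wgpu::PowerPreference> field on the wgpu renderer's Settings.
        power_preference: wgpu::util::power_preference_from_env() // env variable still wins
            .or(settings.power_preference)                        // then the per-app setting
            .unwrap_or(wgpu::PowerPreference::HighPerformance),   // then a default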

Sources:
https://github.com/iced-rs/iced/blob/master/wgpu/src/window/compositor.rs#L84-L89
https://github.com/iced-rs/iced/blob/master/src/daemon.rs#L74-L84
https://github.com/iced-rs/iced/blob/master/wgpu/src/settings.rs#L32-L42

Version

master

Operating System

Windows

Do you have any log output?

@laycookie laycookie added the bug Something isn't working label Feb 21, 2025
@edwloef
Contributor

edwloef commented Feb 21, 2025

Some input on this (my opinion):

  • wgpu::util::power_preference_from_env() was replaced by wgpu::PowerPreference::from_env(). I don't see the harm in keeping that bit there tbh, especially if it's usually but not always unset
  • if that bit is removed, behavior should definitely be configurable in some way so this doesn't unconditionally tank the battery life of laptop users
  • the current stable iced release can perform terribly on a dedicated Nvidia GPU on some Linux setups, so making sure that's not an issue anymore would be cool as well before it becomes the default

@laycookie
Author

  • wgpu::util::power_preference_from_env() was replaced by wgpu::PowerPreference::from_env(). I don't see the harm in keeping that bit there tbh, especially if it's usually but not always unset

Ya, I can see it being useful for rare debugging purposes. I might try upgrading iced to the latest wgpu crate version, if not too many things decide to break during the upgrade.

  • if that bit is removed, behavior should definitely be configurable in some way so this doesn't unconditionally tank the battery life of laptop users

I agree that this behavior should probably be configurable in some way for whoever needs it. I'm a little skeptical about the battery-life benefits on laptops from my knowledge of computers, but I have also mostly only investigated desktops, so I can't really speak on this topic much, aside from pointing out that it would be nice if somebody could benchmark this for us. However, as of now I can propose setting the GPU based on whether iced can detect a battery on the device (I will attempt to investigate how to do so slightly later, if that is even possible).
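
Roughly, the idea would be something like this (a sketch only; has_battery() is a hypothetical helper, and whether a battery can be detected reliably and portably is exactly what I still need to investigate):

    // Sketch of the battery-based heuristic. `has_battery()` is hypothetical and not
    // implemented anywhere; it could be backed by a crate or a platform API.
    fn default_power_preference() -> wgpu::PowerPreference {
        if has_battery() {
            // A battery usually means a laptop: keep preferring the integrated GPU.
            wgpu::PowerPreference::LowPower
        } else {
            // No battery usually means a desktop: prefer the discrete GPU.
            wgpu::PowerPreference::HighPerformance
        }
    }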

  • the current stable iced release can perform terribly on a dedicated Nvidia GPU on some Linux setups, so making sure that's not an issue anymore would be cool as well before it becomes the default

I sadly haven't really gone through those issues, and I would really appreciate it if you could link them. From the way you phrased it, it sounds like the reasons for the poor performance might be unknown, so I feel like it might be worthwhile checking whether the cause of the poor performance is this exact issue that I'm pointing out.

@edwloef
Contributor

edwloef commented Feb 21, 2025

Ya, I can see it being useful for rare debugging purposes.

I use it to force my integrated GPU on my laptop hahaha

I'm a little skeptical about the battery-life benefits on laptops from my knowledge of computers, but I have also mostly only investigated desktops, so I can't really speak on this topic much, aside from pointing out that it would be nice if somebody could benchmark this for us.

I can completely turn off my dedicated GPU on my laptop while it's not in use, saving me around 7W of total consumption in comparison to an idle DGPU. Having it on would halve my battery life from ~10h to ~5h.

I sadly haven't really gone through those issues, and I would really appreciate it if you could link them. From the way you phrased it, it sounds like the reasons for the poor performance might be unknown, so I feel like it might be worthwhile checking whether the cause of the poor performance is this exact issue that I'm pointing out.

That's just my own experience, I haven't reported it because I haven't gotten around to testing whether the master branch exhibits the same behavior, or figuring out why it happens.

@laycookie
Author

@edwloef By the way, I almost forgot: testing software rendering yields better performance on my machine. So I was wondering, have you ever measured the performance and battery-life difference between iGPU and CPU rendering on your laptop?

@shzhe02

shzhe02 commented Feb 22, 2025

Turns out this was why my system fails to get any iced applications to run...
My iGPU seems to not be compatible with iced, and I get a validation error every time I run any iced application:

ERROR wgpu_hal::vulkan::adapter: get_physical_device_surface_present_modes: ERROR_SURFACE_LOST_KHR    

thread 'main' panicked at /var/home/<me>/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/wgpu-23.0.1/src/backend/wgpu_core.rs:719:18:
Error in Surface::configure: Validation Error

Caused by:
  Surface does not support the adapter's queue family

(This is using the iGPU of a Ryzen 9 9900X, attempting to run the todos example)

In case others have a similar problem and have a compatible dGPU that iced isn't using, just setting the env variable WGPU_POWER_PREF='high' seems to do the trick (for now). Not sure how the PR mentioned above will change that, though.

Would be nice to have iced keep trying other devices to find one that's compatible instead of just failing. Or better yet, don't use devices that aren't compatible...
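
Something along these lines might work (a rough sketch, not iced's actual code; it assumes an instance and surface already exist, and enumerate_adapters is only available on native targets):

    // Rough sketch: enumerate every adapter and pick the first one that can actually
    // present to the surface, preferring discrete GPUs over integrated ones.
    fn pick_compatible_adapter(
        instance: &wgpu::Instance,
        surface: &wgpu::Surface<'_>,
    ) -> Option<wgpu::Adapter> {
        let mut adapters = instance.enumerate_adapters(wgpu::Backends::all());

        // Discrete first, then integrated, then anything else.
        adapters.sort_by_key(|adapter| match adapter.get_info().device_type {
            wgpu::DeviceType::DiscreteGpu => 0,
            wgpu::DeviceType::IntegratedGpu => 1,
            _ => 2,
        });

        // Skip adapters that cannot present to this surface at all.
        adapters
            .into_iter()
            .find(|adapter| adapter.is_surface_supported(surface))
    }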

Not sure if I should create a new issue for this, though, considering it can be temporarily worked around.

@laycookie
Author

laycookie commented Feb 23, 2025

In case others have a similar problem and have a compatible dGPU that iced isn't using, just setting the env variable WGPU_POWER_PREF='high' seems to do the trick (for now). Not sure how the PR mentioned above will change that, though.

The PR above still prioritizes the env variable above all if it exists, at least as of now.
