Windows misreporting high GPU usage #251
Sounds like it's just about the presence of vertical sync. Bunnymark is a benchmark, so it doesn't have vsync enabled (see blade/examples/bunnymark/main.rs, line 73, at 4588302).

Are you sure that Windows is misreporting this? Perhaps it's just a bug in Zed's Windows platform that ends up pushing out frames as fast as possible without a proper update frequency.
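For reference, blade selects the presentation mode through `SurfaceConfig::display_sync` (the same field used in the test code below). Here is a minimal sketch of requesting vsync-like behavior; the `DisplaySync::Block` variant and its exact semantics are an assumption on my part:

```rust
use blade_graphics as gpu;

fn main() {
    // Sketch only: DisplaySync::Block is assumed to behave like classic vsync
    // (block presentation until vblank), whereas DisplaySync::Recent, used in
    // the test below, does not throttle the render loop.
    let _config = gpu::SurfaceConfig {
        size: gpu::Extent {
            width: 800,
            height: 600,
            depth: 1,
        },
        usage: gpu::TextureUsage::TARGET,
        display_sync: gpu::DisplaySync::Block,
        ..Default::default()
    };
}
```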
30.mp4
180.mp4

I don't have the slightest clue, but covering the window with another window seems to make the usage rise. These two tests were done in release mode; in debug mode it reports a constant 100% while the window is covered by another window. I'm not sure whether this is remotely close to the correct way to test it, but I just added a thread sleep in the `AboutToWait` arm of the event loop. The code doesn't actually draw anything on the screen; it just acquires a frame and presents it. I'm not sure whether this reported usage is reasonable or not.

Winit + blade testing code:

```rust
use std::time::{Duration, Instant};
use blade_graphics::{self as gpu, CommandEncoderDesc, Extent};
static mut COUNT: i32 = 0;
static mut EPOCH: Option<Instant> = None;
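// Crude redraw counter: prints the number of redraws handled per second
// (static mut is fine here since this test is single-threaded).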
fn count_frame() {
unsafe {
COUNT += 1;
if let Some(epoch) = EPOCH {
let elapsed = epoch.elapsed();
if elapsed >= Duration::from_secs(1) {
println!("{} redraw requests per second", COUNT);
COUNT = 0;
EPOCH = Some(Instant::now());
}
} else {
EPOCH = Some(Instant::now());
}
}
}
fn main() {
let event_loop = winit::event_loop::EventLoop::new().unwrap();
let window_attributes =
winit::window::Window::default_attributes().with_title("blade-usage-issue");
let window = event_loop.create_window(window_attributes).unwrap();
let context = unsafe {
gpu::Context::init(gpu::ContextDesc {
presentation: true,
validation: false,
timing: false,
capture: false,
overlay: true,
device_id: 0,
})
.unwrap()
};
let mut surface = context
.create_surface_configured(
&window,
gpu::SurfaceConfig {
size: gpu::Extent {
width: 1,
height: 1,
depth: 1,
},
usage: gpu::TextureUsage::TARGET,
display_sync: gpu::DisplaySync::Recent,
..Default::default()
},
)
.unwrap();
let mut command_encoder = context.create_command_encoder(CommandEncoderDesc {
name: "main",
buffer_count: 2,
});
event_loop
.run(|event, target| {
target.set_control_flow(winit::event_loop::ControlFlow::Poll);
match event {
winit::event::Event::AboutToWait => {
window.request_redraw();
// Delay before requesting a new frame
std::thread::sleep(Duration::from_millis(32));
}
winit::event::Event::WindowEvent { event, .. } => match event {
winit::event::WindowEvent::Resized(size) => {
let config = gpu::SurfaceConfig {
size: Extent {
height: size.height,
width: size.width,
depth: 1,
},
usage: gpu::TextureUsage::TARGET,
display_sync: gpu::DisplaySync::Recent,
..Default::default()
};
context.reconfigure_surface(&mut surface, config);
}
winit::event::WindowEvent::CloseRequested => {
target.exit();
}
winit::event::WindowEvent::RedrawRequested => {
count_frame();
let frame = surface.acquire_frame();
command_encoder.start();
command_encoder.present(frame);
context.submit(&mut command_encoder);
}
_ => {}
},
_ => {}
}
})
.unwrap();
}
```
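As an aside, if the goal is just to pace the redraws for the test, the thread sleep could be replaced with winit's `ControlFlow::WaitUntil`, so the event loop schedules the next wake-up itself instead of blocking the thread. A minimal sketch, assuming the winit 0.30-style API that the test above appears to use:

```rust
use std::time::{Duration, Instant};
use winit::event_loop::{ActiveEventLoop, ControlFlow};

// Sketch only: instead of ControlFlow::Poll plus std::thread::sleep, ask winit
// to wake the event loop again after the desired frame interval. This would be
// called from the AboutToWait arm right after window.request_redraw().
fn schedule_next_redraw(target: &ActiveEventLoop, frame_interval: Duration) {
    target.set_control_flow(ControlFlow::WaitUntil(Instant::now() + frame_interval));
}
```

Whether that changes the reported GPU usage is a separate question, but it avoids busy-polling between frames.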
Task Manager seems to misreport the GPU usage of a GPUI app using the blade Vulkan backend, showing 80-100% GPU 3D usage. I also see this happen with the bunnymark example, but not with the particle example.
This occurs on my machine when the Nvidia 3D setting "Vulkan/OpenGL present method" is set to the default, "Auto":
![Image](https://private-user-images.githubusercontent.com/76515905/405786985-f52ee6c4-1406-49ef-8498-eda6dd1af9e4.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3Mzk2ODM3NTAsIm5iZiI6MTczOTY4MzQ1MCwicGF0aCI6Ii83NjUxNTkwNS80MDU3ODY5ODUtZjUyZWU2YzQtMTQwNi00OWVmLTg0OTgtZWRhNmRkMWFmOWU0LnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNTAyMTYlMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjUwMjE2VDA1MjQxMFomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPWEwMjI0M2Q5ZjE0M2EyMmM3OWJlOGIzMmZjZjgzMGZkNTllMTk1OWNiMzgxNWMzNDk4ZWNhOTM5YmMzYzlkNjQmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0In0.3asHKdVG_FrJ-B_jD-f6nKJoSk3MW7xauCqFel256_g)
If I set "Vulkan/OpenGL present method" to "Prefer layered on DXGI Swapchain" instead, the GPUI application is reported at just 3% GPU 3D usage in Task Manager, which seems much more reasonable:
![Image](https://private-user-images.githubusercontent.com/76515905/405787421-b25b3a1d-7343-4f61-bb94-8ef2c2123249.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3Mzk2ODM3NTAsIm5iZiI6MTczOTY4MzQ1MCwicGF0aCI6Ii83NjUxNTkwNS80MDU3ODc0MjEtYjI1YjNhMWQtNzM0My00ZjYxLWJiOTQtOGVmMmMyMTIzMjQ5LnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNTAyMTYlMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjUwMjE2VDA1MjQxMFomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPTYzYWRkZWY4NGFhMTc2N2QwYjcyODI2MzI2ODkyMTM4YTIxNjk4YTYxYjI0Yjk1MGJlMTJlNTdjMWNkZTdkNTUmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0In0.DG8xEIwkyI8p6LGn0Qo0mrxmSC4XT_lYj1dY8xZQyt8)
I am not sure why a DirectX swapchain would make Task Manager correctly report the GPU usage.