API design for shader chaining #316
Continuing from a previous post... This is not quite a discussion of a new API, just experience sharing. In order to build an NTSC emulation, my first step was to be able to put a shader between the pixel buffer rendering and the scaling renderer. What I did was simply copy/paste the `ScalingRenderer` code and modify it so that it doesn't use `SurfaceSize`, which is private (a really minor modification). Then I wired things roughly like this:
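(What follows is a minimal sketch of that wiring, assuming a recent pixels version where the `render_with()` closure returns a `Result`; `ntsc_renderer` and `my_scaling_renderer` are my own renderers, not types provided by the crate.)

```rust
// Sketch only: `ntsc_renderer` samples the pixel buffer texture and writes
// into an intermediate texture it owns; `my_scaling_renderer` is the copied
// ScalingRenderer with that intermediate texture bound as its input.
pixels.render_with(|encoder, render_target, _context| {
    // Pass 1: pixel buffer texture -> intermediate texture (NTSC shader).
    ntsc_renderer.render(encoder);

    // Pass 2: intermediate texture -> window surface, scaled.
    my_scaling_renderer.render(encoder, render_target);

    Ok(())
})?;
```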
The hardest part was understanding WGSL a bit (I've never written a single line of WGSL before, but a bit of GLSL). As I said earlier, this is a very simple architecture. If I had had a simple example and a reusable `ScalingRenderer` (i.e. without the `SurfaceSize` input) then I would have been up and running pretty fast. My point is: maybe I don't need a full-scale API just for that; a simple example could be enough. But maybe I'll add some post-processing, which means either I'll modify the `ScalingRenderer` shader or I'll need to chain another renderer behind it. Dunno. But (there's a but) doing things like this seems to be hard on pixels: it's no longer rendering at 60fps... It's strange because I leverage...
Now I have a prototype that mostly works. I'm still trying to figure out how HiDPI influences scaling. I think the interesting point for @parasyte here is that I have been able to reuse the `ScalingRenderer` with a very small change, in order to avoid using the private `SurfaceSize`.
Either of those changes seems appropriate if we are going to keep the existing design. It's also a low-friction change and could be done as part of the next major release. I suspect it would also need a method to update its texture view and bindings, like the two custom shader examples allow. It would avoid recreating the entire pipeline when handling resizes.
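A minimal sketch of what such a method could look like, assuming a user-side renderer that keeps its bind group layout and sampler around from pipeline creation; the names (`MyRenderer`, `update_input`) are placeholders, not the crate's current API:

```rust
// Hypothetical user-side renderer; field and method names are placeholders.
struct MyRenderer {
    bind_group_layout: wgpu::BindGroupLayout,
    sampler: wgpu::Sampler,
    bind_group: wgpu::BindGroup,
    // ... pipeline, uniforms, etc.
}

impl MyRenderer {
    /// Rebind the input texture view without recreating the render pipeline.
    fn update_input(&mut self, device: &wgpu::Device, texture_view: &wgpu::TextureView) {
        self.bind_group = device.create_bind_group(&wgpu::BindGroupDescriptor {
            label: Some("my_renderer_bind_group"),
            layout: &self.bind_group_layout,
            entries: &[
                wgpu::BindGroupEntry {
                    binding: 0,
                    resource: wgpu::BindingResource::TextureView(texture_view),
                },
                wgpu::BindGroupEntry {
                    binding: 1,
                    resource: wgpu::BindingResource::Sampler(&self.sampler),
                },
            ],
        });
    }
}
```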
Hmmm... Hold on. Although I have some kind of prototype running, it doesn't work well on HiDPI screens: there are discrepancies between the window size and the buffer size, which end up looking very ugly or simply not working. I didn't see that before because the PC I'm used to has no HiDPI screen. First investigations show that there is some deep flaw in my understanding of Pixels's pipeline. Could you confirm that the rendering goes like this: the pixel buffer is uploaded to a backing texture, and the scaling renderer then samples that texture and draws it, scaled, onto the surface texture?
Again, what I'm doing is rendering the pixel buffer texture through my own shader into an intermediate texture, and then letting the (copied) scaling renderer scale that texture to the surface. So I come in before the scaling renderer. Unfortunately, the more I think about it, the more I think I'm leaving the scenario Pixels has been designed for... But it would be too bad to leave it, because there is already a ton of code in it that's really good...
Your understanding of the pipeline sounds correct. High DPI displays don't really do anything special. For a display like the one on my MacBook Pro, the logical pixel size is 2x larger than the physical pixel size, i.e. the scaling ratio is 2.0. It is exactly identical to the situation where the window's inner area is 2x larger than the pixel buffer and the logical and physical pixel sizes are the same (scaling ratio = 1.0). This is from a renderer perspective; the logical pixel size plays a role for UX purposes like mouse coordinates and making the window "normal sized" instead of way too small on a high DPI display. So even if you put your shader before the scaling renderer, you should be getting the same kind of result on the high DPI display as if your window was scaled up by the same ratio on a normal DPI display. Remember that the "backing texture" (the texture the pixel buffer is uploaded to) is always the pixel buffer size.

All that is to say, the texture your shader samples will always be the pixel buffer size, and the texture it renders to should be whatever size you want to output, without taking high DPI or scaling into account. It's the scaling renderer's job to scale the output, not your shader. Another way to think about it is that your shader output size should use logical pixels. Or alternatively, don't use the scaling renderer at all, and do the scaling directly in your shader. Then you do have to take physical pixel size with high DPI into account.
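To make the size relationships concrete, here is a small illustrative snippet using winit (the function is only for the example):

```rust
use winit::window::Window;

// Illustration of the relationship described above. On a 2x HiDPI display the
// scale factor is 2.0, so e.g. a 640x360 logical window covers 1280x720
// physical pixels; the pixel buffer size is independent of both.
fn print_sizes(window: &Window) {
    let scale_factor = window.scale_factor(); // e.g. 2.0 on a MacBook Pro
    let physical = window.inner_size(); // surface size in physical pixels
    let logical: winit::dpi::LogicalSize<f64> = physical.to_logical(scale_factor);
    println!("scale = {scale_factor}, physical = {physical:?}, logical = {logical:?}");
}
```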
Thanks for your great explanation. I've double-checked my code and fixed it. I have not yet implemented the resize operation, but it shouldn't be any trouble. So I can confirm that making `ScalingRenderer` fully accessible, by not requiring the private `SurfaceSize`, works. Regarding the API proposal, I'm no expert, but building separate render passes may be the easiest. Moreover, as far as I can see with my own stuff (a single data point!), we actually don't need full-blown chaining. Being able to inject a shader before/after pixels' stuff is already quite a lot. And once you've done that, as a pixels user you have already faced 99% of the complexity, which is setting up wgpu (that was, for me, by far the trickiest part).
In the very early days of this crate, we had a trait and a heterogeneous list of shaders. The idea was to allow shaders to be chained together to produce a complete set of render passes. See: https://docs.rs/pixels/0.0.4/pixels/trait.RenderPass.html
The `RenderPass` trait was replaced in v0.1 with the `Pixels::render_with()` method in #95 and #96. This API simplified the design substantially, at the cost of making certain things more difficult, and some things impossible. This issue tracks the reintroduction of this feature to chain shaders programmatically.
Rationale
Shaders that need to operate in pixel buffer space (as opposed to screen space) cannot be implemented easily today, because the default renderer is hardcoded to accept the pixel buffer texture view as its only input. (See #285.) To build these kinds of shaders, one must reimplement the scaling renderer on the user side and ignore the renderer passed to `render_with()` via `PixelsContext`.

Chaining shaders should be a first-class experience. The examples in pixels/examples/custom-shader/src/main.rs (lines 55 to 65 in 0a85025) and pixels/examples/fill-window/src/main.rs (lines 56 to 63 in 00f774a) illustrate the current approach.
A more unified API would treat these renderers as "of the same class" where the interface itself offers a seamless way to chain each renderer together.
This existing API also forces users to synchronize the inverse scaling matrix to handle mouse coordinates with multiple passes. The scissor rect also needs to be set correctly, etc. See #262.
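As a rough illustration (not the crate's API; the function and parameter names are placeholders), this is the kind of state a user-side pass currently has to keep in sync by hand:

```rust
// Sketch of a user-side post-processing pass. `clip_rect` is assumed to be
// (x, y, width, height), computed the same way the scaling renderer computes
// its own clipping rectangle.
fn post_process<'a>(
    render_pass: &mut wgpu::RenderPass<'a>,
    pipeline: &'a wgpu::RenderPipeline,
    bind_group: &'a wgpu::BindGroup,
    clip_rect: (u32, u32, u32, u32),
) {
    render_pass.set_pipeline(pipeline);
    render_pass.set_bind_group(0, bind_group, &[]);
    // Constrain output to the same region the scaling renderer draws into.
    render_pass.set_scissor_rect(clip_rect.0, clip_rect.1, clip_rect.2, clip_rect.3);
    render_pass.draw(0..3, 0..1); // full-screen triangle
}
```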
API Sketch
This is very incomplete pseudocode, but I want to put my thoughts down now. (And maybe it will help someone make progress here if they feel inclined.)
This API will allow a `Renderer` to consume another `Renderer` for chaining purposes. In other words, the `next` arg becomes a child of `self` after chaining. The method returns an existing child if it needs to be replaced. The `render` method takes a mutable `RenderPass`, which each renderer in the chain can recursively render to.
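A minimal sketch of what that could look like, assuming a boxed trait object for the child renderer and today's `wgpu::RenderPass`; the names are placeholders rather than a settled design:

```rust
// Rough sketch, not a settled API: each renderer may own a child renderer
// that it recursively renders after itself.
pub trait Renderer {
    /// Consume `next` and make it a child of `self`.
    /// Returns the existing child, if one needs to be replaced.
    fn chain(&mut self, next: Box<dyn Renderer>) -> Option<Box<dyn Renderer>>;

    /// Record draw commands into the render pass, then recurse into the child.
    fn render<'a>(&'a self, render_pass: &mut wgpu::RenderPass<'a>);
}
```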
Unresolved Questions
- How does chaining "connect" the render target of one renderer to the input texture view on the next? The `RenderPass` only has one color attachment, decided by the caller (which is pixels itself, and it only uses the `Surface` texture view as the color attachment).
  - One option is to keep the "`&mut wgpu::Encoder` and `&wgpu::TextureView` as the render target" pattern that we have today. This requires renderers to create a new `RenderPass` for themselves, and chaining would be performed by each `Renderer` with a method like `fn update_input(&mut self, texture_view: wgpu::TextureView)`, e.g. called by `resize()`.
- Is there anything else we can learn from other users of `wgpu`?
  - `wgpu` middleware: https://github.com/gfx-rs/wgpu/wiki/Encapsulating-Graphics-Work
  - `glyph_brush` doesn't use the middleware pattern as defined in the link above. Instead, its `draw_queued()` method resembles our `ScalingRenderer::render()` method as it is today.
  - `iced`.