How to make the pixel buffer larger than the window #319

Open
wiz21b opened this issue Dec 6, 2022 · 7 comments
Labels
question Usability question

Comments

@wiz21b commented Dec 6, 2022

Hello,

I've been using pixels for a while now, and I'm stuck on an edge case. Basically, I'd like to put a bit more information in the pixel buffer than what will actually be drawn. This ends up with my "buffer" being 1 pixel wider than the actual window's width. The documentation for SurfaceTexture says: "It is recommended (but not required) that the width and height are equivalent to the physical dimensions of the surface." But it's not 100% clear to me what happens behind the scenes. Of course, I have tried to make a surface texture wider than the actual window size, but this produces a white picture (I was expecting a distorted, maybe zoomed, picture). I thought that maybe the scaling matrix was at fault, especially the constraint on the scale factor, so I set it to one. But that didn't succeed either...

If you could just tell me where I should start looking, that would be very kind. I've also looked at other places in the code and saw there is a "render texture" and a "backing texture", but I fail to understand the difference.

Thank you for your time...

parasyte added the question Usability question label Dec 6, 2022
@parasyte (Owner) commented Dec 6, 2022

What actually happens when your surface texture does not match the window size depends on the backend. The Vulkan backend (the default for Windows and Linux) refuses to do anything, so you get the default background color decided by the window manager. The DirectX 12 backend on Windows will stretch the image to fit the window, Metal on macOS does something similar to DirectX 12, and so on. Basically, doing this is "unspecified behavior". The wgpu docs say it "Must be the same size as the surface".

I do have to question why you want to do this, because it sounds like an XY problem. Whatever extra information you are including in the texture arguably does not need to be there, because by definition it is unused when you hide it from being displayed. The only thing I can think of is that you might be trying to code-golf away a few lines of code for copying a small region from a larger texture. If that is the case, we have better ways...


For terminology used in the code and documentation:

The "render texture" is the name given to the render target for the ScalingRenderer. It is documented here: https://docs.rs/pixels/latest/pixels/struct.PixelsBuilder.html#method.render_texture_format. The render target is a normal texture, but GPUs have very specific format requirements for the texture that is ultimately copied into the frame buffer for the video signal. The render texture is "post-scaled" and includes any border to fill out the surface area.

On the other hand, the "backing texture" is the name given to the texture that you physically upload pixel color values to. This texture can have any format you want, and its size is exactly what you specify. It is a "texture" in the GPU sense and is distinct from the raw pixel data. The byte data that you upload has a 1:1 mapping onto the backing texture. In other words, this is the "pre-scaled" texture without a border.
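To make that 1:1 mapping concrete, a quick sketch (the accessor is named frame_mut in recent pixels releases; older versions call it get_frame, so treat the exact name as version-dependent):

// Sketch: each 4-byte chunk is one RGBA texel of the backing texture, row-major.
let frame = pixels.frame_mut();
for texel in frame.chunks_exact_mut(4) {
    texel.copy_from_slice(&[0x20, 0xa0, 0xff, 0xff]); // fill with a single color
}
pixels.render()?; // scales onto the render texture and presents it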

Finally, the "surface texture" is not a real GPU texture. The documentation calls it a "logical texture" for this reason. It is just a conceptual wrapper around the render texture. It allows us to maintain a reference to the window and its size in a generic way. We cannot query the window size because the only thing we know about it is that it is a system-defined window pointer that implements HasRawWindowHandle.

@parasyte (Owner) commented Dec 6, 2022

This should give you a much clearer sense of what the two textures look like. I captured these with RenderDoc on Windows 11.

This is the "backing texture" for the minimal-winit example. It's exactly what you expect your pixel data to look like.

[image: backing-texture]

And this one is the "render texture". Both textures were captured on the same frame, and the window was resized slightly bigger than the default (to give it a border). This texture is "what you actually see" in the window.

[image: render-texture]

@wiz21b (Author) commented Dec 7, 2022

> The only thing I can think of is that you might be trying to code-golf away a few lines of code for copying a small region from a larger texture. If that is the case, we have better ways...

Yeah, I told you it was an "edge case", so yes, code golf. I wanted to reduce the number of texture uploads :-) To explain it fully: my main texture is a composite PAL/NTSC signal. As you know, there's a notion of "color burst" which tells how to decode the composite signal, that is, with or without colour. I wanted to put the colour burst on/off flag in my texture, on its right edge, "a bit like" what is done with a TV. If I don't do that, then I have to put that information in another texture, which means more code and more GPU transfers... I wanted to avoid that :-) So yes, definitely code golf (but not sub-par, I admit).

@wiz21b (Author) commented Dec 7, 2022

For the backing/render texture... if it's to get a border, why didn't you just use a regular texture border? That would have avoided the additional texture. But sure, you know what you're doing better than I do :-)

@parasyte (Owner) commented Dec 7, 2022

I use the pix crate with pixels in a game I am working on to handle compositing operations. It was a little work to set up initially, but it allows copying small regions from any offset of the source to any offset in the destination, as shown in the documentation:

*------------+      *-------------+
|            |      |    *------+ |
| *------+   |      |    |      | |
| |      |   |      |    | from | |
| |  to  |   | <--- |    +------+ |
| +------+   |      |             |
|            |      |     src     |
|    self    |      +-------------+
+------------+

This kind of compositing is almost certainly what you want to do, instead of trying to clip the surface texture. I don't know about other crates, but with pix you can do the copy from a larger image (your PAL/NTSC simulated video) to a smaller image (the pixel buffer managed by pixels) without any unnecessary copies or texture uploads.
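A minimal sketch of that copy (the sizes are hypothetical, and the exact paths and signatures of Raster, Region, and copy_raster should be checked against the pix docs):

use pix::rgb::Rgba8;
use pix::{Raster, Region};

// Hypothetical sizes: a source frame slightly wider than the visible area.
let src: Raster<Rgba8> = Raster::with_clear(656, 480); // video + extra right-edge columns
let mut dst: Raster<Rgba8> = Raster::with_clear(640, 480); // what pixels displays

// Copy only the visible 640x480 region; the extra columns never leave the CPU side.
dst.copy_raster(Region::new(0, 0, 640, 480), &src, Region::new(0, 0, 640, 480));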

> For the backing/render texture... if it's to get a border, why didn't you just use a regular texture border? That would have avoided the additional texture. But sure, you know what you're doing better than I do :-)

What is a "regular texture border"? Regardless, you always need at least two textures. The source texture that you upload to, and the destination render target that the shader writes to. The render target has to be window-sized (for reasons described above). Scaling and adding a border are just two things that are done for convenient/nice handling of the size difference between the two textures.

@wiz21b (Author) commented Dec 7, 2022

Hmmm... I forgot to mention that my conversion from the "pixels" buffer (composite signal) to the displayed RGB signal is done with a WGSL shader. Hence my willingness to keep the pipeline simple by not multiplying GPU textures. In my case, the "color burst" signal is information I use to change the behavior of the shader (which also goes in the direction of having everything crammed into a single texture, although that's not 100% needed).

For the regular texture border, I was referring to the fact that one can tell a shader to use a given border color when it accesses texels whose coordinates fall outside the texture (that's part of the choice to wrap the texture, clamp it, mirror it, etc.).
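In wgpu terms that would look something like the sketch below; note that ClampToBorder is an optional feature (Features::ADDRESS_MODE_CLAMP_TO_BORDER) that is not enabled by default, so this is illustrative rather than something pixels does:

// Sketch: a sampler that returns a fixed border color for out-of-range coordinates.
let sampler = device.create_sampler(&wgpu::SamplerDescriptor {
    address_mode_u: wgpu::AddressMode::ClampToBorder,
    address_mode_v: wgpu::AddressMode::ClampToBorder,
    address_mode_w: wgpu::AddressMode::ClampToBorder,
    border_color: Some(wgpu::SamplerBorderColor::OpaqueBlack),
    ..Default::default()
});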

@parasyte (Owner) commented Dec 7, 2022

That is an important detail, yeah. I was under the impression that the "analog video signal" was constructed on the CPU side. But then the good news is that you should be able to do the clipping in your shader, right?

> For the regular texture border, I was referring to the fact that one can tell a shader to use a given border color when it accesses texels whose coordinates fall outside the texture (that's part of the choice to wrap the texture, clamp it, mirror it, etc.).

I follow. You can see that the pipeline clamps the texture coordinates on the edges:

pixels/src/renderers.rs, lines 34 to 36 at bc8235f:

address_mode_u: wgpu::AddressMode::ClampToEdge,
address_mode_v: wgpu::AddressMode::ClampToEdge,
address_mode_w: wgpu::AddressMode::ClampToEdge,

This is the reason the border is composed of the clear color, instead of, say, wrapping the texture.
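Since the border is just the clear color, it can be overridden when building; a short sketch (assuming the PixelsBuilder::clear_color method, with an arbitrary color):

// Sketch: override the clear color, which is also what fills the border.
let pixels = PixelsBuilder::new(320, 240, surface_texture)
    .clear_color(pixels::wgpu::Color { r: 0.1, g: 0.1, b: 0.1, a: 1.0 })
    .build()?;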
