Hi! This crate is for real-time pixel plotting: most prominently for writing emulators for old computers and game systems, and for games that want to simulate the style of old games with low-resolution graphics (typically 2D and low color depth). It has also been used for various tools of all sorts. But that sets the stage for the primary target: low-resolution pixel plotting and animation.

The use case you describe is more closely related to 2D vector graphics. You are correct that triangles are the most common GPU primitive, but there is some subtlety! GPUs also have a line primitive (e.g., for wireframes) and a point primitive (which can be used for particle effects). See: https://docs.rs/wgpu/latest/wgpu/enum.PolygonMode.html

Triangles are rasterized when the vertex list is processed in "fill" mode; otherwise it is rasterized as a contiguous line or as individual points (with arbitrary radius). While you can draw lines and points with a shader, it is not going to be as pretty as a real vector graphics library. For that, you have a few options.
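To make the distinction concrete, here is a minimal, illustrative sketch (assuming the `wgpu` crate; this is not code from this crate) of how a render pipeline's `PrimitiveState` selects between filled triangles, wireframe lines, and points. Note that `PolygonMode::Line` and `PolygonMode::Point` are optional device features.

```rust
// Sketch only: shaders, layouts, and color targets for the pipeline are omitted.
fn primitive_state(mode: wgpu::PolygonMode) -> wgpu::PrimitiveState {
    wgpu::PrimitiveState {
        // The vertex list is still assembled as triangles...
        topology: wgpu::PrimitiveTopology::TriangleList,
        // ...but `polygon_mode` decides how each triangle is rasterized:
        // Fill = solid triangles, Line = wireframe edges, Point = vertices only.
        // (Line and Point need the POLYGON_MODE_LINE / POLYGON_MODE_POINT features.)
        polygon_mode: mode,
        ..Default::default()
    }
}
```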
Directly answering some of your questions:
- It will be ridiculously slow. 😢 #180 explains some relevant comparisons between CPU and GPU rasterization.
- That's the texture scaling, and it's one of the key features, along with maintaining integer scaling and the original aspect ratio. CPU rasterization with large textures will always be terribly slow, and there is nothing you can do to make it as fast as a GPU. Alignment and sharpness are the same thing, and it's paradoxically both easy to spot and easy to miss when wrong. (A small setup sketch follows this list.)
- The hardware acceleration was already answered above. There is absolutely no tessellation in this crate. That's a technique completely unrelated to putting a texture on a display surface.
- Only as much as the rasterization is optimized for SIMD architectures. But again,
- Not at the moment. See #170 for discussion.
- Both! Since
- It's more complicated than that. You use
- Yes. The
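For reference, here is roughly what the small-frame-scaled-to-the-window setup looks like. This is only a sketch (the winit event loop is omitted, and exact method names vary a bit between `pixels` versions): a fixed-size RGBA frame is drawn on the CPU, uploaded as a texture, and scaled to the window by the GPU while preserving the aspect ratio.

```rust
use pixels::{Error, Pixels, SurfaceTexture};
use winit::window::Window;

// Logical resolution of the CPU-side frame; the GPU scales it to the window.
const WIDTH: u32 = 320;
const HEIGHT: u32 = 240;

// Sketch: `window` is an existing winit window; event loop and resizing are omitted.
fn draw_once(window: &Window) -> Result<(), Error> {
    let size = window.inner_size();
    // The surface texture covers the whole window; `pixels` scales the
    // WIDTH x HEIGHT frame onto it.
    let surface_texture = SurfaceTexture::new(size.width, size.height, window);
    let mut pixels = Pixels::new(WIDTH, HEIGHT, surface_texture)?;

    // The frame is a plain RGBA byte buffer, 4 bytes per pixel, written on the CPU.
    for rgba in pixels.frame_mut().chunks_exact_mut(4) {
        rgba.copy_from_slice(&[0x10, 0x10, 0x20, 0xff]);
    }

    // Upload the frame as a texture and render it scaled to the window.
    pixels.render()
}
```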
Hi! I hope this will not end up irrelevant.
Motivation
I'm interested in building a stylus-whiteboard toy app that renders strokes based on the sampled points, with their pressure, position, stylus angle, etc. I hope to make better use of these properties to simulate various kinds of pens and brushes and to record them effectively in a vectorized way (as opposed to rasterized brushes in painting apps or simple strokes in whiteboard apps). I once assumed that I could take these sampled points as vertices and pass them to the GPU, letting the fragment shaders determine their boundaries, but this doesn't seem possible since GPUs like triangles, or so I've heard (see discussion).
I realized that what I really want is to plot pixel by pixel over the entire screen (for every frame), given a selection of strokes with their sampled points. I just assumed that doing it in parallel on the GPU would be faster, but maybe doing it on the CPU is necessary and would be more versatile (correct me if I'm wrong).
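To make that concrete, the kind of per-pixel plotting I have in mind is roughly the sketch below. It is only my own illustration: one filled disc stamped per sample, with a made-up pressure-to-radius mapping, written into an RGBA frame buffer. It ignores zoom, interpolation between samples, and stylus angle.

```rust
/// One stylus sample: position in pixels plus normalized pressure (0.0..=1.0).
struct Sample {
    x: f32,
    y: f32,
    pressure: f32,
}

/// Stamp a filled disc for every sample into an RGBA frame buffer
/// (`frame.len() == width * height * 4`). A real brush would also blend
/// and interpolate between samples.
fn stamp_stroke(frame: &mut [u8], width: usize, height: usize, samples: &[Sample]) {
    for s in samples {
        let radius = 1.0 + 4.0 * s.pressure; // arbitrary pressure-to-size mapping
        let r2 = radius * radius;
        let (min_x, max_x) = ((s.x - radius).floor() as i64, (s.x + radius).ceil() as i64);
        let (min_y, max_y) = ((s.y - radius).floor() as i64, (s.y + radius).ceil() as i64);
        for py in min_y.max(0)..=max_y.min(height as i64 - 1) {
            for px in min_x.max(0)..=max_x.min(width as i64 - 1) {
                let (dx, dy) = (px as f32 - s.x, py as f32 - s.y);
                if dx * dx + dy * dy <= r2 {
                    // Opaque black ink; index into the row-major RGBA buffer.
                    let i = (py as usize * width + px as usize) * 4;
                    frame[i..i + 4].copy_from_slice(&[0x00, 0x00, 0x00, 0xff]);
                }
            }
        }
    }
}
```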
I've read the introduction and some of the issues here. It seems that this crate is the right tool? Currently, I don't have much of a sense of what the performance bottleneck would be. I guess I can just try it out, but I'd appreciate it if someone pointed out early on whether I'm off track.
Scenario model, with questions
Say the screen I'm looking at right now has ~6 million pixels, there are 2000 characters displayed, and I can scroll and zoom pretty smoothly while the rendering stays sharp. From what I've learned, these glyphs are eventually turned into little triangles to be rendered on the GPU.
- What if I use `pixels` and draw on each of these 6 million pixels, according to the neighboring glyph/stroke, ignoring the tessellated triangles (provided that I have a method to tell which glyphs or strokes are relevant)? Would the performance be on par with the GPU? I can argue that I save the resources spent on making triangles and use fewer points to record a glyph, if I do it smartly.
- With `pixels` you can have pixels larger than a screen pixel, but I do want to render to screen pixels most of the time. Will it be slow if I render at screen resolution/size? Can I align them easily? Will they be sharp?
- `pixels` is hardware accelerated? Does that mean the GPU is used eventually? But is `pixels` still doing tessellation into triangles? Or does it use some other GPU feature that draws pixels efficiently that I don't know about? Compared to CPU rendering, what makes it faster in the end?
- With `pixels`, am I to write the pixel rendering functions in Rust or WGSL? Am I storing the sampled points of strokes in memory or in a GPU buffer? Or both? `SurfaceTarget`?
- When `pixels` renders the pixels to the canvas, can I also render other things on top/below it using wgpu?

Thank you!