What is a bit verbose is the configuration of the **render pipeline**, and the creation of **shaders**. Luckily, we already introduced a lot of key concepts in the chapter [*Our first shader*](../getting-started/our-first-shader.md); the main new element here is the render pipeline.
Render Pipeline
---------------
Both the **vertex fetch** and **vertex shader** stages are configured through the **vertex state**:

```
{{Describe vertex shader}}
```
The render pipeline first **fetches vertex attributes** from some buffers that live in GPU memory. These *attributes* usually include at least a **vertex position**, and might include additional per-vertex information like **color**, **normal**, **texture coordinate**, etc.
**In this first example**, we hard-code the positions of the 3 vertices of the triangle in the shader, so we do not even need a position buffer.
Usually we set the **cull mode** to `Front` to avoid wasting resources in rendering.
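
In the pipeline descriptor, the primitive state might look like the following sketch. The `pipelineDesc` variable and the exact values are assumptions consistent with the rest of this chapter, using the webgpu.hpp names:

```C++
// A sketch of the primitive state of the render pipeline descriptor.
pipelineDesc.primitive.topology = wgpu::PrimitiveTopology::TriangleList;
pipelineDesc.primitive.stripIndexFormat = wgpu::IndexFormat::Undefined; // not a strip topology
pipelineDesc.primitive.frontFace = wgpu::FrontFace::CCW; // which winding order counts as the front face
pipelineDesc.primitive.cullMode = wgpu::CullMode::Front; // the cull mode discussed above
```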
### Fragment shader
Once a primitive has been sliced into many little fragments by the rasterizer, the **fragment shader** stage is invoked for each one of them. This shader receives the interpolated values generated by the vertex shader, and must in turn output the **final color** of the fragment.
```{note}
Keep in mind that all these stages are happening in a **very parallel** and **asynchronous** environment. When rendering a large mesh, the fragment shader for the first primitives may be invoked before the last primitives have been rasterized.
```

Note that the fragment stage is **optional**, so `pipelineDesc.fragment` is a (potentially null) pointer.
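
Wiring this stage up looks roughly like the following. This is only a sketch: it assumes webgpu.hpp, a `shaderModule` created in the Shaders section below, an entry point named `fs_main`, and a `colorTarget` configured together with the blend state further down:

```C++
wgpu::FragmentState fragmentState;
fragmentState.module = shaderModule;    // the shader module holding fs_main
fragmentState.entryPoint = "fs_main";   // assumed entry point name
fragmentState.constantCount = 0;
fragmentState.constants = nullptr;
fragmentState.targetCount = 1;
fragmentState.targets = &colorTarget;   // color target state (see the blend state below)
pipelineDesc.fragment = &fragmentState; // optional: may be left as nullptr
```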
### Stencil/Depth state
The **depth test** is used to discard fragments that are **behind** other fragments associated with the *same pixel*. Remember that a fragment is the projection of a given primitive on a given pixel, so **when primitives overlap each other**, multiple fragments are emitted for the same pixel. Fragments have **depth** information, which is used by the depth test.
The **stencil test** is another fragment discarding mechanism, used to hide fragments based on previously rendered primitives. Let's **ignore** the depth and stencil mechanisms **for now**; we will introduce them in the [Depth buffer](3d-meshes/depth-buffer.md) chapter.
The **usual blending** equation is configured as $rgb = a_s \times rgb_s + (1 - a_s) \times rgb_d$. This corresponds to **the intuition of layering** the rendered fragments over the existing pixel's value.
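
Expressed with webgpu.hpp, this usual configuration might look as follows. This is a sketch; `blendState` is an assumed local variable that would then be referenced by the color target state:

```C++
wgpu::BlendState blendState;
// rgb = a_s * rgb_s + (1 - a_s) * rgb_d
blendState.color.srcFactor = wgpu::BlendFactor::SrcAlpha;
blendState.color.dstFactor = wgpu::BlendFactor::OneMinusSrcAlpha;
blendState.color.operation = wgpu::BlendOperation::Add;
// a = a_s * 1 + a_d * 0, i.e., simply keep the source alpha
blendState.alpha.srcFactor = wgpu::BlendFactor::One;
blendState.alpha.dstFactor = wgpu::BlendFactor::Zero;
blendState.alpha.operation = wgpu::BlendOperation::Add;
```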
Okay, we finally **configured all the stages** of the render pipeline. All that remains now is to specify the behavior of the two **programmable stages**, namely to give a **vertex** and a **fragment shader**.
Shaders
-------
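
This section builds the WGSL shader module. As a rough reminder of the overall shape of shader module creation, here is a sketch assuming webgpu.hpp, the pre-2024 `ShaderModuleWGSLDescriptor` chained struct, and a `shaderSource` string holding the WGSL code (both names are assumptions):

```C++
wgpu::ShaderModuleDescriptor shaderDesc;
// The WGSL source is provided through a chained extension struct:
wgpu::ShaderModuleWGSLDescriptor shaderCodeDesc;
shaderCodeDesc.chain.next = nullptr;
shaderCodeDesc.chain.sType = wgpu::SType::ShaderModuleWGSLDescriptor;
shaderCodeDesc.code = shaderSource; // null-terminated WGSL source
shaderDesc.nextInChain = &shaderCodeDesc.chain;
wgpu::ShaderModule shaderModule = device.createShaderModule(shaderDesc);
```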
When using Dawn, you may see **different colors** (more saturated), because the default surface format differs.
Conclusion
----------
This chapter introduced the **core skeleton** for rendering triangle-based shapes on the GPU. For now these are 2D graphics, but once everything is in place, switching to 3D will be straightforward. We have seen two very important concepts:
- The **render pipeline**, which is based on the way the hardware actually works, where some parts are fixed for the sake of efficiency, and other parts are programmable.
- The **shaders**, which are the GPU-side programs driving the programmable stages of the pipeline.
### What's next?
The key algorithms and techniques of computer graphics used for 3D rendering are, for a large part, implemented in the shader code. What we still miss at this point, though, are ways to **communicate** between the C++ code (CPU) and the shaders (GPU).
The next two chapters focus on two ways to **feed input** to this render pipeline: **vertex** attributes, where there is one value per vertex, and **uniforms**, which define variables that are common to all vertices and fragments for a given call.
We then take a break from the pipeline and switch to **3D meshes**, which is in the end less about code and more about math. We also introduce a bit of **interaction** with a basic **camera controller**. We then introduce a third way to provide input resources, namely **textures**, and how to map them onto meshes.
Storage textures, which are used the other way around, to get data out of the render pipeline, will be presented only in advanced chapters. Instead, the last chapter of this section is fully dedicated to the computer graphics matters of **lighting** and **material modeling**.
Vertices can contain more than just a position attribute. A typical example is to **add a color attribute** to each vertex. This will also show us how the rasterizer automatically interpolates vertex attributes across triangles.
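
On the C++ side, such an extra attribute is declared in the vertex buffer layout. Here is a minimal sketch; the location, format, and offset are assumptions matching the `VertexInput` struct shown below:

```C++
wgpu::VertexAttribute colorAttrib;
colorAttrib.shaderLocation = 1;         // maps to @location(1) in the shader
colorAttrib.format = wgpu::VertexFormat::Float32x3; // one vec3f color per vertex
colorAttrib.offset = 2 * sizeof(float); // interleaved right after a 2D position
```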
Shader
------
```rust
struct VertexInput {
	@location(0) position: vec2f,
	@location(1) color: vec3f,
};
```
Our vertex shader thus only receives a single argument, whose type is `VertexInput`:
```{lit} rust, Vertex shader (also for tangle root "Vanilla")
// ... (vertex shader body, consuming the VertexInput struct)
```
Before feeding vertex data to the render pipeline, we need to get familiar with the notion of a **buffer**. A buffer is "just" a **chunk of memory** allocated in the **VRAM** (the GPU's memory). Think of it as some kind of `new` or `malloc` for the GPU.
In this chapter, we will see how to **create** (i.e., allocate), **write** from the CPU, **copy** from GPU to GPU, and **read back** to the CPU.
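
To give an idea of what this looks like before we dive in, here is a minimal sketch of buffer creation with webgpu.hpp; the label, size, and usage flags are illustrative placeholders:

```C++
wgpu::BufferDescriptor bufferDesc;
bufferDesc.label = "Some GPU-side data buffer"; // shows up in error messages
bufferDesc.usage = wgpu::BufferUsage::CopyDst | wgpu::BufferUsage::CopySrc;
bufferDesc.size = 16;                // size in bytes
bufferDesc.mappedAtCreation = false; // we will write to it through the queue
wgpu::Buffer buffer1 = device.createBuffer(bufferDesc);
```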
```{note}
Note that textures are a special kind of memory (because of the way we usually sample them), so they live in a different kind of object.
```
Since this is just an experiment, I suggest we temporarily write the whole code of this chapter at the end of the `Initialize()` function. The overall outline of our code is as follows:
```{lit} C++, Playing with buffers (insert in {{Initialize}} after "InitializePipeline()", also for tangle root "Vanilla")
// Experimentation for the "Playing with buffers" chapter
{{Create a first buffer}}
{{Create a second buffer}}
```

And don't forget that commands sent through the **command encoder** are only submitted to the queue once we finish the encoder and submit the resulting command buffer.
Copying a buffer
----------------
We can now submit a **buffer-buffer copy** operation to the command queue. This is not directly available from the queue object, but rather requires us to **create a command encoder**. Once we have an encoder, we may simply add the following:
````{tab} With webgpu.hpp
```{lit} C++, Copy buffer to buffer
// Sketch: copy 16 bytes from the first buffer into the second one
// (buffer names and size are assumptions from the previous steps).
encoder.copyBufferToBuffer(buffer1, 0, buffer2, 0, 16);
```
````

Reading from a buffer
---------------------
The **command queue**, which we used to send data (`writeBuffer`) and instructions (`copyBufferToBuffer`), **only goes one way**: from the CPU host to the GPU device. It is thus a "fire and forget" queue: functions do not return a value since they **run on a different timeline**.
So, how do we read data back then? We use an **asynchronous operation**, like we did when using `wgpuQueueOnSubmittedWorkDone` in the [Command Queue](../../getting-started/the-command-queue.md) chapter. Instead of directly getting a value back, we set up a **callback** that gets invoked whenever the requested data is ready. We then **poll the device** to check for incoming events.
**To read data from a buffer**, we use `buffer.mapAsync` (or `wgpuBufferMapAsync`). This operation **maps** the GPU buffer into CPU memory, and then, whenever it is ready, executes the callback function it was provided. Once we are done, we can **unmap** the buffer.
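
The call itself looks something like the following sketch; `buffer2`, the 16-byte size, and the message are assumptions based on the surrounding text:

```C++
// Captureless lambda: it can convert to the plain function pointer
// that the C API expects as a callback.
auto onBuffer2Mapped = [](WGPUBufferMapAsyncStatus status, void* /* pUserData */) {
	std::cout << "Buffer 2 mapped with status " << status << std::endl;
};
wgpuBufferMapAsync(buffer2, WGPUMapMode_Read, 0, 16, onBuffer2Mapped, nullptr /* pUserData */);
```

We then keep polling the device in a loop until the data is ready: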
```C++
while (!ready) {
	// ... (poll the device so that pending asynchronous operations make progress)
}
```
You could now see `Buffer 2 mapped with status 1` when running your program (1 being the value of `BufferMapAsyncStatus::Success` when using Dawn; it is 0 for WGPU). **However**, we never change the `ready` variable to `true`! So the program then **hangs forever**... not great. That is why the next section shows how to pass some context to the callback.
### Mapping context
So, we need the callback to **access and mutate** the `ready` variable. But how?

```{note}
When defining `onBuffer2Mapped` as a regular function, it is clear that `ready` is not accessible. When using a lambda expression like we did above, one could be tempted to add `ready` to the **capture list** (the brackets before the function arguments). But this **does not work**, because a capturing lambda has a **different type** that cannot be used as a regular callback. We see below that the C++ wrapper fixes this limitation.
```
The **user pointer** is an argument that is provided to `wgpuBufferMapAsync` when setting up the callback, and that is then fed **as is** to the callback `onBuffer2Mapped` when the map operation is ready. The buffer only forwards this pointer but never uses it: **only you** (the user of the API) interpret it.
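
Put together, the pattern might look like this. It is a hypothetical sketch: the `Context` struct and the exact arguments are illustrative, not the chapter's final code:

```C++
// Any state the callback needs, bundled behind a single pointer:
struct Context {
	bool ready;
};

auto onBuffer2Mapped = [](WGPUBufferMapAsyncStatus status, void* pUserData) {
	Context* context = reinterpret_cast<Context*>(pUserData);
	context->ready = true; // now the main loop can observe the change
	std::cout << "Buffer 2 mapped with status " << status << std::endl;
};

Context context = { /* ready = */ false };
wgpuBufferMapAsync(buffer2, WGPUMapMode_Read, 0, 16, onBuffer2Mapped, (void*)&context);

while (!context.ready) {
	// ... (poll the device as before)
}
```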
The Device <span class="bullet">🟢</span>
==========================================
A WebGPU **device** represents a **context** of use of the API. All the objects that we create (geometry, textures, etc.) are owned by the device.
The device is requested from an **adapter** by specifying the **subset of limits and features** that we are interested in. Once the device is created, the adapter should no longer be used. **The only capabilities that matter** to the application are the ones of the device.
We will come back here and refine these options whenever we need more capabilities from the device.
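
As a reminder of the shape of this call, here is a minimal sketch with webgpu.hpp; the labels and the defaulted fields are illustrative, and field names can vary slightly between webgpu.h versions:

```C++
wgpu::DeviceDescriptor deviceDesc;
deviceDesc.label = "My Device";      // used in error messages (see the note below)
deviceDesc.requiredLimits = nullptr; // no specific limit requirement for now
deviceDesc.defaultQueue.label = "The default queue";
wgpu::Device device = adapter.requestDevice(deviceDesc);
```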
```{note}
The `label` is **used in error messages** to help you debug where something went wrong, so it is good practice to use it as soon as you get multiple objects of the same type. Currently, this is only used by Dawn.
```