
Commit eb73cb9

Merge branch 'main' of github.com:eliemichel/LearnWebGPU

2 parents: ca8df9a + 3ea9d77

8 files changed: +37 −37 lines


basic-3d-rendering/hello-triangle.md

Lines changed: 11 additions & 11 deletions
@@ -40,7 +40,7 @@ wgpuRenderPassEncoderDraw(renderPass, 3, 1, 0, 0);
 ```
 ````
 
-What is a bit verbose is the configuration of the **render pipeline**, and the creation of **shaders**. Luckily, we already introduced a lot of key concepts in chapter [*Our first shader*](../getting-started/our-first-shader.md), the main new element here is the render pipeline
+What is a bit verbose is the configuration of the **render pipeline**, and the creation of **shaders**.
 
 Render Pipeline
 ---------------
@@ -115,7 +115,7 @@ Both the **vertex fetch** and **vertex shader** stages are configured through th
 {{Describe vertex shader}}
 ```
 
-The render pipeline first **fetches vertex attributes** from some buffers that lives in GPU memory. These *attributes* include usually at least a **vertex position**, and might include additional per-vertex information like **color**, **normal**, **texture coordinate**, etc.
+The render pipeline first **fetches vertex attributes** from some buffers that live in GPU memory. These *attributes* include usually at least a **vertex position**, and might include additional per-vertex information like **color**, **normal**, **texture coordinate**, etc.
 
 **In this first example**, we hard-code the position of the 3 vertices of the triangles in shaders so we do not even need a position buffer.
 
@@ -199,7 +199,7 @@ Usually we set the **cull mode** to `Front` to avoid wasting resources in render
 
 ### Fragment shader
 
-Once a primitive have been sliced into many little fragments by the rasterizer, the **fragment shader** stage is invoked for each one of them. This shader receives the interpolated values generated by the vertex shader, and must output on its turn the **final color** of the fragment.
+Once a primitive has been sliced into many little fragments by the rasterizer, the **fragment shader** stage is invoked for each one of them. This shader receives the interpolated values generated by the vertex shader, and must output on its turn the **final color** of the fragment.
 
 ```{note}
 Keep in mind that all these stages are happening in a **very parallel** and **asynchronous** environment. When rendering a large mesh, the fragment shader for the first primitives may be invoked before the last primitives have been rasterized.
@@ -239,7 +239,7 @@ Note that the fragment stage is **optional**, so `pipelineDesc.fragment` is a (p
 
 ### Stencil/Depth state
 
-The **depth test** is used to discard fragments that are **behind** other fragments associated to the *same pixel*. Remember that a fragment is the projection of a given primitive on a given pixel, so **when primitives overlap each others**, multiple fragments are emitted for the same pixel. Fragments have a **depth** information, which is used by the depth test.
+The **depth test** is used to discard fragments that are **behind** other fragments associated to the *same pixel*. Remember that a fragment is the projection of a given primitive on a given pixel, so **when primitives overlap each other**, multiple fragments are emitted for the same pixel. Fragments have **depth** information, which is used by the depth test.
 
 The **stencil test** is another fragment discarding mechanism, used to hide fragments based on previously rendered primitives. Let's **ignore** the depth and stencil mechanism **for now**, we will introduce them in the [Depth buffer](3d-meshes/depth-buffer.md) chapter.
@@ -296,7 +296,7 @@ $$
 rgb = \texttt{srcFactor} \times rgb_s ~~[\texttt{operation}]~~ \texttt{dstFactor} \times rgb_d
 $$
 
-The **usual blending** equation is configured as $rgb = a_s \times rgb_s + (1 - a_s) \times rgb_d$. This corresponds to **the intuition of layering** the rendered fragments over the existing pixels value:
+The **usual blending** equation is configured as $rgb = a_s \times rgb_s + (1 - a_s) \times rgb_d$. This corresponds to **the intuition of layering** the rendered fragments over the existing pixel's value:
 
 ````{tab} With webgpu.hpp
 ```{lit} C++, Configure color blending equation
@@ -353,7 +353,7 @@ pipelineDesc.multisample.mask = ~0u;
 pipelineDesc.multisample.alphaToCoverageEnabled = false;
 ```
 
-Okay, we finally **configured all the stages** of the render pipeline. All that remains now is to specify the behavior of the two **programmable stages**, namely give a **vertex** and a **fragment shaders**.
+Okay, we finally **configured all the stages** of the render pipeline. All that remains now is to specify the behavior of the two **programmable stages**, namely a **vertex** and a **fragment shader**.
 
 Shaders
 -------
@@ -687,20 +687,20 @@ When using Dawn, you may see **different colors** (more saturated), because the
 Conclusion
 ----------
 
-This chapter introduced the **core skeleton** for rendering triangle-based shapes on the GPU. For now these are 2D graphics, but once everything will be in place, switching to 3D will be straightforward. We have seen two very important concepts:
+This chapter introduced the **core skeleton** for rendering triangle-based shapes on the GPU. For now these are 2D graphics, but once everything is in place, switching to 3D will be straightforward. We have seen two very important concepts:
 
-- The **render pipeline**, which is based on the way the hardware actually works, with some parts fixed, for the sake of efficiency, and some parts are programmable.
+- The **render pipeline**, which is based on the way the hardware actually works, where some parts are fixed, for the sake of efficiency, and other parts are programmable.
 - The **shaders**, which are the GPU-side programs driving the programmable stages of the pipeline.
 
 ### What's next?
 
 The key algorithms and techniques of computer graphics used for 3D rendering are for a large part implemented in the shaders code. What we still miss at this point though is ways to **communicate** between the C++ code (CPU) and the shaders (GPU).
 
-The next two chapters focus on two ways to **feed input** to this render pipeline: **vertex** attributes, where there is one value per vertex, and **uniforms**, which define variable that are common to all vertices and fragments for a given call.
+The next two chapters focus on two ways to **feed input** to this render pipeline: **vertex** attributes, where there is one value per vertex, and **uniforms**, which define variables that are common to all vertices and fragments for a given call.
 
-We then take a break away from pipeline things with the switch to **3D meshes**, which is in the end less about code and more about math. We also introduce a bit of **interaction** with a basic **camera controller**. We then introduce a 3rd way to provide input resource, namely **textures**, and how to map them onto meshes.
+We then take a break from the pipeline and switch to **3D meshes**, which is in the end less about code and more about math. We also introduce a bit of **interaction** with a basic **camera controller**. We then introduce a 3rd way to provide input resources, namely **textures**, and how to map them onto meshes.
 
-Storage textures, which are used the other way around, to get data out of the render pipeline, will be presented only in advanced chapters. Instead, the last chapter of this section is fully dedicated to the computer graphics matter of **lighting** and **material modeling**.
+Storage textures, which are used the other way around, to get data out of the render pipeline, will be presented only in advanced chapters. Instead, the last chapter of this section is fully dedicated to the computer graphics matters of **lighting** and **material modeling**.
 
 ````{tab} With webgpu.hpp
 *Resulting code:* [`step030`](https://github.com/eliemichel/LearnWebGPU-Code/tree/step030)

basic-3d-rendering/input-geometry/a-first-vertex-attribute.md

Lines changed: 2 additions & 2 deletions
@@ -291,7 +291,7 @@ There are two limits that may cause issue even if set to `WGPU_LIMIT_U32_UNDEFIN
 
 ```{lit} C++, Other device limits (also for tangle root "Vanilla")
 // These two limits are different because they are "minimum" limits,
-// they are the only ones we are may forward from the adapter's supported
+// they are the only ones we may forward from the adapter's supported
 // limits.
 requiredLimits.limits.minUniformBufferOffsetAlignment = supportedLimits.limits.minUniformBufferOffsetAlignment;
 requiredLimits.limits.minStorageBufferOffsetAlignment = supportedLimits.limits.minStorageBufferOffsetAlignment;
@@ -384,7 +384,7 @@ private: // Application attributes
 ````
 
 ```{lit} C++, Private methods (append, also for tangle root "Vanilla")
-private: // APplication methods
+private: // Application methods
 	void InitializeBuffers();
 ```
 

basic-3d-rendering/input-geometry/multiple-attributes.md

Lines changed: 2 additions & 2 deletions
@@ -20,7 +20,7 @@ Multiple Attributes <span class="bullet">🟢</span>
 *Resulting code:* [`step033-vanilla`](https://github.com/eliemichel/LearnWebGPU-Code/tree/step033-vanilla)
 ````
 
-Vertices can contain more than just a position attribute. A typical example is to **add a color attribute** to each vertex. This will also show us how the rasterizer automatically interpolate vertex attributes across triangles.
+Vertices can contain more than just a position attribute. A typical example is to **add a color attribute** to each vertex. This will also show us how the rasterizer automatically interpolates vertex attributes across triangles.
 
 Shader
 ------
@@ -50,7 +50,7 @@ struct VertexInput {
 };
 ```
 
-Our vertex shader thus only receive one single argument, whose type is `VertexInput`:
+Our vertex shader thus only receives one single argument, whose type is `VertexInput`:
 
 ```{lit} rust, Vertex shader (also for tangle root "Vanilla")
 fn vs_main(in: VertexInput) -> /* ... */ {

basic-3d-rendering/input-geometry/playing-with-buffers.md

Lines changed: 8 additions & 8 deletions
@@ -20,18 +20,18 @@ Playing with buffers <span class="bullet">🟢</span>
 *Resulting code:* [`step031-vanilla`](https://github.com/eliemichel/LearnWebGPU-Code/tree/step031-vanilla)
 ````
 
-Before feeding vertex data to the render pipeline, we need to get familiar with the notion of **buffer**. A buffer is "just" a **chunk of memory** allocated in the **VRAM** (the GPU's memory). Think of it as some kind of `new` or `malloc` for the GPU.
+Before feeding vertex data to the render pipeline, we need to get familiar with the notion of a **buffer**. A buffer is "just" a **chunk of memory** allocated in the **VRAM** (the GPU's memory). Think of it as some kind of `new` or `malloc` for the GPU.
 
-In this chapter, we see how to **create** (i.e., allocate), **write** from CPU, **copy** from GPU to GPU and **read back** to CPU.
+In this chapter, we will see how to **create** (i.e., allocate), **write** from CPU, **copy** from GPU to GPU and **read back** to CPU.
 
 ```{note}
-Note that textures are a special kind of memory (because of the way we usually sample them) that they live in a different kind of object.
+Note that textures are a special kind of memory (because of the way we usually sample them) so they live in a different kind of object.
 ```
 
 Since this is just an experiment, I suggest we temporarily write the whole code of this chapter at the end of the `Initialize()` function. The overall outline of our code is as follows:
 
 ```{lit} C++, Playing with buffers (insert in {{Initialize}} after "InitializePipeline()", also for tangle root "Vanilla")
-// Experimentation for the "Playing with buffer" chapter
+// Experimentation for the "Playing with buffers" chapter
 {{Create a first buffer}}
 {{Create a second buffer}}
@@ -177,7 +177,7 @@ And don't forget that commands sent through the **command encoder** are only sub
 Copying a buffer
 ----------------
 
-We can now submit a **buffer-buffer copy** operation to the command queue. This is not directly available from the queue object but rather requires to **create a command encoder**. We may use the same one as the render pass for our test and simply add the following:
+We can now submit a **buffer-buffer copy** operation to the command queue. This is not directly available from the queue object but rather requires us to **create a command encoder**. Once we have an encoder we may simply add the following:
 
 ````{tab} With webgpu.hpp
 ```{lit} C++, Copy buffer to buffer
@@ -228,7 +228,7 @@ Reading from a buffer
 
 The **command queue**, that we used to send data (`writeBuffer`) and instructions (`copyBufferToBuffer`), **only goes in one way**: from CPU host to GPU device. It is thus a "fire and forget" queue: functions do not return a value since they **run on a different timeline**.
 
-So, how do we read data back then? We use an **asynchronous operation**, like we did when using `wgpuQueueOnSubmittedWorkDone` in the [Command Queue](../../getting-started/the-command-queue.md) chapter. Instead of directly get a value back, we set up a **callback** that gets invoked whenever the requested data is ready. We then **poll the device** to check for incoming events.
+So, how do we read data back then? We use an **asynchronous operation**, like we did when using `wgpuQueueOnSubmittedWorkDone` in the [Command Queue](../../getting-started/the-command-queue.md) chapter. Instead of directly getting a value back, we set up a **callback** that gets invoked whenever the requested data is ready. We then **poll the device** to check for incoming events.
 
 **To read data from a buffer**, we use `buffer.mapAsync` (or `wgpuBufferMapAsync`). This operation **maps** the GPU buffer into CPU memory, and then whenever it is ready it executes the callback function it was provided. Once we are done, we can **unmap** the buffer.
 
@@ -353,7 +353,7 @@ while (!ready) {
 }
 ```
 
-You could now see `Buffer 2 mapped with status 1` (1 being the value of `BufferMapAsyncStatus::Success`) when running your program. **However**, we never change the `ready` variable to `true`! So the program then **hangs forever**... not great. That is why the next section shows how to pass some context to the callback.
+You could now see `Buffer 2 mapped with status 1` (1 being the value of `BufferMapAsyncStatus::Success` when using Dawn, it is 0 for WGPU) when running your program. **However**, we never change the `ready` variable to `true`! So the program then **hangs forever**... not great. That is why the next section shows how to pass some context to the callback.
 
 ### Mapping context
@@ -363,7 +363,7 @@ So, we need the callback to **access and mutate** the `ready` variable. But how
 When defining `onBuffer2Mapped` as a regular function, it is clear that `ready` is not accessible. When using a lambda expression like we did above, one could be tempted to add `ready` in the **capture list** (the brackets before function arguments). But this **does not work** because a capturing lambda has a **different type**, that cannot be used as a regular callback. We see below that the C++ wrapper fixes this limitation.
 ```
 
-The **user pointer** is an argument that is provided to `wgpuBufferMapAsync`, when setting up the callback, and that is then fed **as is** to the callback `onBuffer2Mapped` when the map operation is ready. The buffer only forwards this pointer but never uses it: **only you** (the user of the API) interprets it.
+The **user pointer** is an argument that is provided to `wgpuBufferMapAsync`, when setting up the callback, and that is then fed **as is** to the callback `onBuffer2Mapped` when the map operation is ready. The buffer only forwards this pointer but never uses it: **only you** (the user of the API) interpret it.
 
 ````{tab} With webgpu.hpp
 ```C++

basic-3d-rendering/shader-uniforms/a-first-uniform.md

Lines changed: 1 addition & 1 deletion
@@ -587,7 +587,7 @@ We place this for instance at the beginning of `Application::MainLoop()`:
 
 <figure class="align-center">
 <video autoplay loop muted inline nocontrols style="width:100%;height:auto;max-width:642px">
-	<source src="../../_static/turning-webgpu-logo.mp4" type="video/mp4">
+	<source src="../../../_static/turning-webgpu-logo.mp4" type="video/mp4">
 </video>
 <figcaption>
 <p><span class="caption-text">Our first dynamic scene!</span></p>

getting-started/adapter-and-device/the-device.md

Lines changed: 2 additions & 2 deletions
@@ -10,7 +10,7 @@ The Device <span class="bullet">🟢</span>
 
 A WebGPU **device** represents a **context** of use of the API. All the objects that we create (geometry, textures, etc.) are owned by the device.
 
-The device is requested from an **adapter** by specifying the **subset of limits and features** that we are interesed in. Once the device is created, the adapter should no longer be used. **The only capabilities that matter** to the application are the one of the device.
+The device is requested from an **adapter** by specifying the **subset of limits and features** that we are interesed in. Once the device is created, the adapter should no longer be used. **The only capabilities that matter** to the application are the ones of the device.
 
 Device request
 --------------
@@ -221,7 +221,7 @@ deviceDesc.deviceLostCallback = nullptr;
 We will come back here and refine these options whenever we will need some more capabilities from the device.
 
 ```{note}
-The `label` is **used in error message** to help you debug where something went wrong, so it is good practice to use it as soon as you get multiple objects of the same type. Currently, this is only used by Dawn.
+The `label` is **used in error messages** to help you debug where something went wrong, so it is good practice to use it as soon as you get multiple objects of the same type. Currently, this is only used by Dawn.
 ```
 
 Inspecting the device
