@@ -47,12 +47,12 @@ In the same folder as `main.rs`, create a file `shader.wgsl`. Write the followin
// Vertex shader
struct VertexOutput {
-[[builtin(position)]] clip_position: vec4<f32>;
+@builtin(position) clip_position: vec4<f32>,
};
-[[stage(vertex)]]
+@vertex
fn vs_main(
-[[builtin(vertex_index)]] in_vertex_index: u32,
+@builtin(vertex_index) in_vertex_index: u32,
) -> VertexOutput {
var out: VertexOutput;
let x = f32(1 - i32(in_vertex_index)) * 0.5;
@@ -62,7 +62,7 @@ fn vs_main(
}
```
-First, we declare `struct` to store the output of our vertex shader. This consists of only one field currently which is our vertex's `clip_position`. The `[[builtin(position)]]` bit tells WGPU that this is the value we want to use as the vertex's [clip coordinates](https://en.wikipedia.org/wiki/Clip_coordinates). This is analogous to GLSL's `gl_Position` variable.
+First, we declare a `struct` to store the output of our vertex shader. It currently consists of only one field, our vertex's `clip_position`. The `@builtin(position)` bit tells WGPU that this is the value we want to use as the vertex's [clip coordinates](https://en.wikipedia.org/wiki/Clip_coordinates). This is analogous to GLSL's `gl_Position` variable.
<div class="note">
@@ -70,7 +70,7 @@ Vector types such as `vec4` are generic. Currently, you must specify the type of
</div>
-The next part of the shader code is the `vs_main` function. We are using `[[stage(vertex)]]` to mark this function as a valid entry point for a vertex shader. We expect a `u32` called `in_vertex_index` which gets its value from `[[builtin(vertex_index)]]`.
+The next part of the shader code is the `vs_main` function. We use `@vertex` to mark this function as a valid entry point for a vertex shader. We expect a `u32` called `in_vertex_index`, which gets its value from `@builtin(vertex_index)`.
We then declare a variable called `out` using our `VertexOutput` struct. We create two other variables for the `x` and `y` of a triangle.
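The hunk above cuts off before the end of the function. For reference, here is a sketch of the complete entry point in the new syntax, assuming the tutorial's origin-centered triangle:

```wgsl
@vertex
fn vs_main(
    @builtin(vertex_index) in_vertex_index: u32,
) -> VertexOutput {
    var out: VertexOutput;
    // x and y trace out a triangle from the vertex index alone
    let x = f32(1 - i32(in_vertex_index)) * 0.5;
    let y = f32(i32(in_vertex_index & 1u) * 2 - 1) * 0.5;
    out.clip_position = vec4<f32>(x, y, 0.0, 1.0);
    return out;
}
```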
@@ -93,10 +93,10 @@ Now we can save our `clip_position` to `out`. We then just return `out` and we'r
We technically didn't need a struct for this example, and could have just done something like the following:
```wgsl
-[[stage(vertex)]]
+@vertex
fn vs_main(
-[[builtin(vertex_index)]] in_vertex_index: u32
-) -> [[builtin(position)]] vec4<f32> {
+@builtin(vertex_index) in_vertex_index: u32
+) -> @builtin(position) vec4<f32> {
// Vertex shader code...
}
```
@@ -110,8 +110,8 @@ Next up, the fragment shader. Still in `shader.wgsl` add the following:
@@ -124,7 +124,7 @@ Notice that the entry point for the vertex shader was named `vs_main` and that t
</div>
-The `[[location(0)]]` bit tells WGPU to store the `vec4` value returned by this function in the first color target. We'll get into what this is later.
+The `@location(0)` bit tells WGPU to store the `vec4` value returned by this function in the first color target. We'll get into what this is later.
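The fragment-shader listing itself falls outside this excerpt. As a minimal sketch in the new syntax (the return color is an assumption, matching the brown triangle mentioned later):

```wgsl
// Fragment shader

@fragment
fn fs_main(in: VertexOutput) -> @location(0) vec4<f32> {
    return vec4<f32>(0.3, 0.2, 0.1, 1.0);
}
```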
## How do we use the shaders?
This is the part where we finally make the thing in the title: the pipeline. First, let's modify `State` to include the following.
@@ -145,7 +145,7 @@ struct State {
Now let's move to the `new()` method, and start making the pipeline. We'll have to load in those shaders we made earlier, as the `render_pipeline` requires those.
```rust
-let shader = device.create_shader_module(&wgpu::ShaderModuleDescriptor {
+let shader = device.create_shader_module(wgpu::ShaderModuleDescriptor {
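Only the reference changed; `create_shader_module` now takes the descriptor by value. For context, a typical construction looks something like this (a sketch, assuming the `shader.wgsl` file from earlier):

```rust
let shader = device.create_shader_module(wgpu::ShaderModuleDescriptor {
    label: Some("Shader"),
    // embed shader.wgsl in the binary at compile time
    source: wgpu::ShaderSource::Wgsl(include_str!("shader.wgsl").into()),
});
```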
@@ -197,7 +197,7 @@ let render_pipeline = device.create_render_pipeline(&wgpu::RenderPipelineDescrip
```
Several things to note here:
-1. Here you can specify which function inside the shader should be the `entry_point`. These are the functions we marked with `[[stage(vertex)]]` and `[[stage(fragment)]]`
+1. Here you can specify which function inside the shader should be the `entry_point`. These are the functions we marked with `@vertex` and `@fragment`.
2. The `buffers` field tells `wgpu` what type of vertices we want to pass to the vertex shader. We're specifying the vertices in the vertex shader itself, so we'll leave this empty. We'll put something there in the next tutorial.
3. The `fragment` is technically optional, so you have to wrap it in `Some()`. We need it if we want to store color data to the `surface`.
4. The `targets` field tells `wgpu` what color outputs it should set up. Currently, we only need one for the `surface`. We use the `surface`'s format so that copying to it is easy, and we specify that the blending should just replace old pixel data with new data. We also tell `wgpu` to write to all colors: red, blue, green, and alpha. *We'll talk more about* `color_state` *when we talk about textures.*
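As a rough sketch of how points 3 and 4 fit together in the descriptor (`shader` and `config` are the bindings the tutorial uses elsewhere; the field style follows the code shown in this diff):

```rust
fragment: Some(wgpu::FragmentState {
    module: &shader,
    entry_point: "fs_main",
    targets: &[wgpu::ColorTargetState {
        format: config.format,                  // match the surface for easy copying
        blend: Some(wgpu::BlendState::REPLACE), // replace old pixel data with new
        write_mask: wgpu::ColorWrites::ALL,     // write red, green, blue, and alpha
    }],
}),
```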
@@ -270,7 +270,7 @@ If you run your program now, it'll take a little longer to start, but it will st
let mut render_pass = encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
label: Some("Render Pass"),
color_attachments: &[
-// This is what [[location(0)]] in the fragment shader targets
+// This is what @location(0) in the fragment shader targets
wgpu::RenderPassColorAttachment {
view: &view,
resolve_target: None,
@@ -300,7 +300,7 @@ If you run your program now, it'll take a little longer to start, but it will st
We didn't change much, but let's talk about what we did change.
1. We renamed `_render_pass` to `render_pass` and made it mutable.
2. We set the pipeline on the `render_pass` using the one we just created.
-3. We tell `wgpu` to draw *something* with 3 vertices, and 1 instance. This is where `[[builtin(vertex_index)]]` comes from.
+3. We tell `wgpu` to draw *something* with 3 vertices and 1 instance. This is where `@builtin(vertex_index)` comes from.
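Put together, the three changes amount to something like this (a sketch; `render_pass_descriptor` stands in for the descriptor shown above):

```rust
let mut render_pass = encoder.begin_render_pass(&render_pass_descriptor);
render_pass.set_pipeline(&self.render_pipeline); // 2.
render_pass.draw(0..3, 0..1);                    // 3. three vertices, one instance
```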
With all that you should be seeing a lovely brown triangle.
2. `step_mode` tells the pipeline how often it should move to the next vertex. This seems redundant in our case, but we can specify `wgpu::VertexStepMode::Instance` if we only want to change vertices when we start drawing a new instance. We'll cover instancing in a later tutorial.
3. Vertex attributes describe the individual parts of the vertex. Generally, this is a 1:1 mapping with a struct's fields, which is true in our case.
4. This defines the `offset` in bytes until the attribute starts. For the first attribute, the offset is usually zero. For any later attributes, the offset is the sum over `size_of` of the previous attributes' data.
-5. This tells the shader what location to store this attribute at. For example `[[location(0)]] x: vec3<f32>` in the vertex shader would correspond to the `position` field of the `Vertex` struct, while `[[location(1)]] x: vec3<f32>` would be the `color` field.
+5. This tells the shader what location to store this attribute at. For example, `@location(0) x: vec3<f32>` in the vertex shader would correspond to the `position` field of the `Vertex` struct, while `@location(1) x: vec3<f32>` would be the `color` field.
6. `format` tells the shader the shape of the attribute. `Float32x3` corresponds to `vec3<f32>` in shader code. The max value we can store in an attribute is `Float32x4` (`Uint32x4`, and `Sint32x4` work as well). We'll keep this in mind for when we have to store things that are bigger than `Float32x4`.
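Putting items 1 through 6 together, the layout for a `Vertex` with `position` and `color` fields (both `[f32; 3]`) would look roughly like this:

```rust
wgpu::VertexBufferLayout {
    array_stride: std::mem::size_of::<Vertex>() as wgpu::BufferAddress,
    step_mode: wgpu::VertexStepMode::Vertex, // advance per vertex (2.)
    attributes: &[
        wgpu::VertexAttribute {
            offset: 0,                             // position starts at byte 0 (4.)
            shader_location: 0,                    // @location(0) in the shader (5.)
            format: wgpu::VertexFormat::Float32x3, // vec3<f32> (6.)
        },
        wgpu::VertexAttribute {
            offset: std::mem::size_of::<[f32; 3]>() as wgpu::BufferAddress,
            shader_location: 1,                    // @location(1)
            format: wgpu::VertexFormat::Float32x3,
        },
    ],
}
```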
For you visual learners, our vertex buffer looks like this.
@@ -280,16 +280,16 @@ Before our changes will have any effect, we need to update our vertex shader to
@@ -229,17 +229,17 @@ We need to reference the parts of our new matrix in `shader.wgsl` so that we can
```wgsl
struct InstanceInput {
-[[location(5)]] model_matrix_0: vec4<f32>;
-[[location(6)]] model_matrix_1: vec4<f32>;
-[[location(7)]] model_matrix_2: vec4<f32>;
-[[location(8)]] model_matrix_3: vec4<f32>;
+@location(5) model_matrix_0: vec4<f32>,
+@location(6) model_matrix_1: vec4<f32>,
+@location(7) model_matrix_2: vec4<f32>,
+@location(8) model_matrix_3: vec4<f32>,
};
```
We need to reassemble the matrix before we can use it.
```wgsl
-[[stage(vertex)]]
+@vertex
fn vs_main(
model: VertexInput,
instance: InstanceInput,
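) -> VertexOutput {
    // Sketch of the reassembly: the four vec4 inputs become the columns
    // of the model matrix (the hunk is truncated here in the original)
    let model_matrix = mat4x4<f32>(
        instance.model_matrix_0,
        instance.model_matrix_1,
        instance.model_matrix_2,
        instance.model_matrix_3,
    );
    // ...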
@@ -257,7 +257,7 @@ fn vs_main(
We'll apply the `model_matrix` before we apply `camera_uniform.view_proj`. We do this because the `camera_uniform.view_proj` changes the coordinate system from `world space` to `camera space`. Our `model_matrix` is a `world space` transformation, so we don't want to be in `camera space` when using it.
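In shader terms, that ordering is one line (a sketch using the names from the prose above):

```wgsl
out.clip_position = camera_uniform.view_proj * model_matrix * vec4<f32>(model.position, 1.0);
```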
-The shaders used in this example don't compile on WASM using version 0.12.0 of wgpu. I created an issue [here](https://github.com/gfx-rs/naga/issues/1739). The issue is fixed on the most recent version of naga, but that is using the updated WGSL syntax.
-Once 0.13 comes out I'll port the WGSL code to the new syntax and this example should be working.
-</div>
While we can tell that our scene is 3d because of our camera, it still feels very flat. That's because our model stays the same color regardless of how it's oriented. If we want to change that we need to add lighting to our scene.
In the real world, a light source emits photons that bounce around until they enter our eyes. The color we see is the light's original color minus whatever energy it lost while it was bouncing around.
@@ -145,7 +137,7 @@ fn create_render_pipeline(
vertex_layouts: &[wgpu::VertexBufferLayout],
shader: wgpu::ShaderModuleDescriptor,
) -> wgpu::RenderPipeline {
-let shader = device.create_shader_module(&shader);
+let shader = device.create_shader_module(shader);
-The shaders used in this example don't compile on WASM using version 0.12.0 of wgpu. I created an issue [here](https://github.com/gfx-rs/naga/issues/1739). The issue is fixed on the most recent version of naga, but that is using the updated WGSL syntax.
-Once 0.13 comes out I'll port the WGSL code to the new syntax and this example should be working.
-</div>
With just lighting, our scene is already looking pretty good. Still, our models look overly smooth. This is understandable because we are using a very simple model. If we were using a texture that was supposed to be smooth, this wouldn't be a problem, but our brick texture is supposed to be rougher. We could solve this by adding more geometry, but that would slow our scene down, and it would be hard to know where to add new polygons. This is where normal mapping comes in.
Remember in [the instancing tutorial](/beginner/tutorial7-instancing/#a-different-way-textures), we experimented with storing instance data in a texture? A normal map is doing just that with normal data! We'll use the normals in the normal map in our lighting calculation in addition to the vertex normal.
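In the fragment shader, pulling the normal out of the map is a single `textureSample` call (a sketch; the `t_normal`/`s_normal` binding names are assumptions following the tutorial's texture naming):

```wgsl
let object_normal: vec4<f32> = textureSample(t_normal, s_normal, in.tex_coords);
```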
@@ -106,17 +98,17 @@ Now we can use the texture in the fragment shader.
-The shaders used in this example don't compile on WASM using version 0.12.0 of wgpu. I created an issue [here](https://github.com/gfx-rs/naga/issues/1739). The issue is fixed on the most recent version of naga, but that is using the updated WGSL syntax.
-Once 0.13 comes out I'll port the WGSL code to the new syntax and this example should be working.
-</div>
I've been putting this off for a while. Implementing a camera isn't specifically related to using WGPU properly, but it's been bugging me so let's do it.
`lib.rs` is getting a little crowded, so let's create a `camera.rs` file to put our camera code. The first things we're going to put in it are some imports and our `OPENGL_TO_WGPU_MATRIX`.
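For reference, that matrix (as used throughout this series with `cgmath`) maps OpenGL's -1.0 to 1.0 z range onto wgpu's 0.0 to 1.0:

```rust
#[rustfmt::skip]
pub const OPENGL_TO_WGPU_MATRIX: cgmath::Matrix4<f32> = cgmath::Matrix4::new(
    1.0, 0.0, 0.0, 0.0,
    0.0, 1.0, 0.0, 0.0,
    0.0, 0.0, 0.5, 0.0,
    0.0, 0.0, 0.5, 1.0,
);
```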
Our shader will expect a `uniform` buffer that includes the size of the quad grid in `chunk_size`, the `chunk_corner` that our noise algorithm should start at, and the `min_max_height` of the terrain.
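On the Rust side, that uniform might be mirrored like this (a sketch; the struct name and field types are assumptions based on the description above):

```rust
#[repr(C)]
#[derive(Debug, Copy, Clone, bytemuck::Pod, bytemuck::Zeroable)]
struct ChunkData {
    chunk_size: [u32; 2],     // quads in the grid
    chunk_corner: [i32; 2],   // where the noise sampling starts
    min_max_height: [f32; 2], // vertical bounds of the terrain
}
```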
@@ -168,9 +168,9 @@ We'll cover a method called triplanar mapping to texture the terrain in a future
Now that we can get a vertex on the terrain's surface, we can fill our vertex and index buffers with actual data. We'll create a `gen_terrain()` function that will be the entry point for our compute shader:
The change log above contains most of the details about what has changed in WGPU and therefore the tutorial. I will make special mention of how to use `map_async()`, as that has changed. Previously, `map_async` returned a future that you had to await before you could access a buffer's contents. It now expects a `'static` callback that takes the `Result` of the mapping attempt as a parameter. This means that if we want to save a buffer's contents to an image, instead of doing the following:
```rust
{
let buffer_slice = output_buffer.slice(..);
let mapping = buffer_slice.map_async(wgpu::MapMode::Read);
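// ...we now pass a callback and signal completion ourselves. A sketch of one
// common pattern (the showcase code uses the futures-intrusive crate; any
// oneshot channel works):
let buffer_slice = output_buffer.slice(..);
let (tx, rx) = futures_intrusive::channel::shared::oneshot_channel();
buffer_slice.map_async(wgpu::MapMode::Read, move |result| {
    tx.send(result).unwrap();
});
device.poll(wgpu::Maintain::Wait); // drive the device so the callback can run
rx.receive().await.unwrap().unwrap();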
@@ -34,9 +34,9 @@ You can see the full table of the alignments in section [4.3.7.1 of the WGSL spe
```wgsl
struct Light {
-position: vec3<f32>;
-color: vec3<f32>;
-};
+position: vec3<f32>,
+color: vec3<f32>,
+}
```
So what's the alignment of this struct? Your first guess would be that it's the sum of the alignments of the individual fields. That might make sense if we were in Rust-land, but in shader-land, it's a little more involved. The alignment for a given struct is given by the following equation:
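The equation itself falls just outside this excerpt, but per the spec section linked above, a struct's alignment is the maximum of its members' alignments. Worked through for `Light` as a sketch (`vec3<f32>` has an alignment of 16 and a size of 12):

```wgsl
// AlignOf(Light)     = max(AlignOf(position), AlignOf(color)) = max(16, 16) = 16
// offset of position = 0
// offset of color    = roundUp(16, 0 + 12)  = 16
// SizeOf(Light)      = roundUp(16, 16 + 12) = 32
```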