@ -87,20 +87,20 @@ Now that we have our camera, and it can make us a view projection matrix, we nee
## The uniform buffer
Up to this point we've used `Buffer`s to store our vertex and index data, and even to load our textures. We are going to use them again to create what's known as a uniform buffer. A uniform is a blob of data that is available to every invocation of a set of shaders. We've technically already used uniforms for our texture and sampler. We're going to use them again to store our view projection matrix. To start let's create a struct to hold our `Uniforms`.
Up to this point we've used `Buffer`s to store our vertex and index data, and even to load our textures. We are going to use them again to create what's known as a uniform buffer. A uniform is a blob of data that is available to every invocation of a set of shaders. We've technically already used uniforms for our texture and sampler. We're going to use them again to store our view projection matrix. To start let's create a struct to hold our uniform.
```rust
// We need this for Rust to store our data correctly for the shaders
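// A sketch of the struct this section builds: the name `CameraUniform` comes
// from later in this chapter, and the derive list here is trimmed to the
// standard library (the tutorial additionally derives bytemuck::Pod and
// bytemuck::Zeroable so the struct can be cast to a byte slice for the buffer).
#[repr(C)]
#[derive(Debug, Copy, Clone)]
struct CameraUniform {
    // cgmath's Matrix4 can't be uploaded directly, so we store a plain
    // 4x4 array of f32, matching WGSL's mat4x4<f32>.
    view_proj: [[f32; 4]; 4],
}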
@ -136,7 +136,7 @@ let uniform_buffer = device.create_buffer_init(
Cool, now that we have a uniform buffer, what do we do with it? The answer is we create a bind group for it. First we have to create the bind group layout.
```rust
let uniform_bind_group_layout = device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
let camera_bind_group_layout = device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
entries: &[
wgpu::BindGroupLayoutEntry {
binding: 0,
@ -149,7 +149,7 @@ let uniform_bind_group_layout = device.create_bind_group_layout(&wgpu::BindGroup
count: None,
}
],
label: Some("uniform_bind_group_layout"),
label: Some("camera_bind_group_layout"),
});
```
@ -159,19 +159,19 @@ let uniform_bind_group_layout = device.create_bind_group_layout(&wgpu::BindGroup
Now we can create the actual bind group.
```rust
let uniform_bind_group = device.create_bind_group(&wgpu::BindGroupDescriptor {
layout: &uniform_bind_group_layout,
let camera_bind_group = device.create_bind_group(&wgpu::BindGroupDescriptor {
layout: &camera_bind_group_layout,
entries: &[
wgpu::BindGroupEntry {
binding: 0,
resource: uniform_buffer.as_entire_binding(),
resource: camera_buffer.as_entire_binding(),
}
],
label: Some("uniform_bind_group"),
label: Some("camera_bind_group"),
});
```
Like with our texture, we need to register our `uniform_bind_group_layout` with the render pipeline.
Like with our texture, we need to register our `camera_bind_group_layout` with the render pipeline.
```rust
let render_pipeline_layout = device.create_pipeline_layout(
@ -179,22 +179,22 @@ let render_pipeline_layout = device.create_pipeline_layout(
label: Some("Render Pipeline Layout"),
bind_group_layouts: &[
&texture_bind_group_layout,
&uniform_bind_group_layout,
&camera_bind_group_layout,
],
push_constant_ranges: &[],
}
);
```
Now we need to add `uniform_buffer` and `uniform_bind_group` to `State`
Now we need to add `camera_buffer` and `camera_bind_group` to `State`
1. According to the [WGSL Spec](https://gpuweb.github.io/gpuweb/wgsl/), the block decorator indicates this structure type represents the contents of a buffer resource occupying a single binding slot in the shader's resource interface. Any structure used as a `uniform` must be annotated with `[[block]]`.
2. Because we've created a new bind group, we need to specify which one we're using in the shader. The number is determined by our `render_pipeline_layout`. The `texture_bind_group_layout` is listed first, thus it's `group(0)`, and `uniform_bind_group` is second, so it's `group(1)`.
2. Because we've created a new bind group, we need to specify which one we're using in the shader. The number is determined by our `render_pipeline_layout`. The `texture_bind_group_layout` is listed first, thus it's `group(0)`, and `camera_bind_group` is second, so it's `group(1)`.
3. Multiplication order is important when it comes to matrices. The vector goes on the right, and the matrices go on the left in order of importance.
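To make item 3 concrete, here's a small self-contained sketch (the hand-rolled column-major helpers below stand in for cgmath, which the tutorial actually uses): in a product like `translate * scale * v`, the rightmost matrix is applied to the vector first, which is why the order of the factors matters.

```rust
type Mat4 = [[f32; 4]; 4]; // column-major, like cgmath and WGSL

// out[r] = sum over columns c of m[c][r] * v[c]
fn mul_mat_vec(m: Mat4, v: [f32; 4]) -> [f32; 4] {
    let mut out = [0.0; 4];
    for c in 0..4 {
        for r in 0..4 {
            out[r] += m[c][r] * v[c];
        }
    }
    out
}

// Column c of a*b is a applied to column c of b.
fn mul_mat_mat(a: Mat4, b: Mat4) -> Mat4 {
    let mut out = [[0.0; 4]; 4];
    for c in 0..4 {
        out[c] = mul_mat_vec(a, b[c]);
    }
    out
}

fn scaling(s: f32) -> Mat4 {
    let mut m = [[0.0; 4]; 4];
    for i in 0..3 { m[i][i] = s; }
    m[3][3] = 1.0;
    m
}

fn translation(x: f32, y: f32, z: f32) -> Mat4 {
    let mut m = scaling(1.0);
    m[3] = [x, y, z, 1.0];
    m
}

fn main() {
    let v = [1.0, 0.0, 0.0, 1.0];
    // Scale first, then translate: (translate * scale) * v
    let a = mul_mat_vec(mul_mat_mat(translation(1.0, 0.0, 0.0), scaling(2.0)), v);
    // Translate first, then scale: (scale * translate) * v
    let b = mul_mat_vec(mul_mat_mat(scaling(2.0), translation(1.0, 0.0, 0.0)), v);
    assert_eq!(a, [3.0, 0.0, 0.0, 1.0]);
    assert_eq!(b, [4.0, 0.0, 0.0, 1.0]);
}
```

The same logic explains why the shader computes `camera_uniform.view_proj * model_matrix * position`: the model transform, being rightmost, acts on the vertex first.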
Up to this point, the camera controller isn't actually doing anything. The values in our uniform buffer need to be updated. There are a few main methods to do that.
1. We can create a separate buffer and copy it's contents to our `uniform_buffer`. The new buffer is known as a staging buffer. This method is usually how it's done as it allows the contents of the main buffer (in this case `uniform_buffer`) to only be accessible by the gpu. The gpu can do some speed optimizations which it couldn't if we could access the buffer via the cpu.
1. We can create a separate buffer and copy its contents to our `camera_buffer`. The new buffer is known as a staging buffer. This method is usually how it's done, as it allows the contents of the main buffer (in this case `camera_buffer`) to be accessible only by the GPU. The GPU can make some speed optimizations that it couldn't if the buffer were also accessible by the CPU.
2. We can call one of the mapping methods, `map_read_async` and `map_write_async`, on the buffer itself. These allow us to access a buffer's contents directly, but require us to deal with the `async` aspect of these methods. This also requires our buffer to use `BufferUsage::MAP_READ` and/or `BufferUsage::MAP_WRITE`. We won't talk about them here, but you can check out the [Wgpu without a window](../../showcase/windowless) tutorial if you want to know more.
3. We can use `write_buffer` on `queue`.
@ -416,8 +416,8 @@ We're going to use option number 3.
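`write_buffer` takes its data as a byte slice. The tutorial gets those bytes with `bytemuck::cast_slice`; as a rough standard-library-only sketch of what that conversion amounts to for our matrix (a copying version, unlike bytemuck's zero-copy cast):

```rust
// Flatten a 4x4 f32 matrix into the raw bytes that write_buffer expects.
// bytemuck::cast_slice does this without allocating or copying.
fn matrix_to_bytes(m: &[[f32; 4]; 4]) -> Vec<u8> {
    m.iter()
        .flatten()
        .flat_map(|f| f.to_ne_bytes())
        .collect()
}

fn main() {
    let identity = [
        [1.0, 0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0],
    ];
    let bytes = matrix_to_bytes(&identity);
    // A mat4x4<f32> uniform is 16 floats = 64 bytes.
    assert_eq!(bytes.len(), 64);
    assert_eq!(&bytes[0..4], &1.0f32.to_ne_bytes());
}
```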
@ -214,7 +214,7 @@ render_pass.draw_indexed(0..self.num_indices, 0, 0..self.instances.len() as _);
<div class="warning">
Make sure if you add new instances to the `Vec`, that you recreate the `instance_buffer` and as well as `uniform_bind_group`, otherwise your new instances won't show up correctly.
Make sure that if you add new instances to the `Vec`, you recreate the `instance_buffer` as well as the `camera_bind_group`; otherwise, your new instances won't show up correctly.
</div>
@ -247,7 +247,7 @@ fn main(
}
```
We'll apply the `model_matrix` before we apply `uniforms.view_proj`. We do this because the `uniforms.view_proj` changes the coordinate system from `world space` to `camera space`. Our `model_matrix` is a `world space` transformation, so we don't want to be in `camera space` when using it.
We'll apply the `model_matrix` before we apply `camera_uniform.view_proj`. We do this because the `camera_uniform.view_proj` changes the coordinate system from `world space` to `camera space`. Our `model_matrix` is a `world space` transformation, so we don't want to be in `camera space` when using it.
@ -771,32 +771,32 @@ Because this is relative to the view angle, we are going to need to pass in the
```wgsl
[[block]]
struct Uniforms {
struct Camera {
view_pos: vec4<f32>;
view_proj: mat4x4<f32>;
};
[[group(1), binding(0)]]
var<uniform> uniforms: Uniforms;
var<uniform> camera: Camera;
```
<div class="note">
Don't forget to update the `Uniforms` struct in `light.wgsl` as well, as if it doesn't match the `Uniforms` struct in rust, the light will render wrong.
Don't forget to update the `Camera` struct in `light.wgsl` as well; if it doesn't match the `CameraUniform` struct in Rust, the light will render incorrectly.
</div>
We're going to need to update the `Uniforms` struct as well.
We're going to need to update the `CameraUniform` struct as well.
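Mirroring the shader-side `Camera` struct above, a sketch of the updated Rust struct (the field name `view_position` and the trimmed derive list are assumptions; the tutorial also derives bytemuck::Pod and bytemuck::Zeroable):

```rust
#[repr(C)]
#[derive(Debug, Copy, Clone)]
struct CameraUniform {
    // A vec4 rather than a vec3: WGSL aligns vec3 to 16 bytes, so the
    // padded vec4 keeps the Rust and shader memory layouts in sync.
    view_position: [f32; 4],
    view_proj: [[f32; 4]; 4],
}
```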
@ -816,7 +816,7 @@ Since we want to use our uniforms in the fragment shader now, we need to change
```rust
// main.rs
let uniform_bind_group_layout = device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
let camera_bind_group_layout = device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
entries: &[
wgpu::BindGroupLayoutEntry {
// ...
@ -833,7 +833,7 @@ We're going to get the direction from the fragment's position to the camera, and
```wgsl
// In the fragment shader...
let view_dir = normalize(uniforms.view_pos.xyz - in.world_position);
let view_dir = normalize(camera.view_pos.xyz - in.world_position);
let reflect_dir = reflect(-light_dir, in.world_normal);
```
@ -863,7 +863,7 @@ If we just look at the `specular_color` on its own, we get this.
Up to this point we've actually only implemented the Phong part of Blinn-Phong. The Phong reflection model works well, but it can break down under [certain circumstances](https://learnopengl.com/Advanced-Lighting/Advanced-Lighting). The Blinn part of Blinn-Phong comes from the realization that if you add the `view_dir`, and `light_dir` together, normalize the result and use the dot product of that and the `normal`, you get roughly the same results without the issues that using `reflect_dir` had.
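To see the two formulas side by side, here is a plain-Rust sketch of both specular terms (the vector helpers are hand-rolled stand-ins for the WGSL built-ins; the scene vectors are made up for illustration). The two give similar, but not identical, falloff:

```rust
fn dot(a: [f32; 3], b: [f32; 3]) -> f32 { a[0]*b[0] + a[1]*b[1] + a[2]*b[2] }
fn scale(v: [f32; 3], s: f32) -> [f32; 3] { [v[0]*s, v[1]*s, v[2]*s] }
fn add(a: [f32; 3], b: [f32; 3]) -> [f32; 3] { [a[0]+b[0], a[1]+b[1], a[2]+b[2]] }
fn normalize(v: [f32; 3]) -> [f32; 3] { scale(v, 1.0 / dot(v, v).sqrt()) }
// WGSL's reflect(i, n) = i - 2 * dot(n, i) * n
fn reflect(i: [f32; 3], n: [f32; 3]) -> [f32; 3] { add(i, scale(n, -2.0 * dot(n, i))) }

fn main() {
    let normal = [0.0, 1.0, 0.0];
    let view_dir = normalize([0.0, 1.0, 0.0]);
    let light_dir = normalize([1.0, 1.0, 0.0]);
    // Phong: compare the view direction with the reflected light direction.
    let reflect_dir = reflect(scale(light_dir, -1.0), normal);
    let phong = dot(view_dir, reflect_dir).max(0.0).powf(32.0);
    // Blinn: compare the normal with the half-vector instead.
    let half_dir = normalize(add(view_dir, light_dir));
    let blinn = dot(normal, half_dir).max(0.0).powf(32.0);
    // The half-vector halves the effective angle, so Blinn falls off
    // more gently for the same exponent.
    assert!(blinn > phong);
    println!("phong = {phong:.6}, blinn = {blinn:.6}");
}
```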
```wgsl
let view_dir = normalize(uniforms.view_pos.xyz - in.world_position);
let view_dir = normalize(camera.view_pos.xyz - in.world_position);
let half_dir = normalize(view_dir + light_dir);
let specular_strength = pow(max(dot(in.world_normal, half_dir), 0.0), 32.0);