@@ -651,13 +647,13 @@ impl model::Vertex for InstanceRaw {
attributes: &[
wgpu::VertexAttribute {
offset: 0,
- // While our vertex shader only uses locations 0, and 1 now, in later tutorials we'll
- // be using 2, 3, and 4, for Vertex. We'll start at slot 5 not conflict with them later
+ // While our vertex shader only uses locations 0 and 1 now, in later tutorials, we'll
+ // be using 2, 3, and 4 for Vertex. We'll start at slot 5 so as not to conflict with them later.
shader_location: 5,
format: wgpu::VertexFormat::Float32x4,
},
// A mat4 takes up 4 vertex slots as it is technically 4 vec4s. We need to define a slot
- // for each vec4. We don't have to do this in code though.
+ // for each vec4. We don't have to do this in code, though.
wgpu::VertexAttribute {
offset: mem::size_of::<[f32; 4]>() as wgpu::BufferAddress,
shader_location: 6,
@@ -716,7 +712,7 @@ impl Instance {
}
```
-Now we need to reconstruct the normal matrix in the vertex shader.
+Now, we need to reconstruct the normal matrix in the vertex shader.
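The idea is that the instance buffer carries the normal matrix as three `vec3` columns, and the vertex shader rebuilds a `mat3x3` from them. As a rough sketch of where we're headed (assuming the columns end up named `normal_matrix_0` through `normal_matrix_2`, and treating this as an illustration rather than the exact diff):

```wgsl
// Rebuild the 3x3 normal matrix from the three columns passed per instance,
// then use it to move the model-space normal into world space.
let normal_matrix = mat3x3<f32>(
    instance.normal_matrix_0,
    instance.normal_matrix_1,
    instance.normal_matrix_2,
);
out.world_normal = normal_matrix * model.normal;
```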
```wgsl
struct InstanceInput {
@@ -766,9 +762,9 @@ fn vs_main(
-I'm currently doing things in [world space](https://gamedev.stackexchange.com/questions/65783/what-are-world-space-and-eye-space-in-game-development). Doing things in view-space also known as eye-space, is more standard as objects can have lighting issues when they are further away from the origin. If we wanted to use view-space, we would have included the rotation due to the view matrix as well. We'd also have to transform our light's position using something like `view_matrix * model_matrix * light_position` to keep the calculation from getting messed up when the camera moves.
+I'm currently doing things in [world space](https://gamedev.stackexchange.com/questions/65783/what-are-world-space-and-eye-space-in-game-development). Doing things in view-space, also known as eye-space, is more standard as objects can have lighting issues when they are further away from the origin. If we wanted to use view-space, we would have included the rotation due to the view matrix as well. We'd also have to transform our light's position using something like `view_matrix * model_matrix * light_position` to keep the calculation from getting messed up when the camera moves.
-There are advantages to using view space. The main one is when you have massive worlds doing lighting and other calculations in model spacing can cause issues as floating-point precision degrades when numbers get really large. View space keeps the camera at the origin meaning all calculations will be using smaller numbers. The actual lighting math ends up the same, but it does require a bit more setup.
+There are advantages to using view space. The main one is that in massive worlds, doing lighting and other calculations in model space can cause issues, as floating-point precision degrades when numbers get really large. View space keeps the camera at the origin, meaning all calculations will use smaller numbers. The actual lighting math ends up the same, but it does require a bit more setup.
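If you wanted to go the view-space route, the fragment setup might look something like the sketch below. This is only an illustration: it assumes you've added the raw view matrix to the camera uniform as `camera.view`, which the shader in this tutorial doesn't have.

```wgsl
// Hypothetical view-space setup: move both the fragment position and the
// light position into view space so the camera sits at the origin.
let view_position = (camera.view * vec4<f32>(in.world_position, 1.0)).xyz;
let view_light_position = (camera.view * vec4<f32>(light.position, 1.0)).xyz;
let light_dir = normalize(view_light_position - view_position);
// The normal would also need the view rotation applied for the math to stay consistent.
```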
@@ -776,21 +772,21 @@ With that change, our lighting now looks correct.
![./diffuse_right.png](./diffuse_right.png)
-Bringing back our other objects, and adding the ambient lighting gives us this.
+Bringing back our other objects and adding the ambient lighting gives us this.
![./ambient_diffuse_lighting.png](./ambient_diffuse_lighting.png)
-If you can guarantee that your model matrix will always apply uniform scaling to your objects, you can get away with just using the model matrix. Github user @julhe pointed shared this code with me that does the trick:
+If you can guarantee that your model matrix will always apply uniform scaling to your objects, you can get away with just using the model matrix. GitHub user @julhe shared this code with me, and it does the trick:
```wgsl
out.world_normal = (model_matrix * vec4(model.normal, 0.0)).xyz;
```
-This works by exploiting the fact that by multiplying a 4x4 matrix by a vector with 0 in the w component, only the rotation and scaling will be applied to the vector. You'll need to normalize this vector though as normals need to be unit length for the calculations to work.
+This works by exploiting the fact that by multiplying a 4x4 matrix by a vector with 0 in the w component, only the rotation and scaling will be applied to the vector. You'll need to normalize this vector, though, as normals need to be unit length for the calculations to work.
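Putting the two together, the uniform-scale shortcut with the normalization added could look like this:

```wgsl
// Rotation/scale only (w = 0), then renormalize to get a unit-length normal.
out.world_normal = normalize((model_matrix * vec4(model.normal, 0.0)).xyz);
```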
-The scaling factor *needs* to be uniform in order for this to work. If it's not the resulting normal will be skewed as you can see in the following image.
+The scaling factor *needs* to be uniform in order for this to work. If it's not, the resulting normal will be skewed, as you can see in the following image.
![./normal-scale-issue.png](./normal-scale-issue.png)
@@ -863,7 +859,7 @@ let camera_bind_group_layout = device.create_bind_group_layout(&wgpu::BindGroupL
});
```
-We're going to get the direction from the fragment's position to the camera, and use that with the normal to calculate the `reflect_dir`.
+We're going to get the direction from the fragment's position to the camera and use that with the normal to calculate the `reflect_dir`.
```wgsl
// shader.wgsl
@@ -872,7 +868,7 @@ let view_dir = normalize(camera.view_pos.xyz - in.world_position);
let reflect_dir = reflect(-light_dir, in.world_normal);
```
-Then we use the dot product to calculate the `specular_strength` and use that to compute the `specular_color`.
+Then, we use the dot product to calculate the `specular_strength` and use that to compute the `specular_color`.
```wgsl
let specular_strength = pow(max(dot(view_dir, reflect_dir), 0.0), 32.0);
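// Sketch of the follow-up step: scale the light's color by the strength to
// get the specular contribution (this assumes the `light` uniform used
// earlier in this tutorial).
let specular_color = specular_strength * light.color;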
@@ -889,13 +885,13 @@ With that, you should have something like this.
![./ambient_diffuse_specular_lighting.png](./ambient_diffuse_specular_lighting.png)
-If we just look at the `specular_color` on its own we get this.
+If we just look at the `specular_color` on its own, we get this.
![./specular_lighting.png](./specular_lighting.png)
## The half direction
-Up to this point, we've actually only implemented the Phong part of Blinn-Phong. The Phong reflection model works well, but it can break down under [certain circumstances](https://learnopengl.com/Advanced-Lighting/Advanced-Lighting). The Blinn part of Blinn-Phong comes from the realization that if you add the `view_dir`, and `light_dir` together, normalize the result and use the dot product of that and the `normal`, you get roughly the same results without the issues that using `reflect_dir` had.
+Up to this point, we've actually only implemented the Phong part of Blinn-Phong. The Phong reflection model works well, but it can break down under [certain circumstances](https://learnopengl.com/Advanced-Lighting/Advanced-Lighting). The Blinn part of Blinn-Phong comes from the realization that if you add `view_dir` and `light_dir` together, normalize the result, and take the dot product of that and the `normal`, you get roughly the same results without the issues that using `reflect_dir` had.
```wgsl
let view_dir = normalize(camera.view_pos.xyz - in.world_position);
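// Sketch of the Blinn half-direction version: sum the two directions,
// normalize, and dot the result with the normal instead of using reflect_dir.
let half_dir = normalize(view_dir + light_dir);
let specular_strength = pow(max(dot(in.world_normal, half_dir), 0.0), 32.0);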