Typos and grammar for rest of docs

pull/347/head
dis_da_moe 2 years ago
parent 03e905cc4d
commit 2275f68247

@ -18,7 +18,7 @@ Once 0.13 comes out I'll revert to using the version published on crates.io.
While we can tell that our scene is 3D because of our camera, it still feels very flat. That's because our model stays the same color regardless of how it's oriented. If we want to change that, we need to add lighting to our scene.
In the real world, a light source emits photons which bounce around until they enter into our eyes. The color we see is the light's original color minus whatever energy it lost while it was bouncing around.
In the real world, a light source emits photons that bounce around until they enter our eyes. The color we see is the light's original color minus whatever energy it lost while it was bouncing around.
In the computer graphics world, modeling individual photons would be hilariously computationally expensive. A single 100 Watt light bulb emits about 3.27 x 10^20 photons *per second*. Just imagine that for the sun! To get around this, we're gonna use math to cheat.
@ -26,11 +26,11 @@ Let's discuss a few options.
## Ray/Path Tracing
This is an *advanced* topic, and we won't be covering it in depth here. It's the closest model to the way light really works so I felt I had to mention it. Check out the [ray tracing tutorial](../../todo/) if you want to learn more.
This is an *advanced* topic, and we won't be covering it in-depth here. It's the closest model to the way light really works so I felt I had to mention it. Check out the [ray tracing tutorial](../../todo/) if you want to learn more.
## The Blinn-Phong Model
Ray/path tracing is often too computationally expensive for most realtime applications (though that is starting to change), so a more efficient, if less accurate method based on the [Phong reflection model](https://en.wikipedia.org/wiki/Phong_shading) is often used. It splits up the lighting calculation into three (3) parts: ambient lighting, diffuse lighting, and specular lighting. We're going to be learning the [Blinn-Phong model](https://en.wikipedia.org/wiki/Blinn%E2%80%93Phong_reflection_model), which cheats a bit at the specular calculation to speed things up.
Ray/path tracing is often too computationally expensive for most real-time applications (though that is starting to change), so a more efficient, if less accurate method based on the [Phong reflection model](https://en.wikipedia.org/wiki/Phong_shading) is often used. It splits up the lighting calculation into three (3) parts: ambient lighting, diffuse lighting, and specular lighting. We're going to be learning the [Blinn-Phong model](https://en.wikipedia.org/wiki/Blinn%E2%80%93Phong_reflection_model), which cheats a bit at the specular calculation to speed things up.
Before we can get into that though, we need to add a light to our scene.
@ -54,17 +54,17 @@ Our `LightUniform` represents a colored point in space. We're just going to use
<div class="note">
The rule of thumb for alignment with WGSL structs is field alignments are
always powers of 2. For example a `vec3` may only have 3 float fields giving
always powers of 2. For example, a `vec3` may only have 3 float fields giving
it a size of 12, but the alignment will be bumped up to the next power of 2, which is
16. This means that you have to be more careful with how you lay out your struct
in Rust.
in Rust.
Some developers choose to use `vec4`s instead of `vec3`s to avoid alignment
issues. You can learn more about the alignment rules in the [WGSL spec](https://www.w3.org/TR/WGSL/#alignment-and-size).
</div>
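To make that concrete, here's one way a padded light uniform can be laid out in Rust. This is a sketch (the exact struct used in this tutorial isn't shown here and may differ slightly); the explicit `_padding` fields are the point:

```rust
use bytemuck::{Pod, Zeroable};

#[repr(C)]
#[derive(Debug, Copy, Clone, Pod, Zeroable)]
struct LightUniform {
    position: [f32; 3],
    // WGSL pads a vec3 out to 16 bytes, so we add explicit padding
    // to keep the Rust layout in sync with what the shader expects.
    _padding: u32,
    color: [f32; 3],
    _padding2: u32,
}
```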
We're going to create another buffer to store our light in.
We're going to create another buffer to store our light in.
```rust
let light_uniform = LightUniform {
@ -85,7 +85,7 @@ let light_buffer = device.create_buffer_init(
```
Don't forget to add the `light_uniform` and `light_buffer` to `State`. After that we need to create a bind group layout and bind group for our light.
Don't forget to add the `light_uniform` and `light_buffer` to `State`. After that, we need to create a bind group layout and bind group for our light.
```rust
let light_bind_group_layout =
@ -125,7 +125,7 @@ let render_pipeline_layout = device.create_pipeline_layout(&wgpu::PipelineLayout
});
```
Let's also update the lights position in the `update()` method, so we can see what our objects look like from different angles.
Let's also update the light's position in the `update()` method, so we can see what our objects look like from different angles.
```rust
// Update the light
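// A hedged sketch of the rest of this update (assuming cgmath's prelude is in
// scope and that LightUniform.position is a [f32; 3]):
let old_position: cgmath::Vector3<_> = self.light_uniform.position.into();
self.light_uniform.position =
    (cgmath::Quaternion::from_axis_angle((0.0, 1.0, 0.0).into(), cgmath::Deg(1.0))
        * old_position)
        .into();
// Write the updated uniform back to the light buffer.
self.queue
    .write_buffer(&self.light_buffer, 0, bytemuck::cast_slice(&[self.light_uniform]));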
@ -141,7 +141,7 @@ This will have the light rotate around the origin one degree every frame.
## Seeing the light
For debugging purposes, it would be nice if we could see where the light is to make sure that the scene looks correct. We could adapt our existing render pipeline to draw the light, but it will likely get in the way. Instead we are going to extract our render pipeline creation code into a new function called `create_render_pipeline()`.
For debugging purposes, it would be nice if we could see where the light is to make sure that the scene looks correct. We could adapt our existing render pipeline to draw the light, but it will likely get in the way. Instead, we are going to extract our render pipeline creation code into a new function called `create_render_pipeline()`.
```rust
@ -339,7 +339,7 @@ let light_render_pipeline = {
I chose to create a separate layout for the `light_render_pipeline`, as it doesn't need all the resources that the regular `render_pipeline` needs (mainly just the textures).
With that in place we need to write the actual shaders.
With that in place, we need to write the actual shaders.
```wgsl
// light.wgsl
@ -469,7 +469,7 @@ where
}
```
Finally we want to add Light rendering to our render passes.
Finally, we want to add Light rendering to our render passes.
```rust
impl State {
@ -496,13 +496,13 @@ impl State {
}
```
With all that we'll end up with something like this.
With all that, we'll end up with something like this.
![./light-in-scene.png](./light-in-scene.png)
## Ambient Lighting
Light has a tendency to bounce around before entering our eyes. That's why you can see in areas that are in shadow. Actually modeling this interaction is computationally expensive, so we cheat. We define an ambient lighting value that stands in for the light bouncing of other parts of the scene to light our objects.
Light has a tendency to bounce around before entering our eyes. That's why you can see in areas that are in shadow. Actually modeling this interaction is computationally expensive, so we cheat. We define an ambient lighting value that stands for the light bouncing off other parts of the scene to light our objects.
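In code terms, the ambient term is just the light color scaled by a small factor and tinted by the object color. Here's a rough sketch of that math in Rust using cgmath (the inputs are hypothetical; the real version lives in the shader):

```rust
use cgmath::{ElementWise, Vector3};

/// A sketch of the ambient term: a fixed fraction of the light color,
/// tinted by the object's own color.
fn ambient(light_color: Vector3<f32>, object_color: Vector3<f32>) -> Vector3<f32> {
    let ambient_strength = 0.1;
    (light_color * ambient_strength).mul_element_wise(object_color)
}
```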
The ambient part is based on the light color as well as the object color. We've already added our `light_bind_group`, so we just need to use it in our shader. In `shader.wgsl`, add the following below the texture uniforms.
@ -532,7 +532,7 @@ fn fs_main(in: VertexOutput) -> [[location(0)]] vec4<f32> {
}
```
With that we should get something like the this.
With that, we should get something like this.
![./ambient_lighting.png](./ambient_lighting.png)
@ -542,7 +542,7 @@ Remember the normal vectors that were included with our model? We're finally goi
![./normal_diagram.png](./normal_diagram.png)
If the dot product of the normal and light vector is 1.0, that means that the current fragment is directly inline with the light source and will receive the lights full intensity. A value of 0.0 or lower means that the surface is perpendicular or facing away from the light, and therefore will be dark.
If the dot product of the normal and light vector is 1.0, that means that the current fragment is directly in line with the light source and will receive the light's full intensity. A value of 0.0 or lower means that the surface is perpendicular or facing away from the light, and therefore will be dark.
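That rule is small enough to sketch on its own. Here it is in Rust with cgmath (hypothetical inputs; all vectors assumed unit length; the real version lives in the shader):

```rust
use cgmath::{InnerSpace, Vector3};

/// A sketch of the diffuse term: full light color when the surface faces the
/// light head-on, fading to black at 90 degrees or beyond.
fn diffuse(normal: Vector3<f32>, light_dir: Vector3<f32>, light_color: Vector3<f32>) -> Vector3<f32> {
    let diffuse_strength = normal.dot(light_dir).max(0.0);
    light_color * diffuse_strength
}
```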
We're going to need to pull the normal vector into our `shader.wgsl`.
@ -565,7 +565,7 @@ struct VertexOutput {
};
```
For now let's just pass the normal directly as is. This is wrong, but we'll fix it later.
For now, let's just pass the normal directly as-is. This is wrong, but we'll fix it later.
```wgsl
[[stage(vertex)]]
@ -589,7 +589,7 @@ fn vs_main(
}
```
With that we can do the actual calculation. Below the `ambient_color` calculation, but above `result`, add the following.
With that, we can do the actual calculation. Below the `ambient_color` calculation, but above `result`, add the following.
```wgsl
let light_dir = normalize(light.position - in.world_position);
@ -604,7 +604,7 @@ Now we can include the `diffuse_color` in the `result`.
let result = (ambient_color + diffuse_color) * object_color.xyz;
```
With that we get something like this.
With that, we get something like this.
![./ambient_diffuse_wrong.png](./ambient_diffuse_wrong.png)
@ -633,11 +633,11 @@ This is clearly wrong as the light is illuminating the wrong side of the cube. T
![./normal_not_rotated.png](./normal_not_rotated.png)
We need to use the model matrix to transform the normals to be in the right direction. We only want the rotation data though. A normal represents a direction, and should be a unit vector throughout the calculation. We can get our normals into the right direction using what is called a normal matrix.
We need to use the model matrix to transform the normals to be in the right direction. We only want the rotation data though. A normal represents a direction and should be a unit vector throughout the calculation. We can get our normals in the right direction using what is called a normal matrix.
We could compute the normal matrix in the vertex shader, but that would involve inverting the `model_matrix`, and WGSL doesn't actually have an inverse function. We would have to code our own. On top of that computing the inverse of a matrix is actually really expensive, especially doing that compututation for every vertex.
We could compute the normal matrix in the vertex shader, but that would involve inverting the `model_matrix`, and WGSL doesn't actually have an inverse function. We would have to code our own. On top of that, computing the inverse of a matrix is actually really expensive, especially doing that computation for every vertex.
Instead we're going to add a `normal` matrix field to `InstanceRaw`. Instead of inverting the model matrix, we'll just be using the instance's rotation to create a `Matrix3`.
Instead, we're going to add a `normal` matrix field to `InstanceRaw`. Instead of inverting the model matrix, we'll just be using the instance's rotation to create a `Matrix3`.
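Here's a hedged sketch of what that can look like when building `InstanceRaw` (the type and field names are assumed from the earlier instancing code, not shown here):

```rust
use cgmath::{Matrix3, Matrix4, Quaternion, Vector3};

struct Instance {
    position: Vector3<f32>,
    rotation: Quaternion<f32>,
}

#[repr(C)]
#[derive(Copy, Clone)]
struct InstanceRaw {
    model: [[f32; 4]; 4],
    normal: [[f32; 3]; 3],
}

impl Instance {
    fn to_raw(&self) -> InstanceRaw {
        InstanceRaw {
            model: (Matrix4::from_translation(self.position) * Matrix4::from(self.rotation)).into(),
            // The normal matrix only needs the rotation: no translation and no
            // scale means there's nothing to invert.
            normal: Matrix3::from(self.rotation).into(),
        }
    }
}
```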
<div class="note">
@ -781,13 +781,13 @@ fn vs_main(
<div class="note">
I'm currently doing things in [world space](https://gamedev.stackexchange.com/questions/65783/what-are-world-space-and-eye-space-in-game-development). Doing things in view-space also known as eye-space, is more standard as objects can have lighting issues when they are further away from the origin. If we wanted to use view-space, we would have include the rotation due to the view matrix as well. We'd also have to transform our light's position using something like `view_matrix * model_matrix * light_position` to keep the calculation from getting messed up when the camera moves.
I'm currently doing things in [world space](https://gamedev.stackexchange.com/questions/65783/what-are-world-space-and-eye-space-in-game-development). Doing things in view-space, also known as eye-space, is more standard as objects can have lighting issues when they are further away from the origin. If we wanted to use view-space, we would have to include the rotation due to the view matrix as well. We'd also have to transform our light's position using something like `view_matrix * model_matrix * light_position` to keep the calculation from getting messed up when the camera moves.
There are advantages to using view space. The main one is when you have massive worlds doing lighting and other calculations in model spacing can cause issues as floating point precision degrades when numbers get really large. View space keeps the camera at the origin meaning all calculations will be using smaller numbers. The actual lighting math ends up the same, but it does require a bit more setup.
There are advantages to using view space. The main one is that when you have massive worlds, doing lighting and other calculations in model space can cause issues as floating-point precision degrades when numbers get really large. View space keeps the camera at the origin, meaning all calculations will be using smaller numbers. The actual lighting math ends up the same, but it does require a bit more setup.
</div>
With that change our lighting now looks correct.
With that change, our lighting now looks correct.
![./diffuse_right.png](./diffuse_right.png)
@ -803,7 +803,7 @@ If you can guarantee that your model matrix will always apply uniform scaling to
out.world_normal = (model_matrix * vec4<f32>(model.normal, 0.0)).xyz;
```
This works by exploiting the fact that multiplying a 4x4 matrix by a vector with 0 in the w component, only the rotation and scaling will be applied to the vector. You'll need to normalize this vector though as normals need to be unit length for the calculations to work.
This works by exploiting the fact that when you multiply a 4x4 matrix by a vector with 0 in the w component, only the rotation and scaling are applied to the vector. You'll need to normalize this vector, though, as normals need to be unit length for the calculations to work.
The scaling factor *needs* to be uniform in order for this to work. If it's not, the resulting normal will be skewed, as you can see in the following image.
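To see the skew numerically, here's a small self-contained check (my own sketch, not part of the tutorial code): scale a direction lying in a surface and that surface's normal by the same non-uniform factor, and the two stop being perpendicular.

```rust
use cgmath::{ElementWise, InnerSpace, Vector3};

fn main() {
    // A direction lying in a surface and the surface's normal: perpendicular, so dot == 0.
    let tangent = Vector3::new(1.0f32, 1.0, 0.0).normalize();
    let normal = Vector3::new(-1.0f32, 1.0, 0.0).normalize();
    assert!(tangent.dot(normal).abs() < 1e-6);

    // Apply the same non-uniform scale to both vectors.
    let scale = Vector3::new(2.0f32, 1.0, 1.0);
    let scaled_tangent = tangent.mul_element_wise(scale);
    let scaled_normal = normal.mul_element_wise(scale);

    // No longer perpendicular: the "normal" has been skewed off the surface.
    println!("dot after scaling: {}", scaled_tangent.dot(scaled_normal)); // -1.5, not 0
}
```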
@ -813,7 +813,7 @@ The scaling factor *needs* to be uniform in order for this to work. If it's not
## Specular Lighting
Specular lighting describes the highlights that appear on objects when viewed from certain angles. If you've ever looked at a car, it's the super bright parts. Basically, some of the light can reflect of the surface like a mirror. The location of the hightlight shifts depending on what angle you view it at.
Specular lighting describes the highlights that appear on objects when viewed from certain angles. If you've ever looked at a car, it's the super bright parts. Basically, some of the light can reflect off the surface like a mirror. The location of the highlight shifts depending on what angle you view it at.
![./specular_diagram.png](./specular_diagram.png)
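The core of the Phong specular term is compact enough to sketch outside the shader. Here it is in Rust with cgmath (hypothetical inputs; all vectors unit length; `light_dir` points from the surface toward the light):

```rust
use cgmath::{InnerSpace, Vector3};

/// A sketch of Phong-style specular strength for one fragment.
fn specular_strength(normal: Vector3<f32>, light_dir: Vector3<f32>, view_dir: Vector3<f32>) -> f32 {
    // Mirror the light direction about the normal: r = 2(n·l)n - l
    let reflect_dir = normal * (2.0 * normal.dot(light_dir)) - light_dir;
    // The closer the view direction is to that reflection, the brighter the
    // highlight. The exponent (32 here) controls how tight the highlight is.
    view_dir.dot(reflect_dir).max(0.0).powf(32.0)
}
```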
@ -861,7 +861,7 @@ impl CameraUniform {
}
```
Since we want to use our uniforms in the fragment shader now, we need to change it's visibility.
Since we want to use our uniforms in the fragment shader now, we need to change its visibility.
```rust
// lib.rs
@ -894,23 +894,23 @@ let specular_strength = pow(max(dot(view_dir, reflect_dir), 0.0), 32.0);
let specular_color = specular_strength * light.color;
```
Finally we add that to the result.
Finally, we add that to the result.
```wgsl
let result = (ambient_color + diffuse_color + specular_color) * object_color.xyz;
```
With that you should have something like this.
With that, you should have something like this.
![./ambient_diffuse_specular_lighting.png](./ambient_diffuse_specular_lighting.png)
If we just look at the `specular_color` on it's own we get this.
If we just look at the `specular_color` on its own we get this.
![./specular_lighting.png](./specular_lighting.png)
## The half direction
Up to this point we've actually only implemented the Phong part of Blinn-Phong. The Phong reflection model works well, but it can break down under [certain circumstances](https://learnopengl.com/Advanced-Lighting/Advanced-Lighting). The Blinn part of Blinn-Phong comes from the realization that if you add the `view_dir`, and `light_dir` together, normalize the result and use the dot product of that and the `normal`, you get roughly the same results without the issues that using `reflect_dir` had.
Up to this point, we've actually only implemented the Phong part of Blinn-Phong. The Phong reflection model works well, but it can break down under [certain circumstances](https://learnopengl.com/Advanced-Lighting/Advanced-Lighting). The Blinn part of Blinn-Phong comes from the realization that if you add the `view_dir` and `light_dir` together, normalize the result, and use the dot product of that and the `normal`, you get roughly the same results without the issues that using `reflect_dir` had.
```wgsl
let view_dir = normalize(camera.view_pos.xyz - in.world_position);
@ -919,7 +919,7 @@ let half_dir = normalize(view_dir + light_dir);
let specular_strength = pow(max(dot(in.world_normal, half_dir), 0.0), 32.0);
```
It's hard to tell the difference, but here's the results.
It's hard to tell the difference, but here are the results.
![./half_dir.png](./half_dir.png)

@ -16,7 +16,7 @@ Once 0.13 comes out I'll revert to using the version published on crates.io.
</div>
With just lighting, our scene is already looking pretty good. Still, our models are still overly smooth. This is understandable because we are using a very simple model. If we were using a texture that was supposed to be smooth, this wouldn't be a problem, but our brick texture is supposed to be rougher. We could solve this by adding more geometry, but that would slow our scene down, and it be would hard to know where to add new polygons. This is were normal mapping comes in.
With just lighting, our scene is already looking pretty good. Still, our models are overly smooth. This is understandable because we are using a very simple model. If we were using a texture that was supposed to be smooth, this wouldn't be a problem, but our brick texture is supposed to be rougher. We could solve this by adding more geometry, but that would slow our scene down, and it would be hard to know where to add new polygons. This is where normal mapping comes in.
Remember in [the instancing tutorial](/beginner/tutorial7-instancing/#a-different-way-textures), we experimented with storing instance data in a texture? A normal map is doing just that with normal data! We'll use the normals in the normal map in our lighting calculation in addition to the vertex normal.
@ -65,7 +65,7 @@ let texture_bind_group_layout = device.create_bind_group_layout(&wgpu::BindGroup
});
```
We'll need to actually load the normal map. We'll do this in the loop we create the materials in.
We'll need to actually load the normal map. We'll do this in the loop where we create the materials.
```rust
let diffuse_path = mat.diffuse_texture;
@ -78,7 +78,7 @@ We'll need to actually load the normal map. We'll do this in the loop we create
* Note: I duplicated and moved the `command_buffers.push(cmds);` line. This means we can reuse the `cmds` variable for both the normal map and diffuse/color map.
Our `Material`'s `bind_group` will have to change as well.
Our `Material`'s `bind_group` will have to change as well.
```rust
let bind_group = device.create_bind_group(&wgpu::BindGroupDescriptor {
@ -109,7 +109,7 @@ materials.push(Material {
});
```
Now we can add use the texture in the fragment shader.
Now we can use the texture in the fragment shader.
```wgsl
// Fragment shader
@ -159,11 +159,11 @@ Parts of the scene are dark when they should be lit up, and vice versa.
## Tangent Space to World Space
I mentioned it briefly in the [lighting tutorial](/intermediate/tutorial10-lighting/#the-normal-matrix), that we were doing our lighting calculation in "world space". This meant that the entire scene was oriented with respect to the *world's* coordinate system. When we pull the normal data from our normal texture, all the normals are in what's known as pointing roughly in the positive z direction. That means that our lighting calculation thinks all of the surfaces of our models are facing in roughly the same direction. This is referred to as `tangent space`.
I mentioned briefly in the [lighting tutorial](/intermediate/tutorial10-lighting/#the-normal-matrix) that we were doing our lighting calculation in "world space". This meant that the entire scene was oriented with respect to the *world's* coordinate system. When we pull the normal data from our normal texture, all the normals point roughly in the positive z direction. That means that our lighting calculation thinks all of the surfaces of our models are facing in roughly the same direction. This is referred to as `tangent space`.
If we remember the [lighting-tutorial](/intermediate/tutorial10-lighting/#), we used the vertex normal to indicate the direction of the surface. It turns out we can use that to transform our normals from `tangent space` into `world space`. In order to do that we need to draw from the depths of linear algebra.
We can create a matrix that represents a coordinate system using 3 vectors that are perpendicular (or orthonormal) to each other. Basically we define the x, y, and z axes of our coordinate system.
We can create a matrix that represents a coordinate system using 3 vectors that are perpendicular (or orthonormal) to each other. Basically, we define the x, y, and z axes of our coordinate system.
```wgsl
let coordinate_system = mat3x3<f32>(
@ -173,17 +173,17 @@ let coordinate_system = mat3x3<f32>(
);
```
We're going to create a matrix that will represent the coordinate space relative to our vertex normals. We're then going to use that to transform our normal map data to be in world space.
We're going to create a matrix that will represent the coordinate space relative to our vertex normals. We're then going to use that to transform our normal map data to be in world space.
## The tangent and the bitangent
We have one of the 3 vectors we need, the normal. What about the others? These are the tangent, and bitangent vectors. A tangent represents any vector that is parallel with a surface (aka. doesn't intersect with it). The tangent is always perpendicular to the normal vector. The bitangent is a tangent vector that is perpendicular to the other tangent vector. Together the tangent, bitangent, and normal represent the x, y, and z axes respectively.
We have one of the 3 vectors we need, the normal. What about the others? These are the tangent and bitangent vectors. A tangent represents any vector that is parallel with a surface (aka. doesn't intersect with it). The tangent is always perpendicular to the normal vector. The bitangent is a tangent vector that is perpendicular to the other tangent vector. Together the tangent, bitangent, and normal represent the x, y, and z axes respectively.
Some model formats include the tanget and bitangent (sometimes called the binormal) in the vertex data, but OBJ does not. We'll have to calculate them manually. Luckily we can derive our tangent, and bitangent from our existing vertex data. Take a look at the following diagram.
Some model formats include the tangent and bitangent (sometimes called the binormal) in the vertex data, but OBJ does not. We'll have to calculate them manually. Luckily, we can derive our tangent and bitangent from our existing vertex data. Take a look at the following diagram.
![](./tangent_space.png)
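The diagram boils down to a small linear system: each triangle edge is a combination of the (unknown) tangent and bitangent, weighted by the texture-coordinate deltas along that edge. Here's a hedged sketch of solving it for a single triangle in Rust with cgmath (the names are my own):

```rust
use cgmath::{Vector2, Vector3};

// One triangle: positions p0..p2 and matching texture coordinates uv0..uv2.
fn tangent_bitangent(
    p0: Vector3<f32>, p1: Vector3<f32>, p2: Vector3<f32>,
    uv0: Vector2<f32>, uv1: Vector2<f32>, uv2: Vector2<f32>,
) -> (Vector3<f32>, Vector3<f32>) {
    let edge1 = p1 - p0;
    let edge2 = p2 - p0;
    let duv1 = uv1 - uv0;
    let duv2 = uv2 - uv0;

    // Solving edge1 = duv1.x * T + duv1.y * B and edge2 = duv2.x * T + duv2.y * B
    // for the tangent T and bitangent B.
    let r = 1.0 / (duv1.x * duv2.y - duv1.y * duv2.x);
    let tangent = (edge1 * duv2.y - edge2 * duv1.y) * r;
    let bitangent = (edge2 * duv1.x - edge1 * duv2.x) * r;
    (tangent, bitangent)
}
```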
Basically we can use the edges of our triangles, and our normal to calculate the tangent and bitangent. But first, we need to update our `ModelVertex` struct in `model.rs`.
Basically, we can use the edges of our triangles and our normal to calculate the tangent and bitangent. But first, we need to update our `ModelVertex` struct in `model.rs`.
```rust
#[repr(C)]
@ -227,7 +227,7 @@ impl Vertex for ModelVertex {
}
```
Now we can calculate the new tangent, and bitangent vectors.
Now we can calculate the new tangent and bitangent vectors.
```rust
impl Model {
@ -337,7 +337,7 @@ impl Model {
## World Space to Tangent Space
Since the normal map by default is in tangent space, we need to transform all the other variables used in that calculation to tangent space as well. We'll need to construct the tangent matrix in the vertex shader. First we need our `VertexInput` to include the tangent and bitangents we calculated earlier.
Since the normal map by default is in tangent space, we need to transform all the other variables used in that calculation to tangent space as well. We'll need to construct the tangent matrix in the vertex shader. First, we need our `VertexInput` to include the tangent and bitangents we calculated earlier.
```wgsl
struct VertexInput {
@ -349,7 +349,7 @@ struct VertexInput {
};
```
Next we'll construct the `tangent_matrix` and then transform the vertex, light and view position into tangent space.
Next, we'll construct the `tangent_matrix` and then transform the vertex, light, and view positions into tangent space.
```wgsl
struct VertexOutput {
@ -395,7 +395,7 @@ fn vs_main(
}
```
Finally we'll update the fragment shader to use these transformed lighting values.
Finally, we'll update the fragment shader to use these transformed lighting values.
```wgsl
[[stage(fragment)]]
@ -419,7 +419,7 @@ We get the following from this calculation.
We've been using `Rgba8UnormSrgb` for all our textures. The `Srgb` bit specifies that we will be using [standard red green blue color space](https://en.wikipedia.org/wiki/SRGB). This is also known as linear color space. Linear color space has less color density. Even so, it is often used for diffuse textures, as they are typically made in `Srgb` color space.
Normal textures aren't made with `Srgb`. Using `Rgba8UnormSrgb` can changes how the GPU samples the texture. This can make the resulting simulation [less accurate](https://medium.com/@bgolus/generating-perfect-normal-maps-for-unity-f929e673fc57#b86c). We can avoid these issues by using `Rgba8Unorm` when we create the texture. Let's add an `is_normal_map` method to our `Texture` struct.
Normal textures aren't made with `Srgb`. Using `Rgba8UnormSrgb` can change how the GPU samples the texture. This can make the resulting simulation [less accurate](https://medium.com/@bgolus/generating-perfect-normal-maps-for-unity-f929e673fc57#b86c). We can avoid these issues by using `Rgba8Unorm` when we create the texture. Let's add an `is_normal_map` method to our `Texture` struct.
```rust
pub fn from_image(
@ -599,7 +599,7 @@ where
}
```
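The relevant change inside that function is picking the format based on the flag. Roughly (a sketch, not the full function):

```rust
// Normal maps store directions, not colors, so skip the sRGB conversion.
let format = if is_normal_map {
    wgpu::TextureFormat::Rgba8Unorm
} else {
    wgpu::TextureFormat::Rgba8UnormSrgb
};
```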
I found a cobblestone texture with matching normal map, and created a `debug_material` for that.
I found a cobblestone texture with a matching normal map and created a `debug_material` for that.
```rust
// main.rs

@ -18,7 +18,7 @@ Once 0.13 comes out I'll revert to using the version published on crates.io.
I've been putting this off for a while. Implementing a camera isn't specifically related to using WGPU properly, but it's been bugging me so let's do it.
`main.rs` is getting a little crowded, so let's create a `camera.rs` file to put our camera code. The first thing we're going to put in it in is some imports and our `OPENGL_TO_WGPU_MATRIX`.
`main.rs` is getting a little crowded, so let's create a `camera.rs` file to put our camera code. The first things we're going to put in it are some imports and our `OPENGL_TO_WGPU_MATRIX`.
```rust
use cgmath::*;
@ -50,7 +50,7 @@ instant = "0.1"
## The Camera
Next we need create a new `Camera` struct. We're going to be using a FPS style camera, so we'll store the position and the yaw (horizontal rotation), and pitch (vertical rotation). We'll have a `calc_matrix` method to create our view matrix.
Next, we need to create a new `Camera` struct. We're going to be using an FPS-style camera, so we'll store the position, the yaw (horizontal rotation), and the pitch (vertical rotation). We'll have a `calc_matrix` method to create our view matrix.
```rust
#[derive(Debug)]
@ -129,9 +129,9 @@ impl Projection {
}
```
On thing to note: `cgmath` currently returns a right-handed projection matrix from the `perspective` function. This means that the z-axis points out of the screen. If you want the z-axis to be *into* the screen (aka. a left-handed projection matrix), you'll have to code your own.
One thing to note: `cgmath` currently returns a right-handed projection matrix from the `perspective` function. This means that the z-axis points out of the screen. If you want the z-axis to be *into* the screen (aka. a left-handed projection matrix), you'll have to code your own.
You can tell the difference between a right-handed coordinate system and a left-handed one by using your hands. Point your thumb to the right. This is the x-axis. Point your pointer finger up. This is the y-axis. Extend your middle finger. This is the z-axis. On your right hand your middle finger should be pointing towards you. On your left hand it should be pointing away.
You can tell the difference between a right-handed coordinate system and a left-handed one by using your hands. Point your thumb to the right. This is the x-axis. Point your pointer finger up. This is the y-axis. Extend your middle finger. This is the z-axis. On your right hand, your middle finger should be pointing towards you. On your left hand, it should be pointing away.
![./left_right_hand.gif](./left_right_hand.gif)
@ -343,7 +343,7 @@ fn resize(&mut self, new_size: winit::dpi::PhysicalSize<u32>) {
}
```
`input()` will need to be updated as well. Up to this point we have been using `WindowEvent`s for our camera controls. While this works, it's not the best solution. The [winit docs](https://docs.rs/winit/0.24.0/winit/event/enum.WindowEvent.html?search=#variant.CursorMoved) inform us that OS will often transform the data for the `CursorMoved` event to allow effects such as cursor acceleration.
`input()` will need to be updated as well. Up to this point, we have been using `WindowEvent`s for our camera controls. While this works, it's not the best solution. The [winit docs](https://docs.rs/winit/0.24.0/winit/event/enum.WindowEvent.html?search=#variant.CursorMoved) inform us that the OS will often transform the data for the `CursorMoved` event to allow effects such as cursor acceleration.
Now, to fix this, we could change the `input()` function to process `DeviceEvent` instead of `WindowEvent`, but keyboard and button presses don't get emitted as `DeviceEvent`s on MacOS and WASM. Instead, we'll just remove the `CursorMoved` check in `input()` and add a manual call to `camera_controller.process_mouse()` in the `run()` function.
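That call can look roughly like this in the event loop (a sketch against winit 0.24-era APIs; `process_mouse` taking the raw deltas is an assumption about this tutorial's controller):

```rust
// In run(), alongside the existing event handling (assumes
// `use winit::event::{DeviceEvent, Event};` at the top of the file):
match event {
    Event::DeviceEvent {
        event: DeviceEvent::MouseMotion { delta },
        ..
    } => {
        state.camera_controller.process_mouse(delta.0, delta.1);
    }
    // ... the existing WindowEvent handling stays as it was ...
    _ => {}
}
```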
@ -425,7 +425,7 @@ fn main() {
}
```
The `update` function requires a bit more explanation. The `update_camera` function on the `CameraController` has a parameter `dt: Duration` which is the delta time or time between frames. This is to help smooth out the camera movement so that it's not locked be the framerate. Currently we aren't calculating `dt`, so I decided to pass it into `update` as a parameter.
The `update` function requires a bit more explanation. The `update_camera` function on the `CameraController` has a parameter `dt: Duration` which is the delta time or time between frames. This is to help smooth out the camera movement so that it's not locked by the framerate. Currently, we aren't calculating `dt`, so I decided to pass it into `update` as a parameter.
```rust
fn update(&mut self, dt: instant::Duration) {
@ -470,7 +470,7 @@ fn main() {
}
```
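In case you're wondering where `dt` comes from, one way to produce it is to track the time of the last frame in `run()` (a sketch; the variable names are my own):

```rust
let mut last_render_time = instant::Instant::now();
event_loop.run(move |event, _, control_flow| match event {
    Event::RedrawRequested(_) => {
        let now = instant::Instant::now();
        let dt = now - last_render_time;
        last_render_time = now;
        state.update(dt);
        // ... then render as before ...
    }
    // ... other events set *control_flow as before ...
    _ => {}
});
```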
With that we should be able to move our camera wherever we want.
With that, we should be able to move our camera wherever we want.
![./screenshot.png](./screenshot.png)

@ -16,7 +16,7 @@ Once 0.13 comes out I'll revert to using the version published on crates.io.
</div>
The main selling point of Vulkan, DirectX 12, Metal, and by extension Wgpu is that these APIs is that they designed from the ground up to be thread safe. Up to this point we have been doing everything on a single thread. That's about to change.
The main selling point of Vulkan, DirectX 12, Metal, and by extension Wgpu, is that these APIs are designed from the ground up to be thread-safe. Up to this point, we have been doing everything on a single thread. That's about to change.
<div class="note">
@ -28,7 +28,7 @@ We won't go over multithreading rendering as we don't have enough different type
## Parallelizing loading models and textures
Currently we load the materials and meshes of our model one at a time. This is a perfect opportunity for multithreading! All our changes will be in `model.rs`. Let's first start with the materials. We'll convert the regular for loop into a `par_iter().map()`.
Currently, we load the materials and meshes of our model one at a time. This is a perfect opportunity for multithreading! All our changes will be in `model.rs`. Let's first start with the materials. We'll convert the regular for loop into a `par_iter().map()`.
```rust
// resources.rs
@ -72,7 +72,7 @@ impl Model {
}
```
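The general shape of that change looks roughly like this (a sketch using rayon; `load_material` is a hypothetical stand-in for the texture and bind group loading, which stays the same):

```rust
use rayon::prelude::*;

// Materials no longer load one at a time; each one becomes its own rayon task.
let materials = obj_materials
    .par_iter()
    .map(|mat| load_material(device, queue, mat, layout))
    .collect::<anyhow::Result<Vec<Material>>>()?;
```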
Next we can update the meshes to be loaded in parallel.
Next, we can update the meshes to be loaded in parallel.
```rust
impl Model {
@ -145,7 +145,7 @@ Elapsed (Original): 309.596382ms
Elapsed (Threaded): 199.645027ms
```
We're not loading that many resources, so the speed up is minimal. We'll be doing more stuff with threading, but this is a good introduction.
We're not loading that many resources, so the speedup is minimal. We'll be doing more stuff with threading, but this is a good introduction.
<WasmExample example="tutorial12_camera"></WasmExample>

@ -1,6 +1,6 @@
# Update to 0.12!
There's not a ton of changes in this release, so the migration
There are not a ton of changes in this release, so the migration
wasn't too painful.
## Multi view added
@ -12,7 +12,7 @@ as render attachments.
## No more block attribute
The WGSL spec has changed and the `block` attribute is no longer a thing.
This means that structs in WGSL no longer need to be anotated to be used
This means that structs in WGSL no longer need to be annotated to be used
as uniform input. For example:
```wgsl
@ -74,4 +74,4 @@ imports and uses (ie. `anyhow::Result`). This was mostly an issue on my
build scripts for some of the showcase examples.
The main tutorial examples weren't affected, and the changes are minor, so
if your curious feel free to look at the repo.
if you're curious, feel free to look at the repo.

@ -2,7 +2,7 @@
## Pong working on the web
This took a little while to figure out. I ended up using wasm-pack to create the wasm as I was having trouble with getting wasm-bindgen to work. I figured it out eventually but decided to keep using wasm-pack as I felt that the work flow would be more friendly to readers.
This took a little while to figure out. I ended up using wasm-pack to create the wasm as I was having trouble with getting wasm-bindgen to work. I figured it out eventually but decided to keep using wasm-pack as I felt that the workflow would be more friendly to readers.
I would have released this sooner, but I wanted to add support for touch so that people on their phones could play the game. It appears that winit doesn't record touch events for WASM, so I shelved that idea.
@ -26,7 +26,7 @@ self.queue.submit(iter::once(encoder.finish()));
output.present();
```
There a good deal of internal changes such as WebGL support (which I really need to cover). You can check out more on wgpu's [changelog](https://github.com/gfx-rs/wgpu/blob/master/CHANGELOG.md#wgpu-011-2021-10-07).
There are quite a few internal changes, such as WebGL support (which I really need to cover). You can check out more on wgpu's [changelog](https://github.com/gfx-rs/wgpu/blob/master/CHANGELOG.md#wgpu-011-2021-10-07).
## Pong is fixed for 0.10
@ -67,11 +67,11 @@ let view = output
.create_view(&wgpu::TextureViewDescriptor::default());
```
The Pong and imgui examples are broken again. I may remove the imgui example as the corresponding crate already has examples on how to use it. I'm also considering reworking the Pong example, but I may end up just updating it.
The Pong and imgui examples are broken again. I may remove the imgui example as the corresponding crate already has examples of how to use it. I'm also considering reworking the Pong example, but I may end up just updating it.
## Pong and imgui demos are fixed!
The `imgui_wgpu` and `wgpu_glyph` crates have been updated to `wgpu` 0.8 so I was able to fixed the demos! They both still use GLSL, and I don't think I'll be changing that for now. I may switch them over to `naga` at some point.
The `imgui_wgpu` and `wgpu_glyph` crates have been updated to `wgpu` 0.8 so I was able to fix the demos! They both still use GLSL, and I don't think I'll be changing that for now. I may switch them over to `naga` at some point.
## 0.8 and WGSL
@ -87,7 +87,7 @@ Since I needed to make a bunch of changes to the code base to make the glsl, and
### Some of the showcase examples are broken
The `wgpu_glyph`, and `imgui-wgpu` crates currently depend on `wgpu` 0.7, which is causing the `pong` and `imgui-demo` to not compile. I decided to excluded them from the workspace until the underlying crates update to using `wgpu` 0.8. (Feel free to submit a issue or even PR when that happens!)
The `wgpu_glyph` and `imgui-wgpu` crates currently depend on `wgpu` 0.7, which is causing the `pong` and `imgui-demo` to not compile. I decided to exclude them from the workspace until the underlying crates update to using `wgpu` 0.8. (Feel free to submit an issue or even a PR when that happens!)
### Various API changes
@ -105,11 +105,11 @@ The `wgpu_glyph`, and `imgui-wgpu` crates currently depend on `wgpu` 0.7, which
## 0.7
There were a lot of changes particularly to the `RenderPipelineDescriptor`. Most other things have not changed. You can check out the [0.9 PR](https://github.com/sotrh/learn-wgpu/pull/140) for the full details.
There were a lot of changes, particularly to the `RenderPipelineDescriptor`. Most other things have not changed. You can check out the [0.9 PR](https://github.com/sotrh/learn-wgpu/pull/140) for the full details.
## November 2020 Cleanup, Content Freeze, and Patreon
School is starting to ramp up, so I haven't had as much time to work on the site as I would like to. Because of that there were some issues piling up. I decided to tackle a bunch of them in one go. Here's a snapshot of what I did:
School is starting to ramp up, so I haven't had as much time to work on the site as I would like to. Because of that, there were some issues piling up. I decided to tackle a bunch of them in one go. Here's a snapshot of what I did:
* The tutorial now handles `SurfaceError` properly
* I'm now using bytemuck's derive feature on all buffer data structs.
@ -119,7 +119,7 @@ School is starting to ramp up, so I haven't had as much time to work on the site
* I made a [compute pipeline showcase](../showcase/compute) that computes the tangent and bitangent for each vertex in a model.
* I made a [imgui showcase](../showcase/imgui-demo). It's very basic, but it should be a good starting point.
Now in the headline I mentioned a "Content Freeze". Wgpu is still a moving target. The migration from `0.4` to `0.5` was lot of work. The same goes for `0.5` to `0.6`. I'm expected the next migration to be just as much work. As such, I won't be added much content until the API becomes a bit more stable. That being said, I still plan on resolving any issues with the content.
Now in the headline, I mentioned a "Content Freeze". Wgpu is still a moving target. The migration from `0.4` to `0.5` was a lot of work. The same goes for `0.5` to `0.6`. I expect the next migration to be just as much work. As such, I won't be adding much content until the API becomes a bit more stable. That being said, I still plan on resolving any issues with the content.
One more thing. This is actually quite awkward for me (especially since I'll be slowing down development), but I've started a [patreon](https://www.patreon.com/sotrh). My job doesn't give me a ton of hours, so things are a bit tight. You are by no means obligated to donate, but I would appreciate it.
@ -127,7 +127,7 @@ You can find out more about contributing to this project on the [introduction pa
## 0.6
This took me way too long. The changes weren't difficult, but I had to do a lot of copy pasting. The main changes are using `queue.write_buffer()` and `queue.write_texture()` everywhere. I won't get into the nitty gritty, but you can checkout the [pull request](https://github.com/sotrh/learn-wgpu/pull/90) if you're interested.
This took me way too long. The changes weren't difficult, but I had to do a lot of copy pasting. The main changes are using `queue.write_buffer()` and `queue.write_texture()` everywhere. I won't get into the nitty gritty, but you can check out the [pull request](https://github.com/sotrh/learn-wgpu/pull/90) if you're interested.
## Added Pong Showcase
@ -153,11 +153,11 @@ The [lighting tutorial](/intermediate/tutorial10-lighting/) was not up to par, s
## Updated texture tutorials
Up to this point, we created textures manually everytime. I've pulled out the texture creation code into a new `texture.rs` file and included it every tutorial from the [textures tutorial](/beginner/tutorial5-textures/#cleaning-things-up) onward.
Up to this point, we created textures manually every time. I've pulled out the texture creation code into a new `texture.rs` file and included it in every tutorial from the [textures tutorial](/beginner/tutorial5-textures/#cleaning-things-up) onward.
## Fixed panics do to not specifying the correct `usage`
## Fixed panics due to not specifying the correct `usage`
Wgpu has become more strict about what `BufferUsages`s and `TextureUsages`s are required when performing certain operations. For example int the [Wgpu without a window example](/intermediate/windowless/), the `texture_desc` only specified the usage to by `COPY_SRC`. This caused a crash when the `texture` was used as a render target. Adding `OUTPUT_ATTACHMENT` fixed the issue.
Wgpu has become more strict about what `BufferUsages`s and `TextureUsages`s are required when performing certain operations. For example, in the [Wgpu without a window example](/intermediate/windowless/), the `texture_desc` only specified the usage to be `COPY_SRC`. This caused a crash when the `texture` was used as a render target. Adding `OUTPUT_ATTACHMENT` fixed the issue.
## Updating Winit from 0.20.0-alpha5 to 0.20
@ -168,7 +168,7 @@ There were a lot of small changes to how the dpi stuff works. You can see all th
* `State::size` is now `PhysicalSize<u32>` instead of the pre 0.20 `LogicalSize`.
* `EventsCleared` is now `MainEventsCleared`.
I may have missed a change, but I made sure that all the examples compile an run, so if you have trouble with your code you can use them as a reference.
I may have missed a change, but I made sure that all the examples compile and run, so if you have trouble with your code you can use them as a reference.
## Changed tutorial examples to use a src directory
@ -196,4 +196,4 @@ I don't know if this is a change from 0.4, but you use `wgpu = "0.4"` line in de
## New/Recent Articles
<RecentArticles/>
<RecentArticles/>

@ -1,3 +1,3 @@
# Foreword
The articles in this section are not meant to be tutorials. They are showcases of the various things you can do with `wgpu`. I won't go over specifics of creating `wgpu` resources, as those will be covered elsewhere. The code for these examples is still available however, and will be accessible on Github.
The articles in this section are not meant to be tutorials. They are showcases of the various things you can do with `wgpu`. I won't go over the specifics of creating `wgpu` resources, as those will be covered elsewhere. The code for these examples is still available, however, and will be accessible on GitHub.

@ -2,32 +2,21 @@
<div class="warn">
This page is currently being reworked. I want to understand the topics a bit better, but
as 0.12 is out I want to release what I have for now.
This page is currently being reworked. I want to understand the topics a bit better, but as 0.12 is out, I want to release what I have for now.
</div>
## Alignment of vertex and index buffers
Vertex buffers require defining a `VertexBufferLayout`, so the memory alignment is whatever
you tell WebGPU it should be. This can be really convenient for keeping down memory usage
on the GPU.
Vertex buffers require defining a `VertexBufferLayout`, so the memory alignment is whatever you tell WebGPU it should be. This can be really convenient for keeping down memory usage on the GPU.
The Index Buffer use the alignment of whatever primitive type you specify via the `IndexFormat`
you pass into `RenderEncoder::set_index_buffer()`.
The Index Buffer uses the alignment of whatever primitive type you specify via the `IndexFormat` you pass into `RenderEncoder::set_index_buffer()`.
## Alignment of Uniform and Storage buffers
GPUs are designed to process thousands of pixels in parallel. In order to achieve this,
some sacrifices had to be made. Graphics hardware likes to have all the bytes you intend
on processing aligned by powers of 2. The exact specifics of why this is are beyond
my level of knowledge, but it's important to know so that you can trouble shoot why your
shaders aren't working.
GPUs are designed to process thousands of pixels in parallel. In order to achieve this, some sacrifices had to be made. Graphics hardware likes to have all the bytes you intend on processing aligned by powers of 2. The exact specifics of why this is are beyond my level of knowledge, but it's important to know so that you can troubleshoot why your shaders aren't working.
<!-- The The address of the position of an instance in memory has to a multiple of its alignment.
Normally alignment is the same as size. Exceptions are vec3, structs and arrays. A vec3
is padded to be a vec4 which means it behaves as if it was a vec4 just that the last entry
is not used. -->
<!-- The address of the position of an instance in memory has to be a multiple of its alignment. Normally alignment is the same as size. Exceptions are vec3, structs, and arrays. A vec3 is padded to be a vec4 which means it behaves as if it was a vec4 just that the last entry is not used. -->
Let's take a look at the following table:
@ -39,9 +28,7 @@ Let's take a look at the following table:
| vec3&lt;T&gt; | **16** | 12 |
| vec4&lt;T&gt; | 16 | 16 |
You can see for `vec3` the alignment is the next power of 2 from the size, 16. This can
catch beginners (and even veterans) off guard as it's not the most intuitive. This becomes especially
important when we start laying out structs. Take the light struct from the [lighting tutorial](../../intermediate/tutorial10-lighting/#seeing-the-light):
You can see for `vec3` the alignment is the next power of 2 from the size, 16. This can catch beginners (and even veterans) off guard as it's not the most intuitive. This becomes especially important when we start laying out structs. Take the light struct from the [lighting tutorial](../../intermediate/tutorial10-lighting/#seeing-the-light):
You can see the full table of the alignments in section [4.3.7.1 of the WGSL spec](https://www.w3.org/TR/WGSL/#alignment-and-size)
@ -52,10 +39,7 @@ struct Light {
};
```
So what's the alignment of this scruct? Your first guess would be that it's the sum of
the alignments of the individual fields. That might make sense if we were in Rust-land,
but in shader-land, it's a little more involved. The alignment for a given struct is given
by the following equation:
So what's the alignment of this struct? Your first guess would be that it's the sum of the alignments of the individual fields. That might make sense if we were in Rust-land, but in shader-land, it's a little more involved. The alignment for a given struct is given by the following equation:
```
// S is the struct in question
@ -63,8 +47,7 @@ by the following equation:
AlignOf(S) = max(AlignOfMember(S, M1), ... , AlignOfMember(S, Mn))
```
Basically the alignment of the struct is the maximum of the alignments of the members of
the struct. This means that:
Basically, the alignment of the struct is the maximum of the alignments of the members of the struct. This means that:
```
AlignOf(Light)
@ -73,13 +56,11 @@ AlignOf(Light)
= 16
```
This is why the `LightUniform` has those padding fields. WGPU won't accept it if the data
is not aligned correctly.
This is why the `LightUniform` has those padding fields. WGPU won't accept it if the data is not aligned correctly.
## How to deal with alignment issues
In general 16, is the max alignment you'll see. In that case you might think that we should
be able to do something like the following:
In general, 16 is the max alignment you'll see. In that case, you might think that we should be able to do something like the following:
```rust
#[repr(C, align(16))]
@ -90,9 +71,7 @@ struct LightUniform {
}
```
But this won't compile. The [bytemuck crate](https://docs.rs/bytemuck/) doesn't work with
structs with implicit padding bytes. Rust can't guarantee that the memory between the fields
has been initialized properly. This gave be an error when I tried it:
But this won't compile. The [bytemuck crate](https://docs.rs/bytemuck/) doesn't work with structs with implicit padding bytes. Rust can't guarantee that the memory between the fields has been initialized properly. This gave me an error when I tried it:
```
error[E0512]: cannot transmute between types of different sizes, or dependently-sized types
@ -107,4 +86,4 @@ error[E0512]: cannot transmute between types of different sizes, or dependently-
## Additional resources
If you're looking for more information check out the [right-up](https://gist.github.com/teoxoy/936891c16c2a3d1c3c5e7204ac6cd76c) by @teoxoy.
If you're looking for more information, check out the [write-up](https://gist.github.com/teoxoy/936891c16c2a3d1c3c5e7204ac6cd76c) by @teoxoy.

@ -1,6 +1,6 @@
# Compute Example: Tangents and Bitangents
This proved more difficult than I anticipated. The first problem I encountered was some vertex data corruption due to the shader reading my vertex data incorrectly. I was using my `ModelVertex` struct I used in the [normal mapping tutorial](/intermediate/tutorial11-normals/).
This proved more difficult than I anticipated. The first problem I encountered was some vertex data corruption due to the shader reading my vertex data incorrectly. I was using the `ModelVertex` struct I used in the [normal mapping tutorial](/intermediate/tutorial11-normals/).
```rust
#[repr(C)]
@ -26,11 +26,11 @@ struct ModelVertex {
};
```
At first glance, this seems just fine, but OpenGL experts would likely see a problem with the structure. Our fields aren't aligned properly to support the `std430` alignment that storage buffers require.. I won't get into detail but you can check out the [alignment showcase](../alignment) if you want to know more. To summarize, the `vec2` for the `tex_coords` was messing up the byte alignment, corrupting the vertex data resulting in the following:
At first glance, this seems just fine, but OpenGL experts would likely see a problem with the structure. Our fields aren't aligned properly to support the `std430` alignment that storage buffers require. I won't get into detail, but you can check out the [alignment showcase](../alignment) if you want to know more. To summarize, the `vec2` for the `tex_coords` was messing up the byte alignment, corrupting the vertex data and resulting in the following:
![./corruption.png](./corruption.png)
I could have fixed this by adding a padding field after `tex_coords` on the Rust side, but that would require modifying the `VertexBufferLayout`. I ended up solving this problem by using the components of the vectors directly and resulted with a struct like this:
I could have fixed this by adding a padding field after `tex_coords` on the Rust side, but that would require modifying the `VertexBufferLayout`. I ended up solving this problem by using the components of the vectors directly which resulted in a struct like this:
```glsl
struct ModelVertex {
@ -44,7 +44,7 @@ struct ModelVertex {
Since `std430` will use the alignment of the largest element of the struct, using all floats means the struct will be aligned to 4 bytes. This alignment matches what `ModelVertex` uses in Rust. This was kind of a pain to work with, but it fixed the corruption issue.
The second problem required me to rethink how I was computing the tangent and bitangent. The previous algorithm I was using only computed the tangent and bitangent for each triangle and set all the vertices in that triangle to use the same tangent and bitangent. While this is fine in a single threaded context, the code breaks down when trying to compute the triangles in parallel. The reason is that multiple triangles can share the same vertices. This means that when we go to save the resulting tangents, we inevitably end up trying to write to the same vertex from multiple different threads which is a big no no. You can see the issue with this method below:
The second problem required me to rethink how I was computing the tangent and bitangent. The previous algorithm I was using only computed the tangent and bitangent for each triangle and set all the vertices in that triangle to use the same tangent and bitangent. While this is fine in a single-threaded context, the code breaks down when trying to compute the triangles in parallel. The reason is that multiple triangles can share the same vertices. This means that when we go to save the resulting tangents, we inevitably end up trying to write to the same vertex from multiple different threads which is a big no no. You can see the issue with this method below:
![./black_triangles.png](./black_triangles.png)
@ -52,7 +52,7 @@ Those black triangles were the result of multiple GPU threads trying to modify t
![./render_doc_output.png](./render_doc_output.png)
While on the CPU we could introduce a synchronization primitive such as a `Mutex` to fix this issue, AFAIK there isn't really such a thing on the GPU. Instead I decided to swap my code to work with each vertex individually. There are some hurdles with that, but those will be easier to explain in code. Let's start with the `main` function.
While on the CPU we could introduce a synchronization primitive such as a `Mutex` to fix this issue, AFAIK there isn't really such a thing on the GPU. Instead, I decided to swap my code to work with each vertex individually. There are some hurdles with that, but those will be easier to explain in code. Let's start with the `main` function.
```glsl
void main() {
@ -62,7 +62,7 @@ void main() {
}
```
We use the `gl_GlobalInvocationID.x` to get the index of the vertex we want to compute the tangents for. I opted to put the actual calculation into it's own method. Let's take a look at that.
We use the `gl_GlobalInvocationID.x` to get the index of the vertex we want to compute the tangents for. I opted to put the actual calculation into its own method. Let's take a look at that.
```glsl
ModelVertex calcTangentBitangent(uint vertexIndex) {
@ -130,7 +130,7 @@ ModelVertex calcTangentBitangent(uint vertexIndex) {
## Possible Improvements
Looping over every triangle for every vertex is likely raising some red flags for some of you. In a single threaded context, this algorithm would end up being O(N*M). As we are utilizing the high number of threads available to our GPU, this is less of an issue, but it still means our GPU is burning more cycles than it needs to.
Looping over every triangle for every vertex is likely raising some red flags for some of you. In a single-threaded context, this algorithm would end up being O(N*M). As we are utilizing the high number of threads available to our GPU, this is less of an issue, but it still means our GPU is burning more cycles than it needs to.
One way I came up with to possibly improve performance is to store the index of each triangle in a hash-map-like structure, with the vertex index as the key. Here's some pseudocode:
@ -154,7 +154,7 @@ for (i, (_v, t_list)) in triangle_map.iter().enumerate() {
}
```
I ultimately decided against this method as it was more complicated, and I haven't had time to benchmark it to see if it's faster that the simple method.
I ultimately decided against this method as it was more complicated, and I haven't had time to benchmark it to see if it's faster than the simple method.
## Results
@ -162,4 +162,4 @@ The tangents and bitangents are now getting calculated correctly and on the GPU!
![./results.png](./results.png)
<AutoGithubLink/>
<AutoGithubLink/>

@ -1,6 +1,6 @@
# Creating gifs
# Creating gifs
Sometimes you've created a nice simulation/animation, and you want to show it off. While you can record a video, that might be a bit overkill to break out your video recording if you just want something to post on twitter. That's where what [GIF](https://en.wikipedia.org/wiki/GIF)s are for.
Sometimes you've created a nice simulation/animation, and you want to show it off. While you can record a video, breaking out your video recording software might be overkill if you just want something to post on Twitter. That's what [GIF](https://en.wikipedia.org/wiki/GIF)s are for.
Also, GIF is pronounced GHIF, not JIF, as JIF is not only [peanut butter](https://en.wikipedia.org/wiki/Jif_%28peanut_butter%29), it is also a [different image format](https://filext.com/file-extension/JIF).
@ -25,7 +25,7 @@ fn save_gif(path: &str, frames: &mut Vec<Vec<u8>>, speed: i32, size: u16) -> Res
```
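For context, a `save_gif` along these lines can be written with the [gif crate](https://docs.rs/gif). The sketch below is an assumption about how it might look rather than the showcase's exact code (in particular, the error type is my choice):

```rust
use gif::{Encoder, Frame, Repeat};
use std::fs::File;

fn save_gif(
    path: &str,
    frames: &mut Vec<Vec<u8>>,
    speed: i32,
    size: u16,
) -> Result<(), Box<dyn std::error::Error>> {
    let mut image = File::create(path)?;
    // The empty slice means "no global palette"; each frame carries its own.
    let mut encoder = Encoder::new(&mut image, size, size, &[])?;
    encoder.set_repeat(Repeat::Infinite)?;

    for frame in frames.iter_mut() {
        // `from_rgba_speed` quantizes the RGBA data into a palette for us.
        encoder.write_frame(&Frame::from_rgba_speed(size, size, frame, speed))?;
    }

    Ok(())
}
```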
<!-- image-rs doesn't currently support looping, so I switched to gif -->
<!-- A GIF is a type of image, and fortunately the [image crate](https://docs.rs/image/) supports GIFs natively. It's pretty simple to use. -->
<!-- A GIF is a type of image, and fortunately, the [image crate](https://docs.rs/image/) supports GIFs natively. It's pretty simple to use. -->
<!-- ```rust
fn save_gif(path: &str, frames: &mut Vec<Vec<u8>>, speed: i32, size: u16) -> Result<(), failure::Error> {
@ -45,7 +45,7 @@ All we need to use this code is the frames of the GIF, how fast it should run, a
## How do we make the frames?
If you checked out the [windowless showcase](../windowless/#a-triangle-without-a-window), you'll know that we render directly to a `wgpu::Texture`. We'll create a texture to render to and a buffer the copy the output to.
If you checked out the [windowless showcase](../windowless/#a-triangle-without-a-window), you'll know that we render directly to a `wgpu::Texture`. We'll create a texture to render to and a buffer to copy the output to.
```rust
// create a texture to render to
@ -87,7 +87,7 @@ let buffer_desc = wgpu::BufferDescriptor {
let output_buffer = device.create_buffer(&buffer_desc);
```
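If you haven't seen the windowless showcase, the elided texture and buffer setup looks roughly like the following. This is a sketch assuming a square `texture_size` and a wgpu version around 0.12/0.13; field names can differ slightly between releases:

```rust
let texture_size = 256u32; // assumed dimensions for this sketch

let texture_desc = wgpu::TextureDescriptor {
    label: None,
    size: wgpu::Extent3d {
        width: texture_size,
        height: texture_size,
        depth_or_array_layers: 1,
    },
    mip_level_count: 1,
    sample_count: 1,
    dimension: wgpu::TextureDimension::D2,
    format: wgpu::TextureFormat::Rgba8UnormSrgb,
    // Render into the texture, then copy the pixels back out of it.
    usage: wgpu::TextureUsages::RENDER_ATTACHMENT | wgpu::TextureUsages::COPY_SRC,
};
let texture = device.create_texture(&texture_desc);

// 4 bytes per RGBA pixel.
let u32_size = std::mem::size_of::<u32>() as u32;
let buffer_desc = wgpu::BufferDescriptor {
    label: None,
    size: (u32_size * texture_size * texture_size) as wgpu::BufferAddress,
    usage: wgpu::BufferUsages::COPY_DST | wgpu::BufferUsages::MAP_READ,
    mapped_at_creation: false,
};
let output_buffer = device.create_buffer(&buffer_desc);
```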
With that we can render a frame, and then copy that frame to a `Vec<u8>`.
With that, we can render a frame, and then copy that frame to a `Vec<u8>`.
```rust
let mut frames = Vec::new();

@ -2,15 +2,15 @@
<div class="warning">
This example is currently broken. It got behind when I was migrating the tutorial to 0.8 as the imgui_wgpu crate was still on 0.7 at the time. I haven't updated it since. While the fixing it wouldn't be too hard (feel free to send a PR), I'm considering removing this example entirely.
This example is currently broken. It got behind when I was migrating the tutorial to 0.8 as the imgui_wgpu crate was still on 0.7 at the time. I haven't updated it since. While fixing it wouldn't be too hard (feel free to send a PR), I'm considering removing this example entirely.
This tutorial is focused how to use wgpu (and by extension the WebGPU standard). I'm looking to minimize the amount of wgpu-adjacent crates that I'm using. They can get in the way of keeping this tutorial as current as possible, and often a crate I'm using will have a different version of wgpu (or winit as is the case as of writing) preventing me from continuing with migration. Beyond dependency conflicts, I'd like to cover some of the topics that some of the existing crates implement such as text and guis.
This tutorial is focused on how to use wgpu (and by extension the WebGPU standard). I'm looking to minimize the number of wgpu-adjacent crates that I'm using. They can get in the way of keeping this tutorial as current as possible, and often a crate I'm using will have a different version of wgpu (or winit, as is the case as of writing), preventing me from continuing with the migration. Beyond dependency conflicts, I'd like to cover some of the topics that some of the existing crates implement, such as text and GUIs.
For the 0.10 migration I'll keep this example in and keep the showcase code excluded.
For the 0.10 migration, I'll keep this example in and keep the showcase code excluded.
</div>
This is not an in depth guid on how to use Imgui. But here are some of the basics you'll need to get started. We'll need to import [imgui-rs](https://docs.rs/imgui), [imgui-wgpu](https://docs.rs/imgui-wgpu), and [imgui-winit-support](https://docs.rs/imgui-winit-support).
This is not an in-depth guide on how to use Imgui. But here are some of the basics you'll need to get started. We'll need to import [imgui-rs](https://docs.rs/imgui), [imgui-wgpu](https://docs.rs/imgui-wgpu), and [imgui-winit-support](https://docs.rs/imgui-winit-support).
```toml
imgui = "0.7"
@ -20,11 +20,11 @@ imgui-winit-support = "0.7"
<div class="note">
I've excluded some dependencies for brevity. I'm also using the [framework crate](https://github.com/sotrh/learn-wgpu/tree/master/code/showcase/framework) I've created for showcases to simplify setup. If you see a `display` variable in code, it's from the `framework`. `Display` is where the the `device`, `queue`, `swap_chain`, and other basic wgpu objects are stored.
I've excluded some dependencies for brevity. I'm also using the [framework crate](https://github.com/sotrh/learn-wgpu/tree/master/code/showcase/framework) I've created for showcases to simplify setup. If you see a `display` variable in code, it's from the `framework`. `Display` is where the `device`, `queue`, `swap_chain`, and other basic wgpu objects are stored.
</div>
We need to setup imgui and a `WinitPlatform` to get started. Do this after creating you're `winit::Window`.
We need to set up imgui and a `WinitPlatform` to get started. Do this after creating your `winit::Window`.
```rust
let mut imgui = imgui::Context::create();
@ -37,7 +37,7 @@ platform.attach_window(
imgui.set_ini_filename(None);
```
Now we need to configure the default font. We'll using the window's scale factor to keep things from being too big or small.
Now we need to configure the default font. We'll be using the window's scale factor to keep things from being too big or small.
```rust
let hidpi_factor = display.window.scale_factor();
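// The rest of the font setup is elided in this diff. As a rough sketch (my
// assumption based on imgui-rs 0.7, not necessarily the tutorial's exact code),
// it might look something like this:
let font_size = (13.0 * hidpi_factor) as f32;
imgui.io_mut().font_global_scale = (1.0 / hidpi_factor) as f32;
imgui.fonts().add_font(&[imgui::FontSource::DefaultFontData {
    config: Some(imgui::FontConfig {
        size_pixels: font_size,
        ..Default::default()
    }),
}]);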
@ -144,4 +144,4 @@ That's all there is to it. Here's a picture of the results!
![./screenshot.png](./screenshot.png)
<AutoGithubLink/>
<AutoGithubLink/>

@ -2,13 +2,13 @@
![](./pong.png)
Practically the "Hello World!" of games. Pong has been remade thousands of times. I know Pong. You know Pong. We all know Pong. That being said, this time I wanted to put a little more effort than most people do. This showcase has a basic menu system, sounds, and different game states.
Practically the "Hello World!" of games. Pong has been remade thousands of times. I know Pong. You know Pong. We all know Pong. That being said, this time I wanted to put in a little more effort than most people do. This showcase has a basic menu system, sounds, and different game states.
The architecture is not the best as I subscribed to the "get things done" mentality. If I were to redo this project, I'd change a lot of things. Regardless, let's get into the postmortem.
## The Architecture
I was messing around with separating state from the render code. It ended up similar to an entity component system.
I was messing around with separating state from the render code. It ended up similar to an entity-component system.
I had a `State` class with all of the objects in the scene. This included the ball and the paddles, as well as the text for the scores and even the menu. `State` also included a `game_state` field of type `GameState`.
@ -23,7 +23,7 @@ pub enum GameState {
}
```
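The enum's variants are elided above, but based on the states referenced throughout this post it presumably looked something like this (a reconstruction, not necessarily the exact code):

```rust
#[derive(Debug, Copy, Clone, PartialEq, Eq)]
pub enum GameState {
    MainMenu,
    Serving,
    Playing,
    GameOver,
    Quiting, // spelling as used elsewhere in this post
}
```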
The `State` class didn't have any methods on it as I was taking a more data oriented approach. Instead I created a `System` trait, and created multiple structs that implemented it.
The `State` class didn't have any methods on it as I was taking a more data-oriented approach. Instead, I created a `System` trait and created multiple structs that implemented it.
```rust
pub trait System {
@ -38,7 +38,7 @@ pub trait System {
}
```
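Judging from how the systems are invoked below, the `System` trait likely exposed a single method along these lines (the `Input` and `Event` types here are assumptions on my part):

```rust
pub trait System {
    fn update_state(
        &mut self,
        input: &Input,
        state: &mut State,
        events: &mut Vec<Event>,
    );
}
```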
The systems would be in charge of controlling updating the different objects state (position, visibility, etc), as well as updating the `game_state` field. I created all the systems on startup, and used a `match` on `game_state` to determine which ones should be allow to run (the `visiblity_system` always runs as it is always needed).
The systems would be in charge of updating the different objects' states (position, visibility, etc.), as well as updating the `game_state` field. I created all the systems on startup and used a `match` on `game_state` to determine which ones should be allowed to run (the `visiblity_system` always runs as it is always needed).
```rust
visiblity_system.update_state(&input, &mut state, &mut events);
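// A hypothetical sketch of the `match` described above; the actual code and
// system names may differ:
match state.game_state {
    GameState::MainMenu => menu_system.update_state(&input, &mut state, &mut events),
    GameState::Serving => {
        serving_system.update_state(&input, &mut state, &mut events);
        play_system.update_state(&input, &mut state, &mut events);
    }
    GameState::Playing => {
        play_system.update_state(&input, &mut state, &mut events);
        ball_system.update_state(&input, &mut state, &mut events);
    }
    GameState::GameOver => game_over_system.update_state(&input, &mut state, &mut events),
    GameState::Quiting => {}
}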
@ -79,19 +79,19 @@ It's definitely not the cleanest code, but it works.
I ended up having 6 systems in total.
1. I added the `VisibilitySystem` near the end of development. Up to that point, all the systems had to set the `visible` field of the objects. That was a pain, and cluttered the logic. Instead I decided to create the `VisiblitySystem` to handle that.
1. I added the `VisibilitySystem` near the end of development. Up to that point, all the systems had to set the `visible` field of the objects. That was a pain and cluttered the logic. Instead, I decided to create the `VisiblitySystem` to handle that.
2. The `MenuSystem` controlled which text was focused and what would happen when the user pressed the enter key. If the `Play` button was focused, pressing enter would change `game_state` to `GameState::Serving`, which would start the game. The `Quit` button would shift to `GameState::Quiting`.
3. The `ServingSystem` sets the balls position to `(0.0, 0.0)`, updates the score texts, and shifts into `GameState::Playing` after a timer.
3. The `ServingSystem` sets the ball's position to `(0.0, 0.0)`, updates the score texts, and shifts into `GameState::Playing` after a timer.
4. The `PlaySystem` controls the players. It allows them to move, and keeps them from leaving the play space. This system runs on both `GameState::Playing` as well as `GameState::Serving`. I did this to allow the players to reposition themselves before the serve. The `PlaySystem` also will shift into `GameState::GameOver` when on of the players scores is greater than 2.
4. The `PlaySystem` controls the players. It allows them to move and keeps them from leaving the play space. This system runs on both `GameState::Playing` and `GameState::Serving`. I did this to allow the players to reposition themselves before the serve. The `PlaySystem` will also shift into `GameState::GameOver` when one of the players' scores is greater than 2.
5. The `BallSystem` system controls the balls movement as well as its bouncing of walls/players. It also updates the score and shifts to `GameState::Serving` when the ball goes off the side of the screen.
5. The `BallSystem` controls the ball's movement as well as its bouncing off walls/players. It also updates the score and shifts to `GameState::Serving` when the ball goes off the side of the screen.
6. The `GameOver` system updates the `win_text` and shifts to `GameState::MainMenu` after a delay.
I found the system approach to quite nice to work with. My implementation wasn't the best, but I would like working with it again. I might even implement my own ECS.
I found the system approach quite nice to work with. My implementation wasn't the best, but I would like to work with it again. I might even implement my own ECS.
## Input
@ -155,9 +155,9 @@ This works really well. I simply pass this struct into the `update_state` method
## Render
I used [wgpu_glyph](https://docs.rs/wgpu_glyph) for the text, and white quads for the ball and paddles. There's not much to say here, it's Pong after all.
I used [wgpu_glyph](https://docs.rs/wgpu_glyph) for the text and white quads for the ball and paddles. There's not much to say here; it's Pong, after all.
I did mess around with batching however. It was totally overkill for this project, but it was a good learning experience. Here's the code if you're interested.
I did mess around with batching, however. It was totally overkill for this project, but it was a good learning experience. Here's the code if you're interested.
```rust
pub struct QuadBufferBuilder {
@ -256,7 +256,7 @@ I was going to have `BallBounce` play a positioned sound using a `SpatialSink`,
## WASM Support
This example works on the web, but their are a few steps that I needed to take to make things work. The first one was that I needed to switch to using a `lib.rs` instead of just `main.rs`. I opted to use [wasm-pack](https://rustwasm.github.io/wasm-pack/) to create the web assembly. I could have kept the old format by using wasm-bindgen directly, but I ran into issues with using the wrong version of wasm-bindgen, so I elected to stick with wasm-pack.
This example works on the web, but there are a few steps that I needed to take to make things work. The first one was that I needed to switch to using a `lib.rs` instead of just `main.rs`. I opted to use [wasm-pack](https://rustwasm.github.io/wasm-pack/) to create the WebAssembly. I could have kept the old format by using wasm-bindgen directly, but I ran into issues with using the wrong version of wasm-bindgen, so I elected to stick with wasm-pack.
In order for wasm-pack to work properly, I first needed to add some dependencies:
@ -294,15 +294,15 @@ wgpu = { version = "0.12", features = ["spirv", "webgl"]}
I'll highlight a few of these:
- rand: If you want to use rand on the web, you need to include getrandom directly and enable it's `js` feature.
- rodio: I had to disable all of the features for the WASM build, and then enabled them separately. The `mp3` feature specifically wasn't working for me. There might have been a work around, but since I'm not using mp3 in this example I just elected to only use wav.
- instant: This crate is basically just a wrapper around `std::time::Instant`. In a normal build it's just a type alias. In web builds it uses the browsers time functions.
- cfg-if: This is a convient crate for making platform specific code less horrible to write.
- rand: If you want to use rand on the web, you need to include getrandom directly and enable its `js` feature.
- rodio: I had to disable all of the features for the WASM build and then enable the ones I needed separately. The `mp3` feature specifically wasn't working for me. There might have been a workaround, but since I'm not using mp3 in this example, I just elected to only use wav.
- instant: This crate is basically just a wrapper around `std::time::Instant`. In a normal build, it's just a type alias. In web builds it uses the browser's time functions.
- cfg-if: This is a convenient crate for making platform-specific code less horrible to write.
- env_logger and console_log: env_logger doesn't work on WebAssembly, so we need to use a different logger. console_log is the one used in the WebAssembly tutorials, so I went with that one.
- wasm-bindgen: This crate is the glue that makes Rust code work on the web. If you are building using the wasm-bindgen command you need to make sure that the command version of wasm-bindgen matches the version in Cargo.toml **exactly** otherwise you'll have problems. If you use wasm-pack it will download the appopriate wasm-bindgen binary to use for your crate.
- web-sys: This is has functions and types that allow you to use different methods available in js such as "getElementById()".
- wasm-bindgen: This crate is the glue that makes Rust code work on the web. If you are building using the wasm-bindgen command, you need to make sure that the command's version of wasm-bindgen matches the version in Cargo.toml **exactly**, otherwise you'll have problems. If you use wasm-pack, it will download the appropriate wasm-bindgen binary to use for your crate.
- web-sys: This has functions and types that allow you to use different methods available in JS, such as `getElementById()`.
Now that that's out of the way lets talk about some code. First we need to create a function that will start our event loop.
Now that that's out of the way let's talk about some code. First, we need to create a function that will start our event loop.
```rust
#[cfg(target_arch="wasm32")]
@ -329,7 +329,7 @@ cfg_if::cfg_if! {
This code should run before you try to do anything significant. It sets up the logger based on the architecture you're building for. Most architectures will use `env_logger`. The `wasm32` architecture will use `console_log`. It's also important that we tell Rust to forward panics to JavaScript. If we didn't do this, we would have no idea when our Rust code panics.
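The body of that `cfg_if!` typically looks something like the sketch below. Note that `console_error_panic_hook` is an assumption on my part for the panic-forwarding step; the showcase may handle it differently:

```rust
cfg_if::cfg_if! {
    if #[cfg(target_arch = "wasm32")] {
        // Forward Rust panics to the browser console so we can actually see them.
        std::panic::set_hook(Box::new(console_error_panic_hook::hook));
        console_log::init_with_level(log::Level::Warn)
            .expect("Couldn't initialize logger");
    } else {
        env_logger::init();
    }
}
```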
Next we create a window. Much of it is like we've done before, but since we are supporting fullscreen we need to do some extra steps.
Next, we create a window. Much of it is like we've done before, but since we are supporting fullscreen, we need to take some extra steps.
```rust
let event_loop = EventLoop::new();
@ -350,7 +350,7 @@ if window.fullscreen().is_none() {
}
```
We then have to do some web specific stuff if we are on that platform.
We then have to do some web-specific stuff if we are on that platform.
```rust
#[cfg(target_arch = "wasm32")]
@ -379,7 +379,7 @@ Everything else works the same.
## Summary
A fun project to work on. It was overly architected, and kinda hard to make changes, but a good experience none the less.
A fun project to work on. It was overly architected, and kinda hard to make changes, but a good experience nonetheless.
Try the code down below! (Controls currently require a keyboard.)

@ -1,6 +1,6 @@
# Wgpu without a window
Sometimes we just want to leverage the gpu. Maybe we want to crunch a large set of numbers in parallel. Maybe we're working on a 3D movie, and need to create a realistic looking scene with path tracing. Maybe we're mining a cryptocurrency. In all these situations, we don't necessarily *need* to see what's going on.
Sometimes we just want to leverage the GPU. Maybe we want to crunch a large set of numbers in parallel. Maybe we're working on a 3D movie and need to create a realistic-looking scene with path tracing. Maybe we're mining a cryptocurrency. In all these situations, we don't necessarily *need* to see what's going on.
## So what do we need to do?
@ -48,7 +48,7 @@ let texture_view = texture.create_view(&Default::default());
We're using `TextureUsages::RENDER_ATTACHMENT` so wgpu can render to our texture. The `TextureUsages::COPY_SRC` is so we can pull data out of the texture and save it to a file.
While we can use this texture to draw our triangle, we need some way to get at the pixels inside it. Back in the [texture tutorial](/beginner/tutorial5-textures/) we used a buffer load color data from a file that we then copied into our buffer. Now we are going to do the reverse: copy data into a buffer from our texture to save into a file. We'll need a buffer big enough for our data.
While we can use this texture to draw our triangle, we need some way to get at the pixels inside it. Back in the [texture tutorial](/beginner/tutorial5-textures/) we used a buffer to load color data from a file that we then copied into our buffer. Now we are going to do the reverse: copy data into a buffer from our texture to save into a file. We'll need a buffer big enough for our data.
```rust
// we need to store this for later
@ -240,7 +240,7 @@ queue.submit(Some(encoder.finish()));
## Getting data out of a buffer
In order to get the data out of the buffer we need to first map it, then we can get a `BufferView` that we can treat like a `&[u8]`.
In order to get the data out of the buffer, we need to first map it, then we can get a `BufferView` that we can treat like a `&[u8]`.
```rust
// We need to scope the mapping variables so that we can
@ -266,7 +266,7 @@ output_buffer.unmap();
## Main is not asyncable
The `main()` method can't return a future, so we can't use the `async` keyword. We'll get around this by putting our code into a different function so that we can block on it in `main()`. You'll need to use a crate that can poll futures such as the [pollster crate](https://docs.rs/pollster).
The `main()` method can't return a future, so we can't use the `async` keyword. We'll get around this by putting our code into a different function so that we can block on it in `main()`. You'll need to use a crate that can poll futures, such as the [pollster crate](https://docs.rs/pollster).
```rust
async fn run() {
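    // ... (the rendering and buffer readback code goes here — elided in this diff) ...
}

// A minimal sketch of the blocking part (my assumption, not necessarily the
// tutorial's exact code): drive the async run() from a synchronous main()
// using pollster.
fn main() {
    pollster::block_on(run());
}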
