From 2275f68247cb2c297b18d7a6adb8979c495749bb Mon Sep 17 00:00:00 2001 From: dis_da_moe Date: Mon, 18 Apr 2022 08:48:38 +0300 Subject: [PATCH] Typos and grammar for rest of docs --- .../tutorial10-lighting/README.md | 64 +++++++++---------- .../intermediate/tutorial11-normals/README.md | 32 +++++----- docs/intermediate/tutorial12-camera/README.md | 14 ++-- .../tutorial13-threading/README.md | 8 +-- docs/news/0.12/readme.md | 6 +- docs/news/pre-0.12/readme.md | 28 ++++---- docs/showcase/README.md | 2 +- docs/showcase/alignment/README.md | 45 ++++--------- docs/showcase/compute/README.md | 18 +++--- docs/showcase/gifs/README.md | 10 +-- docs/showcase/imgui-demo/README.md | 16 ++--- docs/showcase/pong/README.md | 44 ++++++------- docs/showcase/windowless/README.md | 8 +-- 13 files changed, 137 insertions(+), 158 deletions(-) diff --git a/docs/intermediate/tutorial10-lighting/README.md b/docs/intermediate/tutorial10-lighting/README.md index d204c5bb..0e9f0da8 100644 --- a/docs/intermediate/tutorial10-lighting/README.md +++ b/docs/intermediate/tutorial10-lighting/README.md @@ -18,7 +18,7 @@ Once 0.13 comes out I'll revert to using the version published on crates.io. While we can tell that our scene is 3d because of our camera, it still feels very flat. That's because our model stays the same color regardless of how it's oriented. If we want to change that we need to add lighting to our scene. -In the real world, a light source emits photons which bounce around until they enter into our eyes. The color we see is the light's original color minus whatever energy it lost while it was bouncing around. +In the real world, a light source emits photons that bounce around until they enter our eyes. The color we see is the light's original color minus whatever energy it lost while it was bouncing around. In the computer graphics world, modeling individual photons would be hilariously computationally expensive. A single 100 Watt light bulb emits about 3.27 x 10^20 photons *per second*. Just imagine that for the sun! To get around this, we're gonna use math to cheat. @@ -26,11 +26,11 @@ Let's discuss a few options. ## Ray/Path Tracing -This is an *advanced* topic, and we won't be covering it in depth here. It's the closest model to the way light really works so I felt I had to mention it. Check out the [ray tracing tutorial](../../todo/) if you want to learn more. +This is an *advanced* topic, and we won't be covering it in-depth here. It's the closest model to the way light really works so I felt I had to mention it. Check out the [ray tracing tutorial](../../todo/) if you want to learn more. ## The Blinn-Phong Model -Ray/path tracing is often too computationally expensive for most realtime applications (though that is starting to change), so a more efficient, if less accurate method based on the [Phong reflection model](https://en.wikipedia.org/wiki/Phong_shading) is often used. It splits up the lighting calculation into three (3) parts: ambient lighting, diffuse lighting, and specular lighting. We're going to be learning the [Blinn-Phong model](https://en.wikipedia.org/wiki/Blinn%E2%80%93Phong_reflection_model), which cheats a bit at the specular calculation to speed things up. +Ray/path tracing is often too computationally expensive for most real-time applications (though that is starting to change), so a more efficient, if less accurate method based on the [Phong reflection model](https://en.wikipedia.org/wiki/Phong_shading) is often used. 
It splits up the lighting calculation into three (3) parts: ambient lighting, diffuse lighting, and specular lighting. We're going to be learning the [Blinn-Phong model](https://en.wikipedia.org/wiki/Blinn%E2%80%93Phong_reflection_model), which cheats a bit at the specular calculation to speed things up. Before we can get into that though, we need to add a light to our scene. @@ -54,17 +54,17 @@ Our `LightUniform` represents a colored point in space. We're just going to use
The rule of thumb for alignment with WGSL structs is field alignments are
always powers of 2. For example, a `vec3` may only have 3 float fields giving
it a size of 12, the alignment will be bumped up to the next power of 2 being
16. This means that you have to be more careful with how you lay out your struct
-in Rust.
+ in Rust.

Some developers choose to use `vec4`s instead of `vec3`s to avoid alignment
issues. You can learn more about the alignment rules in the [wgsl spec](https://www.w3.org/TR/WGSL/#alignment-and-size)
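As a rough sketch of that rule (field names here are illustrative), a light uniform on the Rust side ends up needing explicit padding so that each `vec3` field starts on a 16-byte boundary:

```rust
// Each vec3 on the WGSL side occupies 16 bytes because of the alignment rules,
// so the Rust-side struct mirrors that with explicit padding fields.
#[repr(C)]
#[derive(Debug, Copy, Clone, bytemuck::Pod, bytemuck::Zeroable)]
struct LightUniform {
    position: [f32; 3],
    _padding: u32,  // brings `position` up to 16 bytes
    color: [f32; 3],
    _padding2: u32, // brings `color` up to 16 bytes
}
```

Keeping the padding explicit, rather than relying on an alignment attribute, also keeps the struct free of implicit padding bytes, which the `bytemuck` derives require.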
-We're going to create another buffer to store our light in. +We're going to create another buffer to store our light in. ```rust let light_uniform = LightUniform { @@ -85,7 +85,7 @@ let light_buffer = device.create_buffer_init( ``` -Don't forget to add the `light_uniform` and `light_buffer` to `State`. After that we need to create a bind group layout and bind group for our light. +Don't forget to add the `light_uniform` and `light_buffer` to `State`. After that, we need to create a bind group layout and bind group for our light. ```rust let light_bind_group_layout = @@ -125,7 +125,7 @@ let render_pipeline_layout = device.create_pipeline_layout(&wgpu::PipelineLayout }); ``` -Let's also update the lights position in the `update()` method, so we can see what our objects look like from different angles. +Let's also update the light's position in the `update()` method, so we can see what our objects look like from different angles. ```rust // Update the light @@ -141,7 +141,7 @@ This will have the light rotate around the origin one degree every frame. ## Seeing the light -For debugging purposes, it would be nice if we could see where the light is to make sure that the scene looks correct. We could adapt our existing render pipeline to draw the light, but it will likely get in the way. Instead we are going to extract our render pipeline creation code into a new function called `create_render_pipeline()`. +For debugging purposes, it would be nice if we could see where the light is to make sure that the scene looks correct. We could adapt our existing render pipeline to draw the light, but it will likely get in the way. Instead, we are going to extract our render pipeline creation code into a new function called `create_render_pipeline()`. ```rust @@ -339,7 +339,7 @@ let light_render_pipeline = { I chose to create a separate layout for the `light_render_pipeline`, as it doesn't need all the resources that the regular `render_pipeline` needs (main just the textures). -With that in place we need to write the actual shaders. +With that in place, we need to write the actual shaders. ```wgsl // light.wgsl @@ -469,7 +469,7 @@ where } ``` -Finally we want to add Light rendering to our render passes. +Finally, we want to add Light rendering to our render passes. ```rust impl State { @@ -496,13 +496,13 @@ impl State { } ``` -With all that we'll end up with something like this. +With all that, we'll end up with something like this. ![./light-in-scene.png](./light-in-scene.png) ## Ambient Lighting -Light has a tendency to bounce around before entering our eyes. That's why you can see in areas that are in shadow. Actually modeling this interaction is computationally expensive, so we cheat. We define an ambient lighting value that stands in for the light bouncing of other parts of the scene to light our objects. +Light has a tendency to bounce around before entering our eyes. That's why you can see in areas that are in shadow. Actually modeling this interaction is computationally expensive, so we cheat. We define an ambient lighting value that stands for the light bouncing off other parts of the scene to light our objects. The ambient part is based on the light color as well as the object color. We've already added our `light_bind_group`, so we just need to use it in our shader. In `shader.wgsl`, add the following below the texture uniforms. @@ -532,7 +532,7 @@ fn fs_main(in: VertexOutput) -> [[location(0)]] vec4 { } ``` -With that we should get something like the this. 
+With that, we should get something like this. ![./ambient_lighting.png](./ambient_lighting.png) @@ -542,7 +542,7 @@ Remember the normal vectors that were included with our model? We're finally goi ![./normal_diagram.png](./normal_diagram.png) -If the dot product of the normal and light vector is 1.0, that means that the current fragment is directly inline with the light source and will receive the lights full intensity. A value of 0.0 or lower means that the surface is perpendicular or facing away from the light, and therefore will be dark. +If the dot product of the normal and light vector is 1.0, that means that the current fragment is directly in line with the light source and will receive the light's full intensity. A value of 0.0 or lower means that the surface is perpendicular or facing away from the light, and therefore will be dark. We're going to need to pull in the normal vector into our `shader.wgsl`. @@ -565,7 +565,7 @@ struct VertexOutput { }; ``` -For now let's just pass the normal directly as is. This is wrong, but we'll fix it later. +For now, let's just pass the normal directly as-is. This is wrong, but we'll fix it later. ```wgsl [[stage(vertex)]] @@ -589,7 +589,7 @@ fn vs_main( } ``` -With that we can do the actual calculation. Below the `ambient_color` calculation, but above `result`, add the following. +With that, we can do the actual calculation. Below the `ambient_color` calculation, but above `result`, add the following. ```wgsl let light_dir = normalize(light.position - in.world_position); @@ -604,7 +604,7 @@ Now we can include the `diffuse_color` in the `result`. let result = (ambient_color + diffuse_color) * object_color.xyz; ``` -With that we get something like this. +With that, we get something like this. ![./ambient_diffuse_wrong.png](./ambient_diffuse_wrong.png) @@ -633,11 +633,11 @@ This is clearly wrong as the light is illuminating the wrong side of the cube. T ![./normal_not_rotated.png](./normal_not_rotated.png) -We need to use the model matrix to transform the normals to be in the right direction. We only want the rotation data though. A normal represents a direction, and should be a unit vector throughout the calculation. We can get our normals into the right direction using what is called a normal matrix. +We need to use the model matrix to transform the normals to be in the right direction. We only want the rotation data though. A normal represents a direction and should be a unit vector throughout the calculation. We can get our normals in the right direction using what is called a normal matrix. -We could compute the normal matrix in the vertex shader, but that would involve inverting the `model_matrix`, and WGSL doesn't actually have an inverse function. We would have to code our own. On top of that computing the inverse of a matrix is actually really expensive, especially doing that compututation for every vertex. +We could compute the normal matrix in the vertex shader, but that would involve inverting the `model_matrix`, and WGSL doesn't actually have an inverse function. We would have to code our own. On top of that computing, the inverse of a matrix is actually really expensive, especially doing that computation for every vertex. -Instead we're going to add a `normal` matrix field to `InstanceRaw`. Instead of inverting the model matrix, we'll just be using the instance's rotation to create a `Matrix3`. +Instead, we're going to add a `normal` matrix field to `InstanceRaw`. 
Instead of inverting the model matrix, we'll just be using the instance's rotation to create a `Matrix3`.
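A rough sketch of what that could look like when building the per-instance data, assuming `cgmath` and instance types shaped like the ones used earlier in the tutorial:

```rust
use cgmath::{Matrix3, Matrix4, Quaternion, Vector3};

struct Instance {
    position: Vector3<f32>,
    rotation: Quaternion<f32>,
}

#[repr(C)]
#[derive(Copy, Clone)]
struct InstanceRaw {
    model: [[f32; 4]; 4],
    // New field: the normal matrix, stored as a 3x3 matrix.
    normal: [[f32; 3]; 3],
}

impl Instance {
    fn to_raw(&self) -> InstanceRaw {
        InstanceRaw {
            model: (Matrix4::from_translation(self.position) * Matrix4::from(self.rotation)).into(),
            // A rotation is already orthonormal, so converting it straight to a
            // Matrix3 gives a valid normal matrix without computing any inverse.
            normal: Matrix3::from(self.rotation).into(),
        }
    }
}
```

The shader then only needs to multiply the vertex normal by this 3x3 matrix instead of the full model matrix.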
@@ -781,13 +781,13 @@ fn vs_main(

-I'm currently doing things in [world space](https://gamedev.stackexchange.com/questions/65783/what-are-world-space-and-eye-space-in-game-development). Doing things in view-space also known as eye-space, is more standard as objects can have lighting issues when they are further away from the origin. If we wanted to use view-space, we would have include the rotation due to the view matrix as well. We'd also have to transform our light's position using something like `view_matrix * model_matrix * light_position` to keep the calculation from getting messed up when the camera moves.
+I'm currently doing things in [world space](https://gamedev.stackexchange.com/questions/65783/what-are-world-space-and-eye-space-in-game-development). Doing things in view-space, also known as eye-space, is more standard, as objects can have lighting issues when they are further away from the origin. If we wanted to use view-space, we would have to include the rotation due to the view matrix as well. We'd also have to transform our light's position using something like `view_matrix * model_matrix * light_position` to keep the calculation from getting messed up when the camera moves.

-There are advantages to using view space. The main one is when you have massive worlds doing lighting and other calculations in model spacing can cause issues as floating point precision degrades when numbers get really large. View space keeps the camera at the origin meaning all calculations will be using smaller numbers. The actual lighting math ends up the same, but it does require a bit more setup.
+There are advantages to using view space. The main one is that when you have massive worlds, doing lighting and other calculations in model space can cause issues, as floating-point precision degrades when numbers get really large. View space keeps the camera at the origin, meaning all calculations will be using smaller numbers. The actual lighting math ends up the same, but it does require a bit more setup.
-With that change our lighting now looks correct. +With that change, our lighting now looks correct. ![./diffuse_right.png](./diffuse_right.png) @@ -803,7 +803,7 @@ If you can guarantee that your model matrix will always apply uniform scaling to out.world_normal = (model_matrix * vec4(model.normal, 0.0)).xyz; ``` -This works by exploiting the fact that multiplying a 4x4 matrix by a vector with 0 in the w component, only the rotation and scaling will be applied to the vector. You'll need to normalize this vector though as normals need to be unit length for the calculations to work. +This works by exploiting the fact that by multiplying a 4x4 matrix by a vector with 0 in the w component, only the rotation and scaling will be applied to the vector. You'll need to normalize this vector though as normals need to be unit length for the calculations to work. The scaling factor *needs* to be uniform in order for this to work. If it's not the resulting normal will be skewed as you can see in the following image. @@ -813,7 +813,7 @@ The scaling factor *needs* to be uniform in order for this to work. If it's not ## Specular Lighting -Specular lighting describes the highlights that appear on objects when viewed from certain angles. If you've ever looked at a car, it's the super bright parts. Basically, some of the light can reflect of the surface like a mirror. The location of the hightlight shifts depending on what angle you view it at. +Specular lighting describes the highlights that appear on objects when viewed from certain angles. If you've ever looked at a car, it's the super bright parts. Basically, some of the light can reflect off the surface like a mirror. The location of the highlight shifts depending on what angle you view it at. ![./specular_diagram.png](./specular_diagram.png) @@ -861,7 +861,7 @@ impl CameraUniform { } ``` -Since we want to use our uniforms in the fragment shader now, we need to change it's visibility. +Since we want to use our uniforms in the fragment shader now, we need to change its visibility. ```rust // lib.rs @@ -894,23 +894,23 @@ let specular_strength = pow(max(dot(view_dir, reflect_dir), 0.0), 32.0); let specular_color = specular_strength * light.color; ``` -Finally we add that to the result. +Finally, we add that to the result. ```wgsl let result = (ambient_color + diffuse_color + specular_color) * object_color.xyz; ``` -With that you should have something like this. +With that, you should have something like this. ![./ambient_diffuse_specular_lighting.png](./ambient_diffuse_specular_lighting.png) -If we just look at the `specular_color` on it's own we get this. +If we just look at the `specular_color` on its own we get this. ![./specular_lighting.png](./specular_lighting.png) ## The half direction -Up to this point we've actually only implemented the Phong part of Blinn-Phong. The Phong reflection model works well, but it can break down under [certain circumstances](https://learnopengl.com/Advanced-Lighting/Advanced-Lighting). The Blinn part of Blinn-Phong comes from the realization that if you add the `view_dir`, and `light_dir` together, normalize the result and use the dot product of that and the `normal`, you get roughly the same results without the issues that using `reflect_dir` had. +Up to this point, we've actually only implemented the Phong part of Blinn-Phong. The Phong reflection model works well, but it can break down under [certain circumstances](https://learnopengl.com/Advanced-Lighting/Advanced-Lighting). 
The Blinn part of Blinn-Phong comes from the realization that if you add the `view_dir`, and `light_dir` together, normalize the result and use the dot product of that and the `normal`, you get roughly the same results without the issues that using `reflect_dir` had. ```wgsl let view_dir = normalize(camera.view_pos.xyz - in.world_position); @@ -919,7 +919,7 @@ let half_dir = normalize(view_dir + light_dir); let specular_strength = pow(max(dot(in.world_normal, half_dir), 0.0), 32.0); ``` -It's hard to tell the difference, but here's the results. +It's hard to tell the difference, but here are the results. ![./half_dir.png](./half_dir.png) diff --git a/docs/intermediate/tutorial11-normals/README.md b/docs/intermediate/tutorial11-normals/README.md index cbd72062..a2826c2c 100644 --- a/docs/intermediate/tutorial11-normals/README.md +++ b/docs/intermediate/tutorial11-normals/README.md @@ -16,7 +16,7 @@ Once 0.13 comes out I'll revert to using the version published on crates.io.
-With just lighting, our scene is already looking pretty good. Still, our models are still overly smooth. This is understandable because we are using a very simple model. If we were using a texture that was supposed to be smooth, this wouldn't be a problem, but our brick texture is supposed to be rougher. We could solve this by adding more geometry, but that would slow our scene down, and it be would hard to know where to add new polygons. This is were normal mapping comes in. +With just lighting, our scene is already looking pretty good. Still, our models are still overly smooth. This is understandable because we are using a very simple model. If we were using a texture that was supposed to be smooth, this wouldn't be a problem, but our brick texture is supposed to be rougher. We could solve this by adding more geometry, but that would slow our scene down, and it be would hard to know where to add new polygons. This is where normal mapping comes in. Remember in [the instancing tutorial](/beginner/tutorial7-instancing/#a-different-way-textures), we experimented with storing instance data in a texture? A normal map is doing just that with normal data! We'll use the normals in the normal map in our lighting calculation in addition to the vertex normal. @@ -65,7 +65,7 @@ let texture_bind_group_layout = device.create_bind_group_layout(&wgpu::BindGroup }); ``` -We'll need to actually load the normal map. We'll do this in the loop we create the materials in. +We'll need to actually load the normal map. We'll do this in the loop where we create the materials. ```rust let diffuse_path = mat.diffuse_texture; @@ -78,7 +78,7 @@ We'll need to actually load the normal map. We'll do this in the loop we create * Note: I duplicated and moved the `command_buffers.push(cmds);` line. This means we can reuse the `cmds` variable for both the normal map and diffuse/color map. -Our `Material`'s `bind_group` will have to change as well. +Our `Material`'s `bind_group` will have to change as well. ```rust let bind_group = device.create_bind_group(&wgpu::BindGroupDescriptor { @@ -109,7 +109,7 @@ materials.push(Material { }); ``` -Now we can add use the texture in the fragment shader. +Now we can use the texture in the fragment shader. ```wgsl // Fragment shader @@ -159,11 +159,11 @@ Parts of the scene are dark when they should be lit up, and vice versa. ## Tangent Space to World Space -I mentioned it briefly in the [lighting tutorial](/intermediate/tutorial10-lighting/#the-normal-matrix), that we were doing our lighting calculation in "world space". This meant that the entire scene was oriented with respect to the *world's* coordinate system. When we pull the normal data from our normal texture, all the normals are in what's known as pointing roughly in the positive z direction. That means that our lighting calculation thinks all of the surfaces of our models are facing in roughly the same direction. This is referred to as `tangent space`. +I mentioned briefly in the [lighting tutorial](/intermediate/tutorial10-lighting/#the-normal-matrix), that we were doing our lighting calculation in "world space". This meant that the entire scene was oriented with respect to the *world's* coordinate system. When we pull the normal data from our normal texture, all the normals are in what's known as pointing roughly in the positive z direction. That means that our lighting calculation thinks all of the surfaces of our models are facing in roughly the same direction. This is referred to as `tangent space`. 
If we remember the [lighting-tutorial](/intermediate/tutorial10-lighting/#), we used the vertex normal to indicate the direction of the surface. It turns out we can use that to transform our normals from `tangent space` into `world space`. In order to do that we need to draw from the depths of linear algebra. -We can create a matrix that represents a coordinate system using 3 vectors that are perpendicular (or orthonormal) to each other. Basically we define the x, y, and z axes of our coordinate system. +We can create a matrix that represents a coordinate system using 3 vectors that are perpendicular (or orthonormal) to each other. Basically, we define the x, y, and z axes of our coordinate system. ```wgsl let coordinate_system = mat3x3( @@ -173,17 +173,17 @@ let coordinate_system = mat3x3( ); ``` -We're going to create a matrix that will represent the coordinate space relative to our vertex normals. We're then going to use that to transform our normal map data to be in world space. +We're going to create a matrix that will represent the coordinate space relative to our vertex normals. We're then going to use that to transform our normal map data to be in world space. ## The tangent, and the bitangent -We have one of the 3 vectors we need, the normal. What about the others? These are the tangent, and bitangent vectors. A tangent represents any vector that is parallel with a surface (aka. doesn't intersect with it). The tangent is always perpendicular to the normal vector. The bitangent is a tangent vector that is perpendicular to the other tangent vector. Together the tangent, bitangent, and normal represent the x, y, and z axes respectively. +We have one of the 3 vectors we need, the normal. What about the others? These are the tangent and bitangent vectors. A tangent represents any vector that is parallel with a surface (aka. doesn't intersect with it). The tangent is always perpendicular to the normal vector. The bitangent is a tangent vector that is perpendicular to the other tangent vector. Together the tangent, bitangent, and normal represent the x, y, and z axes respectively. -Some model formats include the tanget and bitangent (sometimes called the binormal) in the vertex data, but OBJ does not. We'll have to calculate them manually. Luckily we can derive our tangent, and bitangent from our existing vertex data. Take a look at the following diagram. +Some model formats include the tanget and bitangent (sometimes called the binormal) in the vertex data, but OBJ does not. We'll have to calculate them manually. Luckily we can derive our tangent and bitangent from our existing vertex data. Take a look at the following diagram. ![](./tangent_space.png) -Basically we can use the edges of our triangles, and our normal to calculate the tangent and bitangent. But first, we need to update our `ModelVertex` struct in `model.rs`. +Basically, we can use the edges of our triangles, and our normal to calculate the tangent and bitangent. But first, we need to update our `ModelVertex` struct in `model.rs`. ```rust #[repr(C)] @@ -227,7 +227,7 @@ impl Vertex for ModelVertex { } ``` -Now we can calculate the new tangent, and bitangent vectors. +Now we can calculate the new tangent and bitangent vectors. ```rust impl Model { @@ -337,7 +337,7 @@ impl Model { ## World Space to Tangent Space -Since the normal map by default is in tangent space, we need to transform all the other variables used in that calculation to tangent space as well. We'll need to construct the tangent matrix in the vertex shader. 
First we need our `VertexInput` to include the tangent and bitangents we calculated earlier. +Since the normal map by default is in tangent space, we need to transform all the other variables used in that calculation to tangent space as well. We'll need to construct the tangent matrix in the vertex shader. First, we need our `VertexInput` to include the tangent and bitangents we calculated earlier. ```wgsl struct VertexInput { @@ -349,7 +349,7 @@ struct VertexInput { }; ``` -Next we'll construct the `tangent_matrix` and then transform the vertex, light and view position into tangent space. +Next, we'll construct the `tangent_matrix` and then transform the vertex's light and view position into tangent space. ```wgsl struct VertexOutput { @@ -395,7 +395,7 @@ fn vs_main( } ``` -Finally we'll update the fragment shader to use these transformed lighting values. +Finally, we'll update the fragment shader to use these transformed lighting values. ```wgsl [[stage(fragment)]] @@ -419,7 +419,7 @@ We get the following from this calculation. We've been using `Rgba8UnormSrgb` for all our textures. The `Srgb` bit specifies that we will be using [standard red green blue color space](https://en.wikipedia.org/wiki/SRGB). This is also known as linear color space. Linear color space has less color density. Even so, it is often used for diffuse textures, as they are typically made in `Srgb` color space. -Normal textures aren't made with `Srgb`. Using `Rgba8UnormSrgb` can changes how the GPU samples the texture. This can make the resulting simulation [less accurate](https://medium.com/@bgolus/generating-perfect-normal-maps-for-unity-f929e673fc57#b86c). We can avoid these issues by using `Rgba8Unorm` when we create the texture. Let's add an `is_normal_map` method to our `Texture` struct. +Normal textures aren't made with `Srgb`. Using `Rgba8UnormSrgb` can change how the GPU samples the texture. This can make the resulting simulation [less accurate](https://medium.com/@bgolus/generating-perfect-normal-maps-for-unity-f929e673fc57#b86c). We can avoid these issues by using `Rgba8Unorm` when we create the texture. Let's add an `is_normal_map` method to our `Texture` struct. ```rust pub fn from_image( @@ -599,7 +599,7 @@ where } ``` -I found a cobblestone texture with matching normal map, and created a `debug_material` for that. +I found a cobblestone texture with a matching normal map and created a `debug_material` for that. ```rust // main.rs diff --git a/docs/intermediate/tutorial12-camera/README.md b/docs/intermediate/tutorial12-camera/README.md index efe55889..6b500b3e 100644 --- a/docs/intermediate/tutorial12-camera/README.md +++ b/docs/intermediate/tutorial12-camera/README.md @@ -18,7 +18,7 @@ Once 0.13 comes out I'll revert to using the version published on crates.io. I've been putting this off for a while. Implementing a camera isn't specifically related to using WGPU properly, but it's been bugging me so let's do it. -`main.rs` is getting a little crowded, so let's create a `camera.rs` file to put our camera code. The first thing we're going to put in it in is some imports and our `OPENGL_TO_WGPU_MATRIX`. +`main.rs` is getting a little crowded, so let's create a `camera.rs` file to put our camera code. The first things we're going to put in it are some imports and our `OPENGL_TO_WGPU_MATRIX`. ```rust use cgmath::*; @@ -50,7 +50,7 @@ instant = "0.1" ## The Camera -Next we need create a new `Camera` struct. 
We're going to be using a FPS style camera, so we'll store the position and the yaw (horizontal rotation), and pitch (vertical rotation). We'll have a `calc_matrix` method to create our view matrix. +Next, we need to create a new `Camera` struct. We're going to be using an FPS-style camera, so we'll store the position and the yaw (horizontal rotation), and pitch (vertical rotation). We'll have a `calc_matrix` method to create our view matrix. ```rust #[derive(Debug)] @@ -129,9 +129,9 @@ impl Projection { } ``` -On thing to note: `cgmath` currently returns a right-handed projection matrix from the `perspective` function. This means that the z-axis points out of the screen. If you want the z-axis to be *into* the screen (aka. a left-handed projection matrix), you'll have to code your own. +One thing to note: `cgmath` currently returns a right-handed projection matrix from the `perspective` function. This means that the z-axis points out of the screen. If you want the z-axis to be *into* the screen (aka. a left-handed projection matrix), you'll have to code your own. -You can tell the difference between a right-handed coordinate system and a left-handed one by using your hands. Point your thumb to the right. This is the x-axis. Point your pointer finger up. This is the y-axis. Extend your middle finger. This is the z-axis. On your right hand your middle finger should be pointing towards you. On your left hand it should be pointing away. +You can tell the difference between a right-handed coordinate system and a left-handed one by using your hands. Point your thumb to the right. This is the x-axis. Point your pointer finger up. This is the y-axis. Extend your middle finger. This is the z-axis. On your right hand, your middle finger should be pointing towards you. On your left hand, it should be pointing away. ![./left_right_hand.gif](./left_right_hand.gif) @@ -343,7 +343,7 @@ fn resize(&mut self, new_size: winit::dpi::PhysicalSize) { } ``` -`input()` will need to be updated as well. Up to this point we have been using `WindowEvent`s for our camera controls. While this works, it's not the best solution. The [winit docs](https://docs.rs/winit/0.24.0/winit/event/enum.WindowEvent.html?search=#variant.CursorMoved) inform us that OS will often transform the data for the `CursorMoved` event to allow effects such as cursor acceleration. +`input()` will need to be updated as well. Up to this point, we have been using `WindowEvent`s for our camera controls. While this works, it's not the best solution. The [winit docs](https://docs.rs/winit/0.24.0/winit/event/enum.WindowEvent.html?search=#variant.CursorMoved) inform us that OS will often transform the data for the `CursorMoved` event to allow effects such as cursor acceleration. Now to fix this we could change the `input()` function to process `DeviceEvent` instead of `WindowEvent`, but keyboard and button presses don't get emitted as `DeviceEvent`s on MacOS and WASM. Instead, we'll just remove the `CursorMoved` check in `input()`, and a manual call to `camera_controller.process_mouse()` in the `run()` function. @@ -425,7 +425,7 @@ fn main() { } ``` -The `update` function requires a bit more explanation. The `update_camera` function on the `CameraController` has a parameter `dt: Duration` which is the delta time or time between frames. This is to help smooth out the camera movement so that it's not locked be the framerate. Currently we aren't calculating `dt`, so I decided to pass it into `update` as a parameter. 
+The `update` function requires a bit more explanation. The `update_camera` function on the `CameraController` has a parameter `dt: Duration` which is the delta time or time between frames. This is to help smooth out the camera movement so that it's not locked by the framerate. Currently, we aren't calculating `dt`, so I decided to pass it into `update` as a parameter. ```rust fn update(&mut self, dt: instant::Duration) { @@ -470,7 +470,7 @@ fn main() { } ``` -With that we should be able to move our camera wherever we want. +With that, we should be able to move our camera wherever we want. ![./screenshot.png](./screenshot.png) diff --git a/docs/intermediate/tutorial13-threading/README.md b/docs/intermediate/tutorial13-threading/README.md index 6c57c1a6..97486bee 100644 --- a/docs/intermediate/tutorial13-threading/README.md +++ b/docs/intermediate/tutorial13-threading/README.md @@ -16,7 +16,7 @@ Once 0.13 comes out I'll revert to using the version published on crates.io. -The main selling point of Vulkan, DirectX 12, Metal, and by extension Wgpu is that these APIs is that they designed from the ground up to be thread safe. Up to this point we have been doing everything on a single thread. That's about to change. +The main selling point of Vulkan, DirectX 12, Metal, and by extension Wgpu is that these APIs is that they designed from the ground up to be thread-safe. Up to this point, we have been doing everything on a single thread. That's about to change.
@@ -28,7 +28,7 @@ We won't go over multithreading rendering as we don't have enough different type ## Parallelizing loading models and textures -Currently we load the materials and meshes of our model one at a time. This is a perfect opportunity for multithreading! All our changes will be in `model.rs`. Let's first start with the materials. We'll convert the regular for loop into a `par_iter().map()`. +Currently, we load the materials and meshes of our model one at a time. This is a perfect opportunity for multithreading! All our changes will be in `model.rs`. Let's first start with the materials. We'll convert the regular for loop into a `par_iter().map()`. ```rust // resources.rs @@ -72,7 +72,7 @@ impl Model { } ``` -Next we can update the meshes to be loaded in parallel. +Next, we can update the meshes to be loaded in parallel. ```rust impl Model { @@ -145,7 +145,7 @@ Elapsed (Original): 309.596382ms Elapsed (Threaded): 199.645027ms ``` -We're not loading that many resources, so the speed up is minimal. We'll be doing more stuff with threading, but this is a good introduction. +We're not loading that many resources, so the speedup is minimal. We'll be doing more stuff with threading, but this is a good introduction. diff --git a/docs/news/0.12/readme.md b/docs/news/0.12/readme.md index ec17f79d..cd83afe3 100644 --- a/docs/news/0.12/readme.md +++ b/docs/news/0.12/readme.md @@ -1,6 +1,6 @@ # Update to 0.12! -There's not a ton of changes in this release, so the migration +There are not a ton of changes in this release, so the migration wasn't too painful. ## Multi view added @@ -12,7 +12,7 @@ as render attachments. ## No more block attribute The WGSL spec has changed and the `block` attribute is no longer a thing. -This means that structs in WGSL no longer need to be anotated to be used +This means that structs in WGSL no longer need to be annotated to be used as uniform input. For example: ```wgsl @@ -74,4 +74,4 @@ imports and uses (ie. `anyhow::Result`). This was mostly an issue on my build scripts for some of the showcase examples. The main tutorial examples weren't affected, and the changes are minor, so -if your curious feel free to look at the repo. \ No newline at end of file +if you're curious feel free to look at the repo. \ No newline at end of file diff --git a/docs/news/pre-0.12/readme.md b/docs/news/pre-0.12/readme.md index bcc5d275..93d41ff8 100644 --- a/docs/news/pre-0.12/readme.md +++ b/docs/news/pre-0.12/readme.md @@ -2,7 +2,7 @@ ## Pong working on the web -This took a little while to figure out. I ended up using wasm-pack to create the wasm as I was having trouble with getting wasm-bindgen to work. I figured it out eventually but decided to keep using wasm-pack as I felt that the work flow would be more friendly to readers. +This took a little while to figure out. I ended up using wasm-pack to create the wasm as I was having trouble with getting wasm-bindgen to work. I figured it out eventually but decided to keep using wasm-pack as I felt that the workflow would be more friendly to readers. I would have released this sooner, but I wanted to add support for touch so that people on their phones could play the game. It appears that winit doesn't record touch events for WASM, so I shelved that idea. @@ -26,7 +26,7 @@ self.queue.submit(iter::once(encoder.finish())); output.present(); ``` -There a good deal of internal changes such as WebGL support (which I really need to cover). 
You can check out more on wgpu's [changelog](https://github.com/gfx-rs/wgpu/blob/master/CHANGELOG.md#wgpu-011-2021-10-07). +There are a good deal of internal changes such as WebGL support (which I really need to cover). You can check out more on wgpu's [changelog](https://github.com/gfx-rs/wgpu/blob/master/CHANGELOG.md#wgpu-011-2021-10-07). ## Pong is fixed for 0.10 @@ -67,11 +67,11 @@ let view = output .create_view(&wgpu::TextureViewDescriptor::default()); ``` -The Pong and imgui examples are broken again. I may remove the imgui example as the corresponding crate already has examples on how to use it. I'm also considering reworking the Pong example, but I may end up just updating it. +The Pong and imgui examples are broken again. I may remove the imgui example as the corresponding crate already has examples of how to use it. I'm also considering reworking the Pong example, but I may end up just updating it. ## Pong and imgui demos are fixed! -The `imgui_wgpu` and `wgpu_glyph` crates have been updated to `wgpu` 0.8 so I was able to fixed the demos! They both still use GLSL, and I don't think I'll be changing that for now. I may switch them over to `naga` at some point. +The `imgui_wgpu` and `wgpu_glyph` crates have been updated to `wgpu` 0.8 so I was able to fix the demos! They both still use GLSL, and I don't think I'll be changing that for now. I may switch them over to `naga` at some point. ## 0.8 and WGSL @@ -87,7 +87,7 @@ Since I needed to make a bunch of changes to the code base to make the glsl, and ### Some of the showcase examples are broken -The `wgpu_glyph`, and `imgui-wgpu` crates currently depend on `wgpu` 0.7, which is causing the `pong` and `imgui-demo` to not compile. I decided to excluded them from the workspace until the underlying crates update to using `wgpu` 0.8. (Feel free to submit a issue or even PR when that happens!) +The `wgpu_glyph`, and `imgui-wgpu` crates currently depend on `wgpu` 0.7, which is causing the `pong` and `imgui-demo` to not compile. I decided to exclude them from the workspace until the underlying crates update to using `wgpu` 0.8. (Feel free to submit a issue or even PR when that happens!) ### Various API changes @@ -105,11 +105,11 @@ The `wgpu_glyph`, and `imgui-wgpu` crates currently depend on `wgpu` 0.7, which ## 0.7 -There were a lot of changes particularly to the `RenderPipelineDescriptor`. Most other things have not changed. You can check out the [0.9 PR](https://github.com/sotrh/learn-wgpu/pull/140) for the full details. +There were a lot of changes, particularly to the `RenderPipelineDescriptor`. Most other things have not changed. You can check out the [0.9 PR](https://github.com/sotrh/learn-wgpu/pull/140) for the full details. ## November 2020 Cleanup, Content Freeze, and Patreon -School is starting to ramp up, so I haven't had as much time to work on the site as I would like to. Because of that there were some issues piling up. I decided to tackle a bunch of them in one go. Here's a snapshot of what I did: +School is starting to ramp up, so I haven't had as much time to work on the site as I would like to. Because of that, there were some issues piling up. I decided to tackle a bunch of them in one go. Here's a snapshot of what I did: * The tutorial now handles `SurfaceError` properly * I'm now using bytemuck's derive feature on all buffer data structs. 
@@ -119,7 +119,7 @@ School is starting to ramp up, so I haven't had as much time to work on the site * I made a [compute pipeline showcase](../showcase/compute) that computes the tangent and bitangent for each vertex in a model. * I made a [imgui showcase](../showcase/imgui-demo). It's very basic, but it should be a good starting point. -Now in the headline I mentioned a "Content Freeze". Wgpu is still a moving target. The migration from `0.4` to `0.5` was lot of work. The same goes for `0.5` to `0.6`. I'm expected the next migration to be just as much work. As such, I won't be added much content until the API becomes a bit more stable. That being said, I still plan on resolving any issues with the content. +Now in the headline, I mentioned a "Content Freeze". Wgpu is still a moving target. The migration from `0.4` to `0.5` was a lot of work. The same goes for `0.5` to `0.6`. I expect the next migration to be just as much work. As such, I won't be adding much content until the API becomes a bit more stable. That being said, I still plan on resolving any issues with the content. One more thing. This is actually quite awkward for me (especially since I'll be slowing down development), but I've started a [patreon](https://www.patreon.com/sotrh). My job doesn't give me a ton of hours, so things are a bit tight. You are by no means obligated to donate, but I would appreciate it. @@ -127,7 +127,7 @@ You can find out more about contributing to this project on the [introduction pa ## 0.6 -This took me way too long. The changes weren't difficult, but I had to do a lot of copy pasting. The main changes are using `queue.write_buffer()` and `queue.write_texture()` everywhere. I won't get into the nitty gritty, but you can checkout the [pull request](https://github.com/sotrh/learn-wgpu/pull/90) if you're interested. +This took me way too long. The changes weren't difficult, but I had to do a lot of copy pasting. The main changes are using `queue.write_buffer()` and `queue.write_texture()` everywhere. I won't get into the nitty gritty, but you can check out the [pull request](https://github.com/sotrh/learn-wgpu/pull/90) if you're interested. ## Added Pong Showcase @@ -153,11 +153,11 @@ The [lighting tutorial](/intermediate/tutorial10-lighting/) was not up to par, s ## Updated texture tutorials -Up to this point, we created textures manually everytime. I've pulled out the texture creation code into a new `texture.rs` file and included it every tutorial from the [textures tutorial](/beginner/tutorial5-textures/#cleaning-things-up) onward. +Up to this point, we created textures manually every time. I've pulled out the texture creation code into a new `texture.rs` file and included it in every tutorial from the [textures tutorial](/beginner/tutorial5-textures/#cleaning-things-up) onward. -## Fixed panics do to not specifying the correct `usage` +## Fixed panics due to not specifying the correct `usage` -Wgpu has become more strict about what `BufferUsages`s and `TextureUsages`s are required when performing certain operations. For example int the [Wgpu without a window example](/intermediate/windowless/), the `texture_desc` only specified the usage to by `COPY_SRC`. This caused a crash when the `texture` was used as a render target. Adding `OUTPUT_ATTACHMENT` fixed the issue. +Wgpu has become more strict about what `BufferUsages`s and `TextureUsages`s are required when performing certain operations. 
For example in the [Wgpu without a window example](/intermediate/windowless/), the `texture_desc` only specified the usage to by `COPY_SRC`. This caused a crash when the `texture` was used as a render target. Adding `OUTPUT_ATTACHMENT` fixed the issue. ## Updating Winit from 0.20.0-alpha5 to 0.20 @@ -168,7 +168,7 @@ There were a lot of small changes to how the dpi stuff works. You can see all th * `State::size` is now `PhysicalSize` instead of the pre 0.20 `LogicalSize`. * `EventsCleared` is now `MainEventsCleared`. -I may have missed a change, but I made sure that all the examples compile an run, so if you have trouble with your code you can use them as a reference. +I may have missed a change, but I made sure that all the examples compile and run, so if you have trouble with your code you can use them as a reference. ## Changed tutorial examples to use a src directory @@ -196,4 +196,4 @@ I don't know if this is a change from 0.4, but you use `wgpu = "0.4"` line in de ## New/Recent Articles - + \ No newline at end of file diff --git a/docs/showcase/README.md b/docs/showcase/README.md index bf14a954..0edf97cf 100644 --- a/docs/showcase/README.md +++ b/docs/showcase/README.md @@ -1,3 +1,3 @@ # Foreword -The articles in this section are not meant to be tutorials. They are showcases of the various things you can do with `wgpu`. I won't go over specifics of creating `wgpu` resources, as those will be covered elsewhere. The code for these examples is still available however, and will be accessible on Github. +The articles in this section are not meant to be tutorials. They are showcases of the various things you can do with `wgpu`. I won't go over the specifics of creating `wgpu` resources, as those will be covered elsewhere. The code for these examples is still available however and will be accessible on Github. diff --git a/docs/showcase/alignment/README.md b/docs/showcase/alignment/README.md index a08aec6e..fcb8deab 100644 --- a/docs/showcase/alignment/README.md +++ b/docs/showcase/alignment/README.md @@ -2,32 +2,21 @@
-This page is currently being reworked. I want to understand the topics a bit better, but -as 0.12 is out I want to release what I have for now. +This page is currently being reworked. I want to understand the topics a bit better, but as 0.12 is out I want to release what I have for now.
## Alignment of vertex and index buffers -Vertex buffers require defining a `VertexBufferLayout`, so the memory alignment is whatever -you tell WebGPU it should be. This can be really convenient for keeping down memory usage -on the GPU. +Vertex buffers require defining a `VertexBufferLayout`, so the memory alignment is whatever you tell WebGPU it should be. This can be really convenient for keeping down memory usage on the GPU. -The Index Buffer use the alignment of whatever primitive type you specify via the `IndexFormat` -you pass into `RenderEncoder::set_index_buffer()`. +The Index Buffer uses the alignment of whatever primitive type you specify via the `IndexFormat` you pass into `RenderEncoder::set_index_buffer()`. ## Alignment of Uniform and Storage buffers -GPUs are designed to process thousands of pixels in parallel. In order to achieve this, -some sacrifices had to be made. Graphics hardware likes to have all the bytes you intend -on processing aligned by powers of 2. The exact specifics of why this is are beyond -my level of knowledge, but it's important to know so that you can trouble shoot why your -shaders aren't working. +GPUs are designed to process thousands of pixels in parallel. In order to achieve this, some sacrifices had to be made. Graphics hardware likes to have all the bytes you intend on processing aligned by powers of 2. The exact specifics of why this is are beyond my level of knowledge, but it's important to know so that you can troubleshoot why your shaders aren't working. - + Let's take a look at the following table: @@ -39,9 +28,7 @@ Let's take a look at the following table: | vec3<T> | **16** | 12 | | vec4<T> | 16 | 16 | -You can see for `vec3` the alignment is the next power of 2 from the size, 16. This can -catch beginners (and even veterans) off guard as it's not the most intuitive. This becomes especially -important when we start laying out structs. Take the light struct from the [lighting tutorial](../../intermediate/tutorial10-lighting/#seeing-the-light): +You can see for `vec3` the alignment is the next power of 2 from the size, 16. This can catch beginners (and even veterans) off guard as it's not the most intuitive. This becomes especially important when we start laying out structs. Take the light struct from the [lighting tutorial](../../intermediate/tutorial10-lighting/#seeing-the-light): You can see the full table of the alignments in section [4.3.7.1 of the WGSL spec](https://www.w3.org/TR/WGSL/#alignment-and-size) @@ -52,10 +39,7 @@ struct Light { }; ``` -So what's the alignment of this scruct? Your first guess would be that it's the sum of -the alignments of the individual fields. That might make sense if we were in Rust-land, -but in shader-land, it's a little more involved. The alignment for a given struct is given -by the following equation: +So what's the alignment of this struct? Your first guess would be that it's the sum of the alignments of the individual fields. That might make sense if we were in Rust-land, but in shader-land, it's a little more involved. The alignment for a given struct is given by the following equation: ``` // S is the struct in question @@ -63,8 +47,7 @@ by the following equation: AlignOf(S) = max(AlignOfMember(S, M1), ... , AlignOfMember(S, Mn)) ``` -Basically the alignment of the struct is the maximum of the alignments of the members of -the struct. This means that: +Basically, the alignment of the struct is the maximum of the alignments of the members of the struct. 
This means that: ``` AlignOf(Light) @@ -73,13 +56,11 @@ AlignOf(Light) = 16 ``` -This is why the `LightUniform` has those padding fields. WGPU won't accept it if the data -is not aligned correctly. +This is why the `LightUniform` has those padding fields. WGPU won't accept it if the data is not aligned correctly. ## How to deal with alignment issues -In general 16, is the max alignment you'll see. In that case you might think that we should -be able to do something like the following: +In general, 16 is the max alignment you'll see. In that case, you might think that we should be able to do something like the following: ```rust #[repr(C, align(16))] @@ -90,9 +71,7 @@ struct LightUniform { } ``` -But this won't compile. The [bytemuck crate](https://docs.rs/bytemuck/) doesn't work with -structs with implicit padding bytes. Rust can't guarantee that the memory between the fields -has been initialized properly. This gave be an error when I tried it: +But this won't compile. The [bytemuck crate](https://docs.rs/bytemuck/) doesn't work with structs with implicit padding bytes. Rust can't guarantee that the memory between the fields has been initialized properly. This gave me an error when I tried it: ``` error[E0512]: cannot transmute between types of different sizes, or dependently-sized types @@ -107,4 +86,4 @@ error[E0512]: cannot transmute between types of different sizes, or dependently- ## Additional resources -If you're looking for more information check out the [right-up](https://gist.github.com/teoxoy/936891c16c2a3d1c3c5e7204ac6cd76c) by @teoxoy. \ No newline at end of file +If you're looking for more information check out the [write-up](https://gist.github.com/teoxoy/936891c16c2a3d1c3c5e7204ac6cd76c) by @teoxoy. \ No newline at end of file diff --git a/docs/showcase/compute/README.md b/docs/showcase/compute/README.md index 36138eba..64dc5f20 100644 --- a/docs/showcase/compute/README.md +++ b/docs/showcase/compute/README.md @@ -1,6 +1,6 @@ # Compute Example: Tangents and Bitangents -This proved more difficult than I anticipated. The first problem I encountered was some vertex data corruption due to the shader reading my vertex data incorrectly. I was using my `ModelVertex` struct I used in the [normal mapping tutorial](/intermediate/tutorial11-normals/). +This proved more difficult than I anticipated. The first problem I encountered was some vertex data corruption due to the shader reading my vertex data incorrectly. I was using the `ModelVertex` struct I used in the [normal mapping tutorial](/intermediate/tutorial11-normals/). ```rust #[repr(C)] @@ -26,11 +26,11 @@ struct ModelVertex { }; ``` -At first glance, this seems just fine, but OpenGL experts would likely see a problem with the structure. Our fields aren't aligned properly to support the `std430` alignment that storage buffers require.. I won't get into detail but you can check out the [alignment showcase](../alignment) if you want to know more. To summarize, the `vec2` for the `tex_coords` was messing up the byte alignment, corrupting the vertex data resulting in the following: +At first glance, this seems just fine, but OpenGL experts would likely see a problem with the structure. Our fields aren't aligned properly to support the `std430` alignment that storage buffers require... I won't get into detail but you can check out the [alignment showcase](../alignment) if you want to know more. 
To summarize, the `vec2` for the `tex_coords` was messing up the byte alignment, corrupting the vertex data resulting in the following: ![./corruption.png](./corruption.png) -I could have fixed this by adding a padding field after `tex_coords` on the Rust side, but that would require modifying the `VertexBufferLayout`. I ended up solving this problem by using the components of the vectors directly and resulted with a struct like this: +I could have fixed this by adding a padding field after `tex_coords` on the Rust side, but that would require modifying the `VertexBufferLayout`. I ended up solving this problem by using the components of the vectors directly which resulted in a struct like this: ```glsl struct ModelVertex { @@ -44,7 +44,7 @@ struct ModelVertex { Since `std430` will use the alignment of the largest element of the struct, using all floats means the struct will be aligned to 4 bytes. This is alignment matches what `ModelVertex` uses in Rust. This was kind of a pain to work with, but it fixed the corruption issue. -The second problem required me to rethink how I was computing the tangent and bitangent. The previous algorithm I was using only computed the tangent and bitangent for each triangle and set all the vertices in that triangle to use the same tangent and bitangent. While this is fine in a single threaded context, the code breaks down when trying to compute the triangles in parallel. The reason is that multiple triangles can share the same vertices. This means that when we go to save the resulting tangents, we inevitably end up trying to write to the same vertex from multiple different threads which is a big no no. You can see the issue with this method below: +The second problem required me to rethink how I was computing the tangent and bitangent. The previous algorithm I was using only computed the tangent and bitangent for each triangle and set all the vertices in that triangle to use the same tangent and bitangent. While this is fine in a single-threaded context, the code breaks down when trying to compute the triangles in parallel. The reason is that multiple triangles can share the same vertices. This means that when we go to save the resulting tangents, we inevitably end up trying to write to the same vertex from multiple different threads which is a big no no. You can see the issue with this method below: ![./black_triangles.png](./black_triangles.png) @@ -52,7 +52,7 @@ Those black triangles were the result of multiple GPU threads trying to modify t ![./render_doc_output.png](./render_doc_output.png) -While on the CPU we could introduce a synchronization primitive such as a `Mutex` to fix this issue, AFAIK there isn't really such a thing on the GPU. Instead I decided to swap my code to work with each vertex individually. There are some hurdles with that, but those will be easier to explain in code. Let's start with the `main` function. +While on the CPU we could introduce a synchronization primitive such as a `Mutex` to fix this issue, AFAIK there isn't really such a thing on the GPU. Instead, I decided to swap my code to work with each vertex individually. There are some hurdles with that, but those will be easier to explain in code. Let's start with the `main` function. ```glsl void main() { @@ -62,7 +62,7 @@ void main() { } ``` -We use the `gl_GlobalInvocationID.x` to get the index of the vertex we want to compute the tangents for. I opted to put the actual calculation into it's own method. Let's take a look at that. 
+We use the `gl_GlobalInvocationID.x` to get the index of the vertex we want to compute the tangents for. I opted to put the actual calculation into its own method. Let's take a look at that. ```glsl ModelVertex calcTangentBitangent(uint vertexIndex) { @@ -130,7 +130,7 @@ ModelVertex calcTangentBitangent(uint vertexIndex) { ## Possible Improvements -Looping over every triangle for every vertex is likely raising some red flags for some of you. In a single threaded context, this algorithm would end up being O(N*M). As we are utilizing the high number of threads available to our GPU, this is less of an issue, but it still means our GPU is burning more cycles than it needs to. +Looping over every triangle for every vertex is likely raising some red flags for some of you. In a single-threaded context, this algorithm would end up being O(N*M). As we are utilizing the high number of threads available to our GPU, this is less of an issue, but it still means our GPU is burning more cycles than it needs to. One way I came up with to possibly improve performance is to store the index of each triangle in a hash map like structure with the vertex index as keys. Here's some pseudo code: @@ -154,7 +154,7 @@ for (i, (_v, t_list)) in triangle_map.iter().enumerate() { } ``` -I ultimately decided against this method as it was more complicated, and I haven't had time to benchmark it to see if it's faster that the simple method. +I ultimately decided against this method as it was more complicated, and I haven't had time to benchmark it to see if it's faster than the simple method. ## Results @@ -162,4 +162,4 @@ The tangents and bitangents are now getting calculated correctly and on the GPU! ![./results.png](./results.png) - + \ No newline at end of file diff --git a/docs/showcase/gifs/README.md b/docs/showcase/gifs/README.md index 8147ceb4..28d5a16e 100644 --- a/docs/showcase/gifs/README.md +++ b/docs/showcase/gifs/README.md @@ -1,6 +1,6 @@ -# Creating gifs +# Creating gifs -Sometimes you've created a nice simulation/animation, and you want to show it off. While you can record a video, that might be a bit overkill to break out your video recording if you just want something to post on twitter. That's where what [GIF](https://en.wikipedia.org/wiki/GIF)s are for. +Sometimes you've created a nice simulation/animation, and you want to show it off. While you can record a video, that might be a bit overkill to break out your video recording if you just want something to post on Twitter. That's where what [GIF](https://en.wikipedia.org/wiki/GIF)s are for. Also, GIF is pronounced GHIF, not JIF as JIF is not only [peanut butter](https://en.wikipedia.org/wiki/Jif_%28peanut_butter%29), it is also a [different image format](https://filext.com/file-extension/JIF). @@ -25,7 +25,7 @@ fn save_gif(path: &str, frames: &mut Vec>, speed: i32, size: u16) -> Res ``` - +
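A body for a `save_gif` like the one named in the hunk header above could look roughly like this. This is only a sketch, assuming the [gif crate](https://docs.rs/gif/) is doing the encoding; the exact signature and error type in the showcase may differ:

```rust
use gif::{Encoder, Frame, Repeat};

// Sketch: encode a list of RGBA frames (each `size` x `size` pixels) into a looping GIF.
fn save_gif(
    path: &str,
    frames: &mut Vec<Vec<u8>>,
    speed: i32,
    size: u16,
) -> Result<(), Box<dyn std::error::Error>> {
    let mut image = std::fs::File::create(path)?;
    let mut encoder = Encoder::new(&mut image, size, size, &[])?;
    encoder.set_repeat(Repeat::Infinite)?;

    for frame in frames {
        encoder.write_frame(&Frame::from_rgba_speed(size, size, frame, speed))?;
    }

    Ok(())
}
```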