From 54121a5c541cea42018266e3e63f4dd2f209088d Mon Sep 17 00:00:00 2001
From: rhysd
Date: Thu, 30 Dec 2021 23:17:16 +0900
Subject: [PATCH] fix some typos

---
 docs/beginner/tutorial2-surface/README.md       | 4 ++--
 docs/beginner/tutorial6-uniforms/README.md      | 4 ++--
 docs/intermediate/tutorial10-lighting/README.md | 2 +-
 docs/showcase/README.md                         | 4 ++--
 docs/showcase/compute/README.md                 | 2 +-
 docs/showcase/gifs/README.md                    | 2 +-
 6 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/docs/beginner/tutorial2-surface/README.md b/docs/beginner/tutorial2-surface/README.md
index f0c75283..d17426d1 100644
--- a/docs/beginner/tutorial2-surface/README.md
+++ b/docs/beginner/tutorial2-surface/README.md
@@ -72,7 +72,7 @@ The `adapter` is a handle to our actual graphics card. You can use this to get i
 * `power_preference` has two variants: `LowPower`, and `HighPerformance`. This means will pick an adapter that favors battery life such as a integrated GPU when using `LowPower`. `HighPerformance` as will pick an adapter for more power hungry yet more performant GPU's such as your dedicated graphics card. WGPU will favor `LowPower` if there is no adapter for the `HighPerformance` option.
 * The `compatible_surface` field tells wgpu to find an adapter that can present to the supplied surface.
-* The `force_fallback_adapter` forces wgpu to pick an adapter that will work on all harware. This usually means that the rendering backend will use a "software" system, instead of hardware such as a GPU.
+* The `force_fallback_adapter` forces wgpu to pick an adapter that will work on all hardware. This usually means that the rendering backend will use a "software" system, instead of hardware such as a GPU.
@@ -391,7 +391,7 @@ Some of you may be able to tell what's going on just by looking at it, but I'd b
 }
 ```
 
-A `RenderPassDescriptor` only has three fields: `label`, `color_attachments` and `depth_stencil_attachment`. The `color_attachements` describe where we are going to draw our color to. We use the `TextureView` we created earlier to make sure that we render to the screen.
+A `RenderPassDescriptor` only has three fields: `label`, `color_attachments` and `depth_stencil_attachment`. The `color_attachments` describe where we are going to draw our color to. We use the `TextureView` we created earlier to make sure that we render to the screen.
 
 We'll use `depth_stencil_attachment` later, but we'll set it to `None` for now.

diff --git a/docs/beginner/tutorial6-uniforms/README.md b/docs/beginner/tutorial6-uniforms/README.md
index cf3e2a4e..21055f08 100644
--- a/docs/beginner/tutorial6-uniforms/README.md
+++ b/docs/beginner/tutorial6-uniforms/README.md
@@ -33,9 +33,9 @@ impl Camera {
 ```
 
 The `build_view_projection_matrix` is where the magic happens.
-1. The `view` matrix moves the world to be at the position and rotation of the camera. It's essentialy an inverse of whatever the transform matrix of the camera would be.
+1. The `view` matrix moves the world to be at the position and rotation of the camera. It's essentially an inverse of whatever the transform matrix of the camera would be.
 2. The `proj` matrix wraps the scene to give the effect of depth. Without this, objects up close would be the same size as objects far away.
-3. The coordinate system in Wgpu is based on DirectX, and Metal's coordinate systems. That means that in [normalized device coordinates](https://github.com/gfx-rs/gfx/tree/master/src/backend/dx12#normalized-coordinates) the x axis and y axis are in the range of -1.0 to +1.0, and the z axis is 0.0 to +1.0. The `cgmath` crate (as well as most game math crates) are built for OpenGL's coordinate system. This matrix will scale and translate our scene from OpenGL's coordinate sytem to WGPU's. We'll define it as follows.
+3. The coordinate system in Wgpu is based on DirectX, and Metal's coordinate systems. That means that in [normalized device coordinates](https://github.com/gfx-rs/gfx/tree/master/src/backend/dx12#normalized-coordinates) the x axis and y axis are in the range of -1.0 to +1.0, and the z axis is 0.0 to +1.0. The `cgmath` crate (as well as most game math crates) are built for OpenGL's coordinate system. This matrix will scale and translate our scene from OpenGL's coordinate system to WGPU's. We'll define it as follows.
 
 ```rust
 #[rustfmt::skip]

diff --git a/docs/intermediate/tutorial10-lighting/README.md b/docs/intermediate/tutorial10-lighting/README.md
index 00a8ebd0..809062cc 100644
--- a/docs/intermediate/tutorial10-lighting/README.md
+++ b/docs/intermediate/tutorial10-lighting/README.md
@@ -321,7 +321,7 @@ let light_render_pipeline = {
 };
 ```
 
-I chose to create a seperate layout for the `light_render_pipeline`, as it doesn't need all the resources that the regular `render_pipeline` needs (main just the textures).
+I chose to create a separate layout for the `light_render_pipeline`, as it doesn't need all the resources that the regular `render_pipeline` needs (mainly just the textures).
 
 With that in place we need to write the actual shaders.

diff --git a/docs/showcase/README.md b/docs/showcase/README.md
index 98250e1d..bf14a954 100644
--- a/docs/showcase/README.md
+++ b/docs/showcase/README.md
@@ -1,3 +1,3 @@
-# Foreward
+# Foreword
 
-The articles in this section are not meant to be tutorials. They are showcases of the various things you can do with `wgpu`. I won't go over specifics of creating `wgpu` resources, as those will be covered elsewhere. The code for these examples is still available however, and will be accessible on Github.
\ No newline at end of file
+The articles in this section are not meant to be tutorials. They are showcases of the various things you can do with `wgpu`. I won't go over specifics of creating `wgpu` resources, as those will be covered elsewhere. The code for these examples is still available however, and will be accessible on Github.

diff --git a/docs/showcase/compute/README.md b/docs/showcase/compute/README.md
index 2dc89505..684142e0 100644
--- a/docs/showcase/compute/README.md
+++ b/docs/showcase/compute/README.md
@@ -130,7 +130,7 @@ ModelVertex calcTangentBitangent(uint vertexIndex) {
 
 ## Possible Improvements
 
-Looping over every triangle for every vertex is likely raising some red flags for some of you. In a single threaded context, this algorithm would end up being O(N*M). As we are utilizing the high number of threads availble to our GPU, this is less of an issue, but it still means our GPU is burning more cycles than it needs to.
+Looping over every triangle for every vertex is likely raising some red flags for some of you. In a single threaded context, this algorithm would end up being O(N*M). As we are utilizing the high number of threads available to our GPU, this is less of an issue, but it still means our GPU is burning more cycles than it needs to.
 
 One way I came up with to possibly improve performance is to store the index of each triangle in a hash map like structure with the vertex index as keys. Here's some pseudo code:

diff --git a/docs/showcase/gifs/README.md b/docs/showcase/gifs/README.md
index 2fe39df4..8147ceb4 100644
--- a/docs/showcase/gifs/README.md
+++ b/docs/showcase/gifs/README.md
@@ -41,7 +41,7 @@ fn save_gif(path: &str, frames: &mut Vec>, speed: i32, size: u16) -> Res
 }
 ```
 -->
-All we need to use this code is the frames of the GIF, how fast it should run, and the size of the GIF (you could use width and height seperately, but I didn't).
+All we need to use this code is the frames of the GIF, how fast it should run, and the size of the GIF (you could use width and height separately, but I didn't).
 
 ## How do we make the frames?