Some small fixes for beginner tutorials 1 to 5

pull/281/head
Niklas Eicker 2 years ago
parent 37a097ad97
commit 064a0a5efd

.gitignore (vendored)

@@ -1,6 +1,7 @@
node_modules/
target/
.vscode/
+.idea/
/image.png
/output*.*
@@ -9,4 +10,4 @@ output/
/trace
*trace.zip
-secrets.txt
+secrets.txt

@@ -23,7 +23,7 @@ As of version 0.10, wgpu require's cargo's [newest feature resolver](https://doc
## env_logger
It is very important to enable logging via `env_logger::init();`.
When wgpu hits any error it panics with a generic message, while logging the real error via the log crate.
-This means if you dont include `env_logger::init()` wgpu will fail silently, leaving you very confused!
+This means if you don't include `env_logger::init()` wgpu will fail silently, leaving you very confused!
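In practice the call goes right at the top of `main`, before any wgpu setup. A minimal sketch (assuming the `env_logger` and `log` crates are listed in `Cargo.toml`):

```rust
fn main() {
    // Initialize the logger first so wgpu's real errors actually get printed.
    env_logger::init();
    log::info!("logger ready"); // `log` macros now produce output too
    // ... window and wgpu setup would follow here ...
}
```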
## The code
There's not much going on here yet, so I'm just going to post the code in full. Just paste this into your `main.rs` or equivalent.
@@ -63,6 +63,6 @@ fn main() {
```
-All this does is create a window, and keep it open until user closes it, or presses escape. Next tutorial we'll actually start using wgpu!
+All this does is create a window, and keep it open until the user closes it, or presses escape. In the next tutorial we'll actually start using wgpu!
<AutoGithubLink/>

@@ -1,7 +1,7 @@
# The Surface
-## First, some house keeping: State
-For convenience we're going to pack all the fields into a struct, and create some methods on that.
+## First, some housekeeping: State
+For convenience, we're going to pack all the fields into a struct and create some methods on that.
```rust
// main.rs
@@ -119,7 +119,7 @@ The `features` field on `DeviceDescriptor`, allows us to specify what extra feat
<div class="note">
-The graphics card you have limits the features you can use. If you want to use certain features you may need to limit what devices you support, or provide work arounds.
+The graphics card you have limits the features you can use. If you want to use certain features you may need to limit what devices you support, or provide workarounds.
You can get a list of features supported by your device using `adapter.features()`, or `device.features()`.
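For example, a hypothetical guard against a missing optional capability could look like this (a sketch only; the particular feature flag is just an illustration):

```rust
// Check an optional capability before requesting it on the device.
if !adapter.features().contains(wgpu::Features::TEXTURE_COMPRESSION_BC) {
    println!("This adapter doesn't support BC texture compression; falling back.");
}
```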
@@ -127,7 +127,7 @@ You can view a full list of features [here](https://docs.rs/wgpu/0.10.1/wgpu/str
</div>
-The `limits` field describes the limit of certain types of resource we can create. We'll use the defaults for this tutorial, so we can support most devices. You can view a list of limits [here](https://docs.rs/wgpu/0.10.1/wgpu/struct.Limits.html).
+The `limits` field describes the limit of certain types of resources that we can create. We'll use the defaults for this tutorial, so we can support most devices. You can view a list of limits [here](https://docs.rs/wgpu/0.10.1/wgpu/struct.Limits.html).
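To make "the defaults" concrete, the device request might look roughly like this (a sketch against the wgpu 0.10 API; the surrounding async plumbing isn't shown in this hunk):

```rust
let (device, queue) = adapter.request_device(
    &wgpu::DeviceDescriptor {
        label: None,
        features: wgpu::Features::empty(), // no optional features
        limits: wgpu::Limits::default(),   // conservative limits, broad device support
    },
    None, // optional API call tracing path
).await.unwrap();
```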
```rust
let config = wgpu::SurfaceConfiguration {
@@ -140,7 +140,7 @@ The `limits` field describes the limit of certain types of resource we can creat
surface.configure(&device, &config);
```
-Here we are defining a config for our surface. This will define how the surface creates its underlying `SurfaceTexture`s. We will talk about `SurfaceTexture` when we get to the `render` function. For now lets talk about some of the config fields.
+Here we are defining a config for our surface. This will define how the surface creates its underlying `SurfaceTexture`s. We will talk about `SurfaceTexture` when we get to the `render` function. For now let's talk about the config's fields.
The `usage` field describes how `SurfaceTexture`s will be used. `RENDER_ATTACHMENT` specifies that the textures will be used to write to the screen (we'll talk about more `TextureUsages`s later).
@@ -197,7 +197,7 @@ pub fn resize(&mut self, new_size: winit::dpi::PhysicalSize<u32>) {
}
```
-There's nothing really different here from configurating the `surface` initially, so I won't get into it.
+There's nothing really different here from the initial `surface` configuration, so I won't get into it.
We call this method in `main()` in the event loop for the following events.
@@ -307,7 +307,7 @@ We also need to create a `CommandEncoder` to create the actual commands to send
});
```
-Now we can actually get to clearing the screen (long time coming). We need to use the `encoder` to create a `RenderPass`. The `RenderPass` has all the methods for the actual drawing. The code for creating a `RenderPass` is a bit nested, so I'll copy it all here beafore talking about its pieces.
+Now we can actually get to clearing the screen (long time coming). We need to use the `encoder` to create a `RenderPass`. The `RenderPass` has all the methods for the actual drawing. The code for creating a `RenderPass` is a bit nested, so I'll copy it all here before talking about its pieces.
```rust
{
@@ -338,7 +338,7 @@ Now we can actually get to clearing the screen (long time coming). We need to us
}
```
-First things first, let's talk about the `{}`. `encoder.begin_render_pass(...)` borrows `encoder` mutably (aka `&mut self`). We can't call `encoder.finish()` until we release that mutable borrow. The block (`{}`) around `encoder.begin_render_pass(...)` tells rust to drop any variables within them when the code leaves that scope thus releasing the mutable borrow on `encoder` and allowing us to `finish()` it. If you don't like the `{}`, you can also use `drop(render_pass)` to achieve the same effect.
+First things first, let's talk about the extra block (`{}`) around `encoder.begin_render_pass(...)`. `begin_render_pass()` borrows `encoder` mutably (aka `&mut self`). We can't call `encoder.finish()` until we release that mutable borrow. The block tells rust to drop any variables within it when the code leaves that scope thus releasing the mutable borrow on `encoder` and allowing us to `finish()` it. If you don't like the `{}`, you can also use `drop(render_pass)` to achieve the same effect.
We can get the same results by removing the `{}`, and the `let _render_pass =` line, but we need access to the `_render_pass` in the next tutorial, so we'll leave it as is.
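The borrow dance here is plain Rust, so it can be demoed without wgpu at all. A minimal sketch with hypothetical `Encoder`/`Pass` stand-ins (not wgpu's actual types):

```rust
struct Encoder;
struct Pass<'a> {
    _encoder: &'a mut Encoder, // the pass holds the mutable borrow
}

impl Encoder {
    fn begin_render_pass(&mut self) -> Pass<'_> {
        Pass { _encoder: self }
    }
    fn finish(self) {} // consumes the encoder, like wgpu's `finish()`
}

fn main() {
    let mut encoder = Encoder;
    {
        let _render_pass = encoder.begin_render_pass(); // mutable borrow begins
    } // `_render_pass` dropped here, releasing the borrow
    encoder.finish(); // compiles: no outstanding borrows remain
}
```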
@@ -415,7 +415,7 @@ The `RenderPassColorAttachment` has the `view` field which informs `wgpu` what t
The `resolve_target` is the texture that will receive the resolved output. This will be the same as `view` unless multisampling is enabled. We don't need to specify this, so we leave it as `None`.
-The `ops` field takes a `wpgu::Operations` object. This tells wgpu what to do with the colors on the screen (specified by `frame.view`). The `load` field tells wgpu how to handle colors stored from the previous frame. Currently we are clearing the screen with a bluish color. The `store` field tells wgpu with we want to store the rendered results to the `Texture` behind our `TextureView` (in this case it's the `SurfaceTexture`). We use `true` as we do want to store our render results. There are cases when you wouldn't want to but those
+The `ops` field takes a `wgpu::Operations` object. This tells wgpu what to do with the colors on the screen (specified by `frame.view`). The `load` field tells wgpu how to handle colors stored from the previous frame. Currently, we are clearing the screen with a bluish color. The `store` field tells wgpu whether we want to store the rendered results to the `Texture` behind our `TextureView` (in this case it's the `SurfaceTexture`). We use `true` as we do want to store our render results. There are cases when you wouldn't want to but those
<div class="note">

@@ -21,11 +21,11 @@ The vertices are then converted into fragments. Every pixel in the result image
## WGSL
-WebGPU supports two shader languages natively: SPIR-V, and WGSL. SPIR-V is actually a binary format developed by Khronos to be a compilation target for other languages such as GLSL and HLSL. It allows for easy porting of code. The only problem is that it's not human readable as it's a binary language. WGSL is meant to fix that. WGSL's development focuses on getting it to easily convert into SPIR-V. WGPU even allows us to supply WGSL for our shaders.
+WebGPU supports two shader languages natively: SPIR-V, and WGSL. SPIR-V is actually a binary format developed by Khronos to be a compilation target for other languages such as GLSL and HLSL. It allows for easy porting of code. The only problem is that it's not human-readable as it's a binary language. WGSL is meant to fix that. WGSL's development focuses on getting it to easily convert into SPIR-V. WGPU even allows us to supply WGSL for our shaders.
<div class="note">
-If you've gone through this tutorial before you'll likely notice that I've switched from using GLSL to using WGSL. Given that GLSL support is a secondary concern and that WGSL is the first class language of WGPU, I've elected to convert all the tutorials to use WGSL. Some of the showcase examples still use GLSL, but the main tutorial and all examples going forward will be using WGSL.
+If you've gone through this tutorial before you'll likely notice that I've switched from using GLSL to using WGSL. Given that GLSL support is a secondary concern and that WGSL is the first class language of WGPU, I've elected to convert all the tutorials to use WGSL. Some showcase examples still use GLSL, but the main tutorial and all examples going forward will be using WGSL.
</div>
@@ -61,7 +61,7 @@ First we declare `struct` to store the output of our vertex shader. This consist
<div class="note">
-Vector types such as `vec4` are generic. Currently you must specify the type of value the vector will contain. Thus a 3D vector using 32bit floats would be `vec3<f32>`.
+Vector types such as `vec4` are generic. Currently, you must specify the type of value the vector will contain. Thus, a 3D vector using 32bit floats would be `vec3<f32>`.
</div>
@@ -111,11 +111,11 @@ fn fs_main(in: VertexOutput) -> [[location(0)]] vec4<f32> {
}
```
-All this does is set the color of the current fragment to brown color.
+This sets the color of the current fragment to brown.
<div class="note">
-Notice that the entry point for the vertex shader was named `vs_main` and that the entry point for the fragment shader is called `fs_main`. In earlier versions of wgpu it was ok to both name these functions the same, but newer versions of the [WGSL spec](https://www.w3.org/TR/WGSL/#declaration-and-scope) require these names to be different. Therefore the above mentioned naming scheme (which is adopted from the `wgpu` examples) is used throughout the tutorial.
+Notice that the entry point for the vertex shader was named `vs_main` and that the entry point for the fragment shader is called `fs_main`. In earlier versions of wgpu it was ok to both name these functions the same, but newer versions of the [WGSL spec](https://www.w3.org/TR/WGSL/#declaration-and-scope) require these names to be different. Therefore, the above mentioned naming scheme (which is adopted from the `wgpu` examples) is used throughout the tutorial.
</div>
@@ -181,10 +181,10 @@ let render_pipeline = device.create_render_pipeline(&wgpu::RenderPipelineDescrip
```
Two things to note here:
-1. Here you can specify which function inside of the shader should be called, which is known as the `entry_point`. These are the functions we marked with `[[stage(vertex)]]` and `[[stage(fragment)]]`
-2. The `buffers` field tells `wgpu` what type of vertices we want to pass to the vertex shader. We're specifying the vertices in the vertex shader itself so we'll leave this empty. We'll put something there in the next tutorial.
+1. Here you can specify which function inside the shader should be the `entry_point`. These are the functions we marked with `[[stage(vertex)]]` and `[[stage(fragment)]]`
+2. The `buffers` field tells `wgpu` what type of vertices we want to pass to the vertex shader. We're specifying the vertices in the vertex shader itself, so we'll leave this empty. We'll put something there in the next tutorial.
3. The `fragment` is technically optional, so you have to wrap it in `Some()`. We need it if we want to store color data to the `surface`.
-4. The `targets` field tells `wgpu` what color outputs it should set up.Currently we only need one for the `surface`. We use the `surface`'s format so that copying to it is easy, and we specify that the blending should just replace old pixel data with new data. We also tell `wgpu` to write to all colors: red, blue, green, and alpha. *We'll talk more about*`color_state` *when we talk about textures.*
+4. The `targets` field tells `wgpu` what color outputs it should set up. Currently, we only need one for the `surface`. We use the `surface`'s format so that copying to it is easy, and we specify that the blending should just replace old pixel data with new data. We also tell `wgpu` to write to all colors: red, blue, green, and alpha. *We'll talk more about* `color_state` *when we talk about textures.*
```rust
primitive: wgpu::PrimitiveState {
@@ -205,7 +205,7 @@ Two things to note here:
The `primitive` field describes how to interpret our vertices when converting them into triangles.
1. Using `PrimitiveTopology::TriangleList` means that each three vertices will correspond to one triangle.
-2. The `front_face` and `cull_mode` fields tell `wgpu` how to determine whether a given triangle is facing forward or not. `FrontFace::Ccw` means that a triangle is facing forward if the vertices are arranged in a counter clockwise direction. Triangles that are not considered facing forward are culled (not included in the render) as specified by `CullMode::Back`. We'll cover culling a bit more when we cover `Buffer`s.
+2. The `front_face` and `cull_mode` fields tell `wgpu` how to determine whether a given triangle is facing forward or not. `FrontFace::Ccw` means that a triangle is facing forward if the vertices are arranged in a counter-clockwise direction. Triangles that are not considered facing forward are culled (not included in the render) as specified by `CullMode::Back`. We'll cover culling a bit more when we cover `Buffer`s.
```rust
depth_stencil: None, // 1.
@@ -219,13 +219,13 @@ The `primitive` field describes how to interpret our vertices when converting th
The rest of the method is pretty simple:
1. We're not using a depth/stencil buffer currently, so we leave `depth_stencil` as `None`. *This will change later*.
-2. This determines how many samples this pipeline will use. Multisampling is a complex topic, so we won't get into it here.
+2. `count` determines how many samples the pipeline will use. Multisampling is a complex topic, so we won't get into it here.
3. `mask` specifies which samples should be active. In this case we are using all of them.
4. `alpha_to_coverage_enabled` has to do with anti-aliasing. We're not covering anti-aliasing here, so we'll leave this as false now.
<!-- https://gamedev.stackexchange.com/questions/22507/what-is-the-alphatocoverage-blend-state-useful-for -->
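Assembled, the block those notes describe would look something like this sketch (field names per wgpu 0.10):

```rust
multisample: wgpu::MultisampleState {
    count: 1,                         // 2. one sample per pixel: no multisampling
    mask: !0,                         // 3. all samples active
    alpha_to_coverage_enabled: false, // 4. anti-aliasing feature, off for now
},
```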
-Now all we have to do is save the `render_pipeline` to `State` and then we can use it!
+Now all we have to do is add the `render_pipeline` to `State` and then we can use it!
```rust
// new()

@@ -7,7 +7,7 @@ You were probably getting sick of me saying stuff like "we'll get to that when w
A buffer is a blob of data on the GPU. A buffer is guaranteed to be contiguous, meaning that all the data is stored sequentially in memory. Buffers are generally used to store simple things like structs or arrays, but it can store more complex stuff such as graph structures like trees (provided all the nodes are stored together and don't reference anything outside of the buffer). We are going to use buffers a lot, so let's get started with two of the most important ones: the vertex buffer, and the index buffer.
## The vertex buffer
-Previously we've stored vertex data directly in the vertex shader. While that worked fine to get our bootstraps on, it simply won't do for the long-term. The types of objects we need to draw will vary in size, and recompiling the shader whenever we need to update the model would massively slow down our program. Instead we are going to use buffers to store the vertex data we want to draw. Before we do that though we need to describe what a vertex looks like. We'll do this by creating a new struct.
+Previously we've stored vertex data directly in the vertex shader. While that worked fine to get our bootstraps on, it simply won't do for the long-term. The types of objects we need to draw will vary in size, and recompiling the shader whenever we need to update the model would massively slow down our program. Instead, we are going to use buffers to store the vertex data we want to draw. Before we do that though we need to describe what a vertex looks like. We'll do this by creating a new struct.
```rust
// main.rs
@@ -19,7 +19,7 @@ struct Vertex {
}
```
-Our vertices will all have a position and a color. The position represents the x, y, and z of the vertex in 3d space. The color is the red, green, and blue values for the vertex. We need the `Vertex` to be copyable so we can create a buffer with it.
+Our vertices will all have a position and a color. The position represents the x, y, and z of the vertex in 3d space. The color is the red, green, and blue values for the vertex. We need the `Vertex` to be `Copy` so we can create a buffer with it.
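For reference, the struct abridged in the hunk above looks roughly like this sketch:

```rust
// One vertex: a position in 3d space plus an RGB color.
#[repr(C)] // predictable field layout, needed once this becomes GPU data
#[derive(Copy, Clone, Debug)]
struct Vertex {
    position: [f32; 3],
    color: [f32; 3],
}
```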
Next we need the actual data that will make up our triangle. Below `Vertex` add the following.
@@ -32,7 +32,7 @@ const VERTICES: &[Vertex] = &[
];
```
-We arrange the vertices in counter clockwise order: top, bottom left, bottom right. We do it this way partially out of tradition, but mostly because we specified in the `rasterization_state` of the `render_pipeline` that we want the `front_face` of our triangle to be `wgpu::FrontFace::Ccw` so that we cull the back face. This means that any triangle that should be facing us should have its vertices in counter clockwise order.
+We arrange the vertices in counter-clockwise order: top, bottom left, bottom right. We do it this way partially out of tradition, but mostly because we specified in the `rasterization_state` of the `render_pipeline` that we want the `front_face` of our triangle to be `wgpu::FrontFace::Ccw` so that we cull the back face. This means that any triangle that should be facing us should have its vertices in counter-clockwise order.
Now that we have our vertex data, we need to store it in a buffer. Let's add a `vertex_buffer` field to `State`.
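The creation itself isn't shown in this diff, but with the `DeviceExt` helper trait it would look something like this sketch (it relies on the `bytemuck` traits discussed next):

```rust
use wgpu::util::DeviceExt;

let vertex_buffer = device.create_buffer_init(
    &wgpu::util::BufferInitDescriptor {
        label: Some("Vertex Buffer"),
        contents: bytemuck::cast_slice(VERTICES), // vertex data as raw bytes
        usage: wgpu::BufferUsages::VERTEX,
    }
);
```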
@@ -76,7 +76,7 @@ You'll note that we're using [bytemuck](https://docs.rs/bytemuck/1.2.0/bytemuck/
bytemuck = { version = "1.4", features = [ "derive" ] }
```
-We're also going to need to implement two traits to get `bytemuck` to work. These are [bytemuck::Pod](https://docs.rs/bytemuck/1.3.0/bytemuck/trait.Pod.html) and [bytemuck::Zeroable](https://docs.rs/bytemuck/1.3.0/bytemuck/trait.Zeroable.html). `Pod` indicates that our `Vertex` is "Plain Old Data", and thus can be interpretted as a `&[u8]`. `Zeroable` indicates that we can use `std::mem::zeroed()`. We can modify our `Vertex` struct to derive these methods.
+We're also going to need to implement two traits to get `bytemuck` to work. These are [bytemuck::Pod](https://docs.rs/bytemuck/1.3.0/bytemuck/trait.Pod.html) and [bytemuck::Zeroable](https://docs.rs/bytemuck/1.3.0/bytemuck/trait.Zeroable.html). `Pod` indicates that our `Vertex` is "Plain Old Data", and thus can be interpreted as a `&[u8]`. `Zeroable` indicates that we can use `std::mem::zeroed()`. We can modify our `Vertex` struct to derive these methods.
```rust
#[repr(C)]
@@ -98,7 +98,7 @@ unsafe impl bytemuck::Zeroable for Vertex {}
</div>
-Finally we can add our `vertex_buffer` to our `State` struct.
+Finally, we can add our `vertex_buffer` to our `State` struct.
```rust
Self {
@@ -115,7 +115,7 @@ Self {
## So what do I do with it?
We need to tell the `render_pipeline` to use this buffer when we are drawing, but first we need to tell the `render_pipeline` how to read the buffer. We do this using `VertexBufferLayout`s and the `vertex_buffers` field that I promised we'd talk about when we created the `render_pipeline`.
-A `VertexBufferLayout` defines how a buffer is layed out in memory. Without this, the render_pipeline has no idea how to map the buffer in the shader. Here's what the descriptor for a buffer full of `Vertex` would look like.
+A `VertexBufferLayout` defines how a buffer is represented in memory. Without this, the render_pipeline has no idea how to map the buffer in the shader. Here's what the descriptor for a buffer full of `Vertex` would look like.
```rust
wgpu::VertexBufferLayout {
@@ -139,11 +139,11 @@ wgpu::VertexBufferLayout {
1. The `array_stride` defines how wide a vertex is. When the shader goes to read the next vertex, it will skip over `array_stride` number of bytes. In our case, array_stride will probably be 24 bytes.
2. `step_mode` tells the pipeline how often it should move to the next vertex. This seems redundant in our case, but we can specify `wgpu::VertexStepMode::Instance` if we only want to change vertices when we start drawing a new instance. We'll cover instancing in a later tutorial.
3. Vertex attributes describe the individual parts of the vertex. Generally this is a 1:1 mapping with a struct's fields, which it is in our case.
-4. This defines the `offset` in bytes that this attribute starts. The first attribute is usually zero, and any future attributes are the collective `size_of` the previous attributes data.
-5. This tells the shader what location to store this attribute at. For example `[[location(0)]] x: vec3<f32>` in the vertex shader would correspond to the position field of the struct, while `[[location(1)]] x: vec3<f32>` would be the color field.
+4. This defines the `offset` in bytes until the attribute starts. For the first attribute the offset is usually zero. For any later attributes, the offset is the sum over `size_of` of the previous attributes' data.
+5. This tells the shader what location to store this attribute at. For example `[[location(0)]] x: vec3<f32>` in the vertex shader would correspond to the `position` field of the `Vertex` struct, while `[[location(1)]] x: vec3<f32>` would be the `color` field.
6. `format` tells the shader the shape of the attribute. `Float32x3` corresponds to `vec3<f32>` in shader code. The max value we can store in an attribute is `Float32x4` (`Uint32x4`, and `Sint32x4` work as well). We'll keep this in mind for when we have to store things that are bigger than `Float32x4`.
-For you visually learners, our vertex buffer looks like this.
+For you visual learners, our vertex buffer looks like this.
![A figure of the VertexBufferLayout](./vb_desc.png)
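Filling in the numbered pieces for our two-attribute `Vertex`, the full descriptor might read like this sketch (the hunk above only shows fragments of it):

```rust
use std::mem;

wgpu::VertexBufferLayout {
    array_stride: mem::size_of::<Vertex>() as wgpu::BufferAddress, // 1. 24 bytes here
    step_mode: wgpu::VertexStepMode::Vertex,                       // 2. advance per vertex
    attributes: &[ // 3. one entry per struct field
        wgpu::VertexAttribute {
            offset: 0,                             // 4. `position` starts at byte 0
            shader_location: 0,                    // 5. [[location(0)]]
            format: wgpu::VertexFormat::Float32x3, // 6. maps to vec3<f32>
        },
        wgpu::VertexAttribute {
            offset: mem::size_of::<[f32; 3]>() as wgpu::BufferAddress, // 4. after `position`
            shader_location: 1,                    // 5. [[location(1)]]
            format: wgpu::VertexFormat::Float32x3,
        },
    ],
}
```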
@@ -259,7 +259,7 @@ render_pass.draw(0..self.num_vertices, 0..1);
Before our changes will have any effect, we need to update our vertex shader to get its data from the vertex buffer. We'll also have it include the vertex color as well.
-```glsl
+```wgsl
// Vertex shader
struct VertexInput {

@@ -81,7 +81,7 @@ queue.write_texture(
<div class="note">
-The old way of writing data to a texture was to copy the pixel data to a buffer and then copy it to the texture. Using `write_texture` is a bit more efficient as it uses one less buffer - I'll leave it here though in case you need it.
+The old way of writing data to a texture was to copy the pixel data to a buffer and then copy it to the texture. Using `write_texture` is a bit more efficient as it uses one buffer less - I'll leave it here though in case you need it.
```rust
let buffer = device.create_buffer_init(
@@ -140,7 +140,7 @@ let diffuse_sampler = device.create_sampler(&wgpu::SamplerDescriptor {
});
```
-The `address_mode_*` parameters determine what to do if the sampler gets a texture coordinate that's outside of the texture itself. We have a few options to choose from:
+The `address_mode_*` parameters determine what to do if the sampler gets a texture coordinate that's outside the texture itself. We have a few options to choose from:
* `ClampToEdge`: Any texture coordinates outside the texture will return the color of the nearest pixel on the edges of the texture.
* `Repeat`: The texture will repeat as texture coordinates exceed the textures dimensions.
@@ -220,7 +220,7 @@ let diffuse_bind_group = device.create_bind_group(
);
```
-Looking at this you might get a bit of déjà vu! That's because a `BindGroup` is a more specific declaration of the `BindGroupLayout`. The reason why they're separate is it allows us to swap out `BindGroup`s on the fly, so long as they all share the same `BindGroupLayout`. Each texture and sampler we create will need to be added to a `BindGroup`. For our purposes, we'll create a new bind group for each texture.
+Looking at this you might get a bit of déjà vu! That's because a `BindGroup` is a more specific declaration of the `BindGroupLayout`. The reason they're separate is that it allows us to swap out `BindGroup`s on the fly, so long as they all share the same `BindGroupLayout`. Each texture and sampler we create will need to be added to a `BindGroup`. For our purposes, we'll create a new bind group for each texture.
Now that we have our `diffuse_bind_group`, let's add it to our `State` struct:
@@ -234,12 +234,12 @@ struct State {
render_pipeline: wgpu::RenderPipeline,
vertex_buffer: wgpu::Buffer,
index_buffer: wgpu::Buffer,
-num_indicies: u32,
+num_indices: u32,
diffuse_bind_group: wgpu::BindGroup, // NEW!
}
```
-And make sure we return these fields in the `new` method:
+Make sure we return these fields in the `new` method:
```rust
impl State {
@@ -340,11 +340,11 @@ Lastly we need to change `VERTICES` itself. Replace the existing definition with
```rust
// Changed
const VERTICES: &[Vertex] = &[
-Vertex { position: [-0.0868241, 0.49240386, 0.0], tex_coords: [0.4131759, 0.00759614], }, // A
-Vertex { position: [-0.49513406, 0.06958647, 0.0], tex_coords: [0.0048659444, 0.43041354], }, // B
-Vertex { position: [-0.21918549, -0.44939706, 0.0], tex_coords: [0.28081453, 0.949397], }, // C
-Vertex { position: [0.35966998, -0.3473291, 0.0], tex_coords: [0.85967, 0.84732914], }, // D
-Vertex { position: [0.44147372, 0.2347359, 0.0], tex_coords: [0.9414737, 0.2652641], }, // E
+Vertex { position: [-0.0868241, 0.49240386, 0.0], tex_coords: [0.4131759, 0.99240386], }, // A
+Vertex { position: [-0.49513406, 0.06958647, 0.0], tex_coords: [0.0048659444, 0.56958647], }, // B
+Vertex { position: [-0.21918549, -0.44939706, 0.0], tex_coords: [0.28081453, 0.05060294], }, // C
+Vertex { position: [0.35966998, -0.3473291, 0.0], tex_coords: [0.85967, 0.1526709], }, // D
+Vertex { position: [0.44147372, 0.2347359, 0.0], tex_coords: [0.9414737, 0.7347359], }, // E
];
```
@@ -378,7 +378,7 @@ fn vs_main(
Now that we have our vertex shader outputting our `tex_coords`, we need to change the fragment shader to take them in. With these coordinates, we'll finally be able to use our sampler to get a color from our texture.
-```glsl
+```wgsl
// Fragment shader
[[group(0), binding(0)]]
@@ -404,7 +404,7 @@ That's weird, our tree is upside down! This is because wgpu's world coordinates
![happy-tree-uv-coords.png](./happy-tree-uv-coords.png)
-We can get our triangle right-side up by inverting the y coordinate of each texture coordinate:
+We can get our triangle right-side up by replacing the y coordinate `y` of each texture coordinate with `1 - y`:
```rust
const VERTICES: &[Vertex] = &[
@@ -423,7 +423,7 @@ With that in place, we now have our tree right-side up on our hexagon:
## Cleaning things up
-For convenience sake, let's pull our texture code into its module. We'll first need to add the [anyhow](https://docs.rs/anyhow/) crate to our `Cargo.toml` file to simplify error handling;
+For convenience, let's pull our texture code into its own module. We'll first need to add the [anyhow](https://docs.rs/anyhow/) crate to our `Cargo.toml` file to simplify error handling;
```toml
[dependencies]
@@ -539,7 +539,7 @@ let diffuse_texture = texture::Texture::from_bytes(&device, &queue, diffuse_byte
// Everything up until `let texture_bind_group_layout = ...` can now be removed.
```
-We still need to store the bind group separately so that `Texture` doesn't need know how the `BindGroup` is laid out. Creating the `diffuse_bind_group` changes slightly to use the `view` and `sampler` fields of our `diffuse_texture`:
+We still need to store the bind group separately so that `Texture` doesn't need to know how the `BindGroup` is laid out. The creation of `diffuse_bind_group` slightly changes to use the `view` and `sampler` fields of `diffuse_texture`:
```rust
let diffuse_bind_group = device.create_bind_group(
