# Model Loading
Up to this point, we've been creating our models manually. While this is an acceptable way to do things, it's really slow if we want to include complex models with lots of polygons. Because of this, we're going to modify our code to leverage the obj model format so that we can create a model in software such as Blender and display it in our code.
Our `main.rs` file is getting pretty cluttered; let's create a `model.rs` file that we can put our model loading code into.
```rust
// model.rs
pub trait Vertex {
    fn desc<'a>() -> wgpu::VertexBufferLayout<'a>;
}

#[repr(C)]
#[derive(Copy, Clone, Debug, bytemuck::Pod, bytemuck::Zeroable)]
pub struct ModelVertex {
    position: [f32; 3],
    tex_coords: [f32; 2],
    normal: [f32; 3],
}

impl Vertex for ModelVertex {
    fn desc<'a>() -> wgpu::VertexBufferLayout<'a> {
        todo!();
    }
}
```
You'll notice a couple of things here. In `main.rs` we had `Vertex` as a struct; here we're using a trait. We could have multiple vertex types (model, UI, instance data, etc.), and making `Vertex` a trait will allow us to abstract out the `VertexBufferLayout` creation code, which makes creating `RenderPipeline`s simpler.
Another thing to mention is the `normal` field in `ModelVertex`. We won't use this until we talk about lighting, but we'll add it to the struct for now.
Let's define our `VertexBufferLayout`.
```rust
impl Vertex for ModelVertex {
    fn desc<'a>() -> wgpu::VertexBufferLayout<'a> {
        use std::mem;
        wgpu::VertexBufferLayout {
            array_stride: mem::size_of::<ModelVertex>() as wgpu::BufferAddress,
            step_mode: wgpu::VertexStepMode::Vertex,
            attributes: &[
                wgpu::VertexAttribute {
                    offset: 0,
                    shader_location: 0,
                    format: wgpu::VertexFormat::Float32x3,
                },
                wgpu::VertexAttribute {
                    offset: mem::size_of::<[f32; 3]>() as wgpu::BufferAddress,
                    shader_location: 1,
                    format: wgpu::VertexFormat::Float32x2,
                },
                wgpu::VertexAttribute {
                    offset: mem::size_of::<[f32; 5]>() as wgpu::BufferAddress,
                    shader_location: 2,
                    format: wgpu::VertexFormat::Float32x3,
                },
            ],
        }
    }
}
```
This is basically the same as the original `VertexBufferLayout`, but we added a `VertexAttribute` for the `normal`. Remove the `Vertex` struct in `main.rs`, as we won't need it anymore, and use our new `Vertex` from `model` for the `RenderPipeline`.
```rust
let render_pipeline = device.create_render_pipeline(&wgpu::RenderPipelineDescriptor {
    // ...
    vertex: wgpu::VertexState {
        // ...
        buffers: &[model::ModelVertex::desc(), InstanceRaw::desc()],
    },
    // ...
});
```
Since the `desc` method is implemented on the `Vertex` trait, the trait needs to be imported before the method will be accessible. Put the import towards the top of the file with the others.
```rust
use model::Vertex;
```
With all that in place, we need a model to render. If you have one already, that's great, but I've supplied a [zip file](https://github.com/sotrh/learn-wgpu/blob/master/code/beginner/tutorial9-models/res/cube.zip) with the model and all of its textures. We're going to put this model in a new `res` folder next to the existing `src` folder.
## Accessing files in the res folder
When cargo builds and runs our program, it sets what's known as the current working directory. This directory is usually the folder containing your project's root `Cargo.toml`. The path to our `res` folder may differ depending on the structure of the project; the `res` folder for this section's example code is at `code/beginner/tutorial9-models/res/`. When loading our model we could use this path and just append `cube.obj`. This is fine, but if we change our project's structure, our code will break.
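For reference, the layout this tutorial assumes looks roughly like this (`build.rs` is the build script we'll add next):
```
tutorial9-models/
├── Cargo.toml
├── build.rs
├── res/
│   ├── cube.obj
│   └── ... (the model's .mtl and texture files)
└── src/
    └── main.rs
```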
We're going to fix this fragility with a build script that copies our `res` folder to where cargo creates our executable; we'll then reference it from there. Create a file called `build.rs` and add the following:
```rust
use anyhow::*;
use fs_extra::copy_items;
use fs_extra::dir::CopyOptions;
use std::env;

fn main() -> Result<()> {
    // This tells cargo to rerun this script if something in /res/ changes.
    println!("cargo:rerun-if-changed=res/*");

    let out_dir = env::var("OUT_DIR")?;
    let mut copy_options = CopyOptions::new();
    copy_options.overwrite = true;
    let mut paths_to_copy = Vec::new();
    paths_to_copy.push("res/");
    copy_items(&paths_to_copy, out_dir, &copy_options)?;

    Ok(())
}
```
<div class="note">
Make sure to put `build.rs` in the same folder as the `Cargo.toml`. If you don't, cargo won't run it when your crate builds.
</div>
<div class="note">
The `OUT_DIR` is an environment variable that cargo sets for build scripts; it points to a folder inside `target` where build output can be placed.
</div>
You'll need to modify your `Cargo.toml` for this to work properly. Add the following below your `[dependencies]` block.
```toml
[build-dependencies]
anyhow = "1.0"
fs_extra = "1.2"
glob = "0.3"
```
## Loading models with TOBJ
We're going to use the [tobj](https://docs.rs/tobj/3.0/tobj/) library to load our model. Let's add it to our `Cargo.toml`.
```toml
[dependencies]
# other dependencies...
tobj = "3.0"
```
Before we can load our model though, we need somewhere to put it.
```rust
// model.rs
pub struct Model {
    pub meshes: Vec<Mesh>,
    pub materials: Vec<Material>,
}
```
You'll notice that our `Model` struct has a `Vec` for `meshes` and one for `materials`. This is important, as an obj file can include multiple meshes and materials. We still need to create the `Mesh` and `Material` structs, so let's do that.
```rust
pub struct Material {
    pub name: String,
    pub diffuse_texture: texture::Texture,
    pub bind_group: wgpu::BindGroup,
}

pub struct Mesh {
    pub name: String,
    pub vertex_buffer: wgpu::Buffer,
    pub index_buffer: wgpu::Buffer,
    pub num_elements: u32,
    pub material: usize,
}
```
The `Material` is pretty simple: it's just a name and one texture. Our cube obj actually has 2 textures, but one is a normal map, and we'll get to those [later](../../intermediate/tutorial11-normals). The name is mostly for debugging purposes.
Speaking of textures, we'll need to add a `load()` method to `Texture` in `texture.rs`.
```rust
use std::path::Path;

pub fn load<P: AsRef<Path>>(
    device: &wgpu::Device,
    queue: &wgpu::Queue,
    path: P,
) -> Result<Self> {
    // Needed to appease the borrow checker
    let path_copy = path.as_ref().to_path_buf();
    let label = path_copy.to_str();

    let img = image::open(path)?;
    Self::from_image(device, queue, &img, label)
}
```
The `load` method will be useful when we load the textures for our models, as `include_bytes!` requires that we know the name of the file at compile time, which we can't really guarantee with model textures.
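To make the contrast concrete, here's a quick sketch; `some_runtime_path` is a hypothetical variable standing in for a path we only learn while the program is running:
```rust
// Compile time: the argument to include_bytes! must be a literal,
// so the file name has to be known when we build.
let bytes = include_bytes!("happy-tree.png");

// Run time: the path can come from data, e.g. a texture name listed
// in a model's .mtl file. (`some_runtime_path` is hypothetical.)
let texture = texture::Texture::load(&device, &queue, some_runtime_path)?;
```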
While we're at it let's import `texture.rs` in `model.rs`.
```rust
use crate::texture;
```
We also need to make a subtle change to the `from_image()` method in `texture.rs`. PNGs work fine with `as_rgba8()`, as they have an alpha channel. But JPEGs don't have an alpha channel, and the code would panic if we tried to call `as_rgba8()` on the JPEG texture image we're going to use. Instead, we can use `to_rgba8()`, which generates a new image buffer with an alpha channel even if the original image doesn't have one.
```rust
let rgba = img.to_rgba8();
```
Since `rgba` is now a new image buffer rather than a reference to the original image's buffer, it needs to be passed as a reference in the later call to `write_texture`.
```rust
//...
&rgba, // UPDATED!
wgpu::ImageDataLayout {
```
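For context, here's a minimal sketch of how the surrounding code in `from_image` might look after the change. The texture creation is elided, and the exact `write_texture` fields vary slightly between wgpu versions, so treat this as an illustration rather than a drop-in:
```rust
use image::GenericImageView; // for `dimensions()`

let rgba = img.to_rgba8(); // owned buffer: works even when the source has no alpha channel
let dimensions = img.dimensions();

// ... create `size` and `texture` as before ...

queue.write_texture(
    wgpu::ImageCopyTexture {
        texture: &texture,
        mip_level: 0,
        origin: wgpu::Origin3d::ZERO,
        aspect: wgpu::TextureAspect::All,
    },
    &rgba, // UPDATED! `rgba` is an owned ImageBuffer now, so we pass a reference
    wgpu::ImageDataLayout {
        offset: 0,
        bytes_per_row: std::num::NonZeroU32::new(4 * dimensions.0),
        rows_per_image: std::num::NonZeroU32::new(dimensions.1),
    },
    size,
);
```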
`Mesh` holds a vertex buffer, an index buffer, and the number of indices in the mesh. We're using a `usize` for the material. This `usize` will be used to index the `materials` list when it comes time to draw, e.g. `&model.materials[mesh.material]`.
With all that out of the way, we can get to loading our model.
```rust
// These imports go at the top of model.rs, alongside `use crate::texture;`.
use anyhow::{Context, Result};
use std::ops::Range;
use std::path::Path;
use wgpu::util::DeviceExt;

impl Model {
    pub fn load<P: AsRef<Path>>(
        device: &wgpu::Device,
        queue: &wgpu::Queue,
        layout: &wgpu::BindGroupLayout,
        path: P,
    ) -> Result<Self> {
        let (obj_models, obj_materials) = tobj::load_obj(
            path.as_ref(),
            &tobj::LoadOptions {
                triangulate: true,
                single_index: true,
                ..Default::default()
            },
        )?;

        let obj_materials = obj_materials?;

        // We're assuming that the texture files are stored with the obj file
        let containing_folder = path.as_ref().parent()
            .context("Directory has no parent")?;

        let mut materials = Vec::new();
        for mat in obj_materials {
            let diffuse_path = mat.diffuse_texture;
            let diffuse_texture =
                texture::Texture::load(device, queue, containing_folder.join(diffuse_path))?;

            let bind_group = device.create_bind_group(&wgpu::BindGroupDescriptor {
                layout,
                entries: &[
                    wgpu::BindGroupEntry {
                        binding: 0,
                        resource: wgpu::BindingResource::TextureView(&diffuse_texture.view),
                    },
                    wgpu::BindGroupEntry {
                        binding: 1,
                        resource: wgpu::BindingResource::Sampler(&diffuse_texture.sampler),
                    },
                ],
                label: None,
            });

            materials.push(Material {
                name: mat.name,
                diffuse_texture,
                bind_group,
            });
        }

        let mut meshes = Vec::new();
        for m in obj_models {
            let mut vertices = Vec::new();
            for i in 0..m.mesh.positions.len() / 3 {
                vertices.push(ModelVertex {
                    position: [
                        m.mesh.positions[i * 3],
                        m.mesh.positions[i * 3 + 1],
                        m.mesh.positions[i * 3 + 2],
                    ],
                    tex_coords: [m.mesh.texcoords[i * 2], m.mesh.texcoords[i * 2 + 1]],
                    normal: [
                        m.mesh.normals[i * 3],
                        m.mesh.normals[i * 3 + 1],
                        m.mesh.normals[i * 3 + 2],
                    ],
                });
            }

            let vertex_buffer = device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
                label: Some(&format!("{:?} Vertex Buffer", path.as_ref())),
                contents: bytemuck::cast_slice(&vertices),
                usage: wgpu::BufferUsages::VERTEX,
            });
            let index_buffer = device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
                label: Some(&format!("{:?} Index Buffer", path.as_ref())),
                contents: bytemuck::cast_slice(&m.mesh.indices),
                usage: wgpu::BufferUsages::INDEX,
            });

            meshes.push(Mesh {
                name: m.name,
                vertex_buffer,
                index_buffer,
                num_elements: m.mesh.indices.len() as u32,
                material: m.mesh.material_id.unwrap_or(0),
            });
        }

        Ok(Self { meshes, materials })
    }
}
```
## Rendering a mesh
Before we can draw the model, we need to be able to draw an individual mesh. Let's create a trait called `DrawModel`, and implement it for `RenderPass`.
```rust
pub trait DrawModel<'a> {
    fn draw_mesh(&mut self, mesh: &'a Mesh);
    fn draw_mesh_instanced(
        &mut self,
        mesh: &'a Mesh,
        instances: Range<u32>,
    );
}

impl<'a, 'b> DrawModel<'b> for wgpu::RenderPass<'a>
where
    'b: 'a,
{
    fn draw_mesh(&mut self, mesh: &'b Mesh) {
        self.draw_mesh_instanced(mesh, 0..1);
    }

    fn draw_mesh_instanced(
        &mut self,
        mesh: &'b Mesh,
        instances: Range<u32>,
    ) {
        self.set_vertex_buffer(0, mesh.vertex_buffer.slice(..));
        self.set_index_buffer(mesh.index_buffer.slice(..), wgpu::IndexFormat::Uint32);
        self.draw_indexed(0..mesh.num_elements, 0, instances);
    }
}
```
We could have put these methods in `impl Model`, but I felt it made more sense to have the `RenderPass` do all the rendering, as that's kind of its job. This does mean we have to import `DrawModel` when we go to render, though.
```rust
// main.rs
render_pass.set_vertex_buffer(1, self.instance_buffer.slice(..));
render_pass.set_pipeline(&self.render_pipeline);
render_pass.set_bind_group(0, &self.diffuse_bind_group, &[]);
render_pass.set_bind_group(1, &self.camera_bind_group, &[]);
use model::DrawModel;
render_pass.draw_mesh_instanced(&self.obj_model.meshes[0], 0..self.instances.len() as u32);
```
Before that, though, we need to actually load the model and save it to `State`. Put the following in `State::new()`.
```rust
let res_dir = std::path::Path::new(env!("OUT_DIR")).join("res");
let obj_model = model::Model::load(
    &device,
    &queue,
    &texture_bind_group_layout,
    res_dir.join("cube.obj"),
).unwrap();
```
<div class="note">
We're using `OUT_DIR` here to get at our `res` folder.
</div>
Our new model is a bit bigger than our previous one, so we're gonna need to adjust the spacing on our instances a bit. (With `NUM_INSTANCES_PER_ROW` at 10, as in the instancing tutorial, each axis now runs from `3.0 * -5.0 = -15.0` to `3.0 * 4.0 = 12.0`.)
```rust
const SPACE_BETWEEN: f32 = 3.0;
let instances = (0..NUM_INSTANCES_PER_ROW).flat_map(|z| {
    (0..NUM_INSTANCES_PER_ROW).map(move |x| {
        let x = SPACE_BETWEEN * (x as f32 - NUM_INSTANCES_PER_ROW as f32 / 2.0);
        let z = SPACE_BETWEEN * (z as f32 - NUM_INSTANCES_PER_ROW as f32 / 2.0);

        let position = cgmath::Vector3 { x, y: 0.0, z };

        let rotation = if position.is_zero() {
            cgmath::Quaternion::from_axis_angle(cgmath::Vector3::unit_z(), cgmath::Deg(0.0))
        } else {
            cgmath::Quaternion::from_axis_angle(position.normalize(), cgmath::Deg(45.0))
        };

        Instance {
            position, rotation,
        }
    })
}).collect::<Vec<_>>();
```
With all that done, you should get something like this.
![cubes.png](./cubes.png)
## Using the correct textures
If you look at the texture files for our obj, you'll see that they don't match what's on screen. The texture we want to see is this one,
![cube-diffuse.jpg](./cube-diffuse.jpg)
but we're still getting our happy tree texture.
The reason for this is quite simple. Though we've created our textures, we haven't given their bind groups to the `RenderPass`; we're still using our old `diffuse_bind_group`. If we want to change that, we need to use the bind group from our materials: the `bind_group` member of the `Material` struct.
We're going to add a material parameter to `DrawModel`.
```rust
pub trait DrawModel<'a> {
    fn draw_mesh(&mut self, mesh: &'a Mesh, material: &'a Material, camera_bind_group: &'a wgpu::BindGroup);
    fn draw_mesh_instanced(
        &mut self,
        mesh: &'a Mesh,
        material: &'a Material,
        instances: Range<u32>,
        camera_bind_group: &'a wgpu::BindGroup,
    );
}

impl<'a, 'b> DrawModel<'b> for wgpu::RenderPass<'a>
where
    'b: 'a,
{
    fn draw_mesh(&mut self, mesh: &'b Mesh, material: &'b Material, camera_bind_group: &'b wgpu::BindGroup) {
        self.draw_mesh_instanced(mesh, material, 0..1, camera_bind_group);
    }

    fn draw_mesh_instanced(
        &mut self,
        mesh: &'b Mesh,
        material: &'b Material,
        instances: Range<u32>,
        camera_bind_group: &'b wgpu::BindGroup,
    ) {
        self.set_vertex_buffer(0, mesh.vertex_buffer.slice(..));
        self.set_index_buffer(mesh.index_buffer.slice(..), wgpu::IndexFormat::Uint32);
        self.set_bind_group(0, &material.bind_group, &[]);
        self.set_bind_group(1, camera_bind_group, &[]);
        self.draw_indexed(0..mesh.num_elements, 0, instances);
    }
}
```
We need to change the render code to reflect this.
```rust
render_pass.set_vertex_buffer(1, self.instance_buffer.slice(..));
render_pass.set_pipeline(&self.render_pipeline);
let mesh = &self.obj_model.meshes[0];
let material = &self.obj_model.materials[mesh.material];
render_pass.draw_mesh_instanced(mesh, material, 0..self.instances.len() as u32, &self.camera_bind_group);
```
With all that in place we should get the following.
![cubes-correct.png](./cubes-correct.png)
## Rendering the entire model
Right now, we are specifying the mesh and the material directly. This is useful if we want to draw a mesh with a different material. We're also not rendering other parts of the model (if it had any). Let's create a method for `DrawModel` that will draw all the parts of the model with their respective materials.
```rust
pub trait DrawModel<'a> {
    // ...
    fn draw_model(&mut self, model: &'a Model, camera_bind_group: &'a wgpu::BindGroup);
    fn draw_model_instanced(
        &mut self,
        model: &'a Model,
        instances: Range<u32>,
        camera_bind_group: &'a wgpu::BindGroup,
    );
}

impl<'a, 'b> DrawModel<'b> for wgpu::RenderPass<'a>
where
    'b: 'a,
{
    // ...
    fn draw_model(&mut self, model: &'b Model, camera_bind_group: &'b wgpu::BindGroup) {
        self.draw_model_instanced(model, 0..1, camera_bind_group);
    }

    fn draw_model_instanced(
        &mut self,
        model: &'b Model,
        instances: Range<u32>,
        camera_bind_group: &'b wgpu::BindGroup,
    ) {
        for mesh in &model.meshes {
            let material = &model.materials[mesh.material];
            self.draw_mesh_instanced(mesh, material, instances.clone(), camera_bind_group);
        }
    }
}
```
The code in `main.rs` will change accordingly.
```rust
render_pass.set_vertex_buffer(1, self.instance_buffer.slice(..));
render_pass.set_pipeline(&self.render_pipeline);
render_pass.draw_model_instanced(&self.obj_model, 0..self.instances.len() as u32, &self.camera_bind_group);
```
<AutoGithubLink/>