Some of you reading this are very experienced with opening up windows in Rust and probably have your favorite windowing library, but this guide is designed for everybody, so it's something that we need to cover. Luckily, you don't need to read this if you know what you're doing. One thing that you do need to know is that whatever windowing solution you use needs to support the [raw-window-handle](https://github.com/rust-windowing/raw-window-handle) crate.
## What crates are we using?
For the beginner stuff, we're going to keep things very simple. We'll add things as we go, but I've listed the relevant `Cargo.toml` bits below.
``` toml
wgpu = "0.5.0"
futures = "0.3.4"
```
If you're on Windows, you can specify Vulkan as your desired backend instead of DirectX by removing the `wgpu = "0.5.0"` and adding the following.
``` toml
[dependencies.wgpu]
version = "0.5.0"
features = ["vulkan"]
```

There's not much going on here yet, so I'm just going to post the code in full.
```rust
use winit::{
    event::*,
    event_loop::{EventLoop, ControlFlow},
    window::{Window, WindowBuilder},
};

fn main() {
    let event_loop = EventLoop::new();
    let window = WindowBuilder::new()
        .build(&event_loop)
        .unwrap();

    event_loop.run(move |event, _, control_flow| {
        *control_flow = ControlFlow::Poll;
        match event {
            Event::WindowEvent {
                ref event,
                window_id,
            } if window_id == window.id() => match event {
                WindowEvent::CloseRequested => *control_flow = ControlFlow::Exit,
                WindowEvent::KeyboardInput { input, .. } => match input {
                    KeyboardInput {
                        state: ElementState::Pressed,
                        virtual_keycode: Some(VirtualKeyCode::Escape),
                        ..
                    } => *control_flow = ControlFlow::Exit,
                    _ => {}
                },
                _ => {}
            },
            _ => {}
        }
    });
}
```
All this does is create a window, and keep it open until the user closes it or presses escape. Next tutorial we'll actually start using wgpu!
For convenience we're going to pack all the fields into a struct, and create some methods on that.
```rust
struct State {
    surface: wgpu::Surface,
    device: wgpu::Device,
    queue: wgpu::Queue,
    sc_desc: wgpu::SwapChainDescriptor,
    swap_chain: wgpu::SwapChain,
    size: winit::dpi::PhysicalSize<u32>,
}
```
I'm glossing over `State`'s fields, but they'll make more sense as I explain the code behind the methods.
## State::new()
The code for this is pretty straightforward, but let's break it down a bit.
```rust
let mut state = block_on(State::new(&window));
```
## resize()
If we want to support resizing in our application, we're going to need to recreate the `swap_chain` every time the window's size changes. That's the reason we stored the physical `size` and the `sc_desc` used to create the swapchain. With all of these, the resize method is very simple.
// RedrawRequested will only trigger once, unless we manually request it.
Some of you may be able to tell what's going on just by looking at it, but I'd be remiss if I didn't go over it.
A `RenderPassDescriptor` only has two fields: `color_attachments` and `depth_stencil_attachment`. The `color_attachments` describe where we are going to draw our color to.
We'll use `depth_stencil_attachment` later, but we'll set it to `None` for now.
If you're familiar with OpenGL, you may remember using shader programs. You can think of a pipeline as a more robust version of that.
Shaders are mini programs that you send to the gpu to perform operations on your data. There are 3 main types of shader: vertex, fragment, and compute. There are others, such as geometry shaders, but they're more of an advanced topic. For now we're just going to use vertex and fragment shaders.
## Vertex, fragment... what are those?
A vertex is a point in 3d space (can also be 2d). These vertices are then bundled in groups of 2s to form lines and/or 3s to form triangles.
<img src="./tutorial3-pipeline-vertices.png"/>
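As a plain-Rust sketch of that bundling (no wgpu involved; `Vertex2` and `triangles` are illustrative names, not tutorial code), grouping a flat vertex list into triangles might look like:

```rust
// A vertex as a point in 2d space; 3d would just add a z component.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Vertex2 {
    x: f32,
    y: f32,
}

// Bundle vertices in groups of 3 to form triangles, dropping any leftovers.
fn triangles(vertices: &[Vertex2]) -> Vec<[Vertex2; 3]> {
    vertices
        .chunks_exact(3)
        .map(|c| [c[0], c[1], c[2]])
        .collect()
}

fn main() {
    let verts = [
        Vertex2 { x: 0.0, y: 0.5 },
        Vertex2 { x: -0.5, y: -0.5 },
        Vertex2 { x: 0.5, y: -0.5 },
    ];
    // 3 vertices form exactly one triangle
    println!("{}", triangles(&verts).len());
}
```

Grouping in 2s for lines would be the same idea with `chunks_exact(2)`.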
Now let's move to the `new()` method, and start making the pipeline. We'll have to load in those shaders we made, as the `render_pipeline` requires those.
```rust
let vs_src = include_str!("shader.vert");
let fs_src = include_str!("shader.frag");
let mut compiler = shaderc::Compiler::new().unwrap();
let vs_spirv = compiler.compile_into_spirv(vs_src, shaderc::ShaderKind::Vertex, "shader.vert", "main", None).unwrap();
let fs_spirv = compiler.compile_into_spirv(fs_src, shaderc::ShaderKind::Fragment, "shader.frag", "main", None).unwrap();
let vs_data = wgpu::read_spirv(std::io::Cursor::new(vs_spirv.as_binary_u8())).unwrap();
let fs_data = wgpu::read_spirv(std::io::Cursor::new(fs_spirv.as_binary_u8())).unwrap();
let vs_module = device.create_shader_module(&vs_data);
let fs_module = device.create_shader_module(&fs_data);
```
```rust
let render_pipeline = device.create_render_pipeline(&wgpu::RenderPipelineDescriptor {
    // ...
    vertex_stage: wgpu::ProgrammableStageDescriptor {
        module: &vs_module,
        entry_point: "main",
    },
    fragment_stage: Some(wgpu::ProgrammableStageDescriptor {
        module: &fs_module,
        entry_point: "main",
    }),
```
Two things to note here:
1. You can specify an `entry_point` for your shaders. I normally use `"main"` as that's what it would be in OpenGL, but feel free to use whatever name you like.
2. The `fragment_stage` is technically optional, so you have to wrap it in `Some()`. I've never used a vertex shader without a fragment shader, but the option is available if you need it.
Our vertices will all have a position and a color. The position represents the x, y, and z of the vertex in 3d space. The color is the red, green, and blue values for the vertex. We need the `Vertex` to be copyable so we can create a buffer with it.
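One plausible layout for such a type, as a self-contained sketch (field names follow the description above; `main` just prints the byte size a buffer copy would need):

```rust
// repr(C) keeps the field order and layout predictable so the raw bytes
// can be copied into a GPU buffer; Copy/Clone make the type bufferable.
#[repr(C)]
#[derive(Debug, Copy, Clone)]
struct Vertex {
    position: [f32; 3], // x, y, z in 3d space
    color: [f32; 3],    // red, green, blue
}

fn main() {
    // 6 f32 fields -> 24 bytes per vertex
    println!("{}", std::mem::size_of::<Vertex>());
}
```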
Next we need the actual data that will make up our triangle. Below `Vertex` add the following.
```rust
let diffuse_sampler = device.create_sampler(&wgpu::SamplerDescriptor {
    // ...
    mipmap_filter: wgpu::FilterMode::Nearest,
    lod_min_clamp: -100.0,
    lod_max_clamp: 100.0,
    compare: wgpu::CompareFunction::Always,
});
```
Mipmaps are a complex topic, and will require [their own section](/todo). Suffice to say that `mipmap_filter` tells the sampler how to blend between mipmap levels.
`lod_(min/max)_clamp` are also related to mipmapping, so we'll skip over them.
`compare` is often used in filtering. This is used in techniques such as [shadow mapping](/todo). We don't really care here, but the options are `Never`, `Less`, `Equal`, `LessEqual`, `Greater`, `NotEqual`, `GreaterEqual`, and `Always`.
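Conceptually, each of those options compares a reference depth against the stored texel depth and yields pass/fail. A dependency-free sketch (the enum mirrors `wgpu::CompareFunction`'s variant names; the `compare` function here is my own illustration, not wgpu code):

```rust
#[derive(Clone, Copy)]
enum CompareFunction {
    Never,
    Less,
    Equal,
    LessEqual,
    Greater,
    NotEqual,
    GreaterEqual,
    Always,
}

// Conceptually what a comparison sampler computes per texel: compare the
// reference depth against the stored depth and return pass/fail.
fn compare(func: CompareFunction, reference: f32, stored: f32) -> bool {
    match func {
        CompareFunction::Never => false,
        CompareFunction::Less => reference < stored,
        CompareFunction::Equal => reference == stored,
        CompareFunction::LessEqual => reference <= stored,
        CompareFunction::Greater => reference > stored,
        CompareFunction::NotEqual => reference != stored,
        CompareFunction::GreaterEqual => reference >= stored,
        CompareFunction::Always => true,
    }
}

fn main() {
    // In shadow mapping, Less means "lit when the fragment is closer
    // than the occluder recorded in the shadow map".
    println!("{}", compare(CompareFunction::Less, 0.3, 0.5));
}
```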
All these different resources are nice and all, but they don't do us much good if we can't plug them in anywhere. This is where `BindGroup`s and `PipelineLayout`s come in.
If we run our program now we should get the following result.
![an upside down tree on a hexagon](./upside-down.png)
That's weird, our tree is upside down! This is because wgpu's coordinate system has positive y values going down while texture coords have y as up.
The `build_view_projection_matrix` is where the magic happens.
1. The `view` matrix moves the world to be at the position and rotation of the camera. It's essentially an inverse of whatever the transform matrix of the camera would be.
2. The `proj` matrix warps the scene to give the effect of depth. Without this, objects up close would be the same size as objects far away.
3. The coordinate system in Wgpu is based on DirectX and Metal's coordinate systems. That means that in [normalized device coordinates](https://github.com/gfx-rs/gfx/tree/master/src/backend/dx12#normalized-coordinates) the x axis and y axis are in the range of -1.0 to +1.0, and the z axis is 0.0 to +1.0. The `cgmath` crate (as well as most game math crates) is built for OpenGL's coordinate system. This matrix will scale and translate our scene from OpenGL's coordinate system to wgpu's. We'll define it as follows.
```rust
#[rustfmt::skip]
pub const OPENGL_TO_WGPU_MATRIX: cgmath::Matrix4<f32> = cgmath::Matrix4::new(
    1.0, 0.0, 0.0, 0.0,
    0.0, 1.0, 0.0, 0.0,
    0.0, 0.0, 0.5, 0.0,
    0.0, 0.0, 0.5, 1.0,
);
```

Now that we have our camera, and it can make us a view projection matrix, we need somewhere to put it.
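As an aside, the z-axis part of that OpenGL-to-wgpu conversion can be checked with a tiny dependency-free sketch (the function name is mine; it's just the third row of the correction matrix applied to a depth value):

```rust
// Map an OpenGL NDC depth (-1.0..=1.0) into wgpu/DirectX NDC depth (0.0..=1.0).
// This is what the correction matrix does to z: z' = 0.5 * z + 0.5.
fn opengl_z_to_wgpu_z(z: f32) -> f32 {
    0.5 * z + 0.5
}

fn main() {
    println!("{}", opengl_z_to_wgpu_z(-1.0)); // near plane: -1.0 -> 0.0
    println!("{}", opengl_z_to_wgpu_z(1.0));  // far plane:   1.0 -> 1.0
}
```

The x and y axes are left alone, which is why the top-left of the matrix is the identity.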
## The uniform buffer
Up to this point we've used `Buffer`s to store our vertex and index data, and even to load our textures. We are going to use them again to create what's known as a uniform buffer. A uniform is a blob of data that is available to every invocation of a set of shaders. We've technically already used uniforms for our texture and sampler. We're going to use them again to store our view projection matrix. To start let's create a struct to hold our `Uniforms`.
```rust
#[repr(C)] // We need this for Rust to store our data correctly for the shaders
#[derive(Debug, Copy, Clone)] // This is so we can store this in a buffer
struct Uniforms {
    view_proj: cgmath::Matrix4<f32>,
}
```

Have our model rotate on its own independently of the camera. *Hint: you'll need another matrix for this.*
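A uniform struct like the one above is ultimately just a blob of bytes copied into a buffer. A dependency-free sketch of the byte cast that crates like `bytemuck` perform (illustrative only; `as_bytes` is my own helper, and the matrix is plain arrays instead of `cgmath`):

```rust
#[repr(C)]
#[derive(Copy, Clone)]
struct Uniforms {
    view_proj: [[f32; 4]; 4], // a 4x4 matrix as plain floats
}

// View a Copy + repr(C) value as its raw bytes, similar in spirit to
// bytemuck::bytes_of. Sound here because [[f32; 4]; 4] has no padding.
fn as_bytes<T: Copy>(value: &T) -> &[u8] {
    unsafe {
        std::slice::from_raw_parts(value as *const T as *const u8, std::mem::size_of::<T>())
    }
}

fn main() {
    let uniforms = Uniforms { view_proj: [[0.0; 4]; 4] };
    // 16 f32s -> 64 bytes to copy into the uniform buffer
    println!("{}", as_bytes(&uniforms).len());
}
```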
Up to this point we've been drawing just one object. Most games have hundreds of objects on screen at the same time. If we wanted to draw multiple instances of our model, we could copy the vertex buffer and modify its vertices to be in the right place, but this would be hilariously inefficient. We have our model, and we know how to position it in 3d space with a matrix, like we did with the camera, so all we have to do is change the matrix we're using when we draw.
## The naive method
```rust
struct Instance {
    position: cgmath::Vector3<f32>,
    rotation: cgmath::Quaternion<f32>,
}

impl Instance {
    fn to_matrix(&self) -> cgmath::Matrix4<f32> {
        cgmath::Matrix4::from_translation(self.position)
            * cgmath::Matrix4::from(self.rotation)
    }
}
```
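The translation part of that composition can be sanity-checked with a dependency-free sketch (plain arrays standing in for `cgmath` types, and the rotation left out as identity):

```rust
// A 4x4 translation matrix in row-major order.
fn translation(tx: f32, ty: f32, tz: f32) -> [[f32; 4]; 4] {
    [
        [1.0, 0.0, 0.0, tx],
        [0.0, 1.0, 0.0, ty],
        [0.0, 0.0, 1.0, tz],
        [0.0, 0.0, 0.0, 1.0],
    ]
}

// Apply the matrix to a point; an implicit w = 1.0 picks up the translation.
fn apply(m: &[[f32; 4]; 4], p: [f32; 3]) -> [f32; 3] {
    let mut out = [0.0; 3];
    for row in 0..3 {
        out[row] = m[row][0] * p[0] + m[row][1] * p[1] + m[row][2] * p[2] + m[row][3];
    }
    out
}

fn main() {
    let m = translation(10.0, 0.0, -5.0);
    // The point is moved by (10, 0, -5)
    println!("{:?}", apply(&m, [1.0, 2.0, 3.0]));
}
```

With a real rotation, multiplying translation * rotation (as `to_matrix` does) rotates first, then translates.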
Next we'll add `instances: Vec<Instance>,` to `State` and create our instances with the following in `new()`.
```rust
// ...
```

```glsl
layout(location=1) in vec2 a_tex_coords;

layout(location=0) out vec2 v_tex_coords;

layout(set=1, binding=0)
uniform Uniforms {
    mat4 u_view_proj;
    mat4 u_model; // NEW!
};
```
```glsl
layout(location=1) in vec2 a_tex_coords;

layout(location=0) out vec2 v_tex_coords;

layout(set=1, binding=0)
uniform Uniforms {
    mat4 u_view_proj;
    mat4 u_model[100];
};
```
Since we're using `bytemuck` for casting our data to `&[u8]`, we're going to need our types to implement `bytemuck::Pod` and `bytemuck::Zeroable`.
We create a storage buffer in a similar way to any other buffer.
```glsl
layout(location=0) in vec3 a_position;
layout(location=1) in vec2 a_tex_coords;

layout(location=0) out vec2 v_tex_coords;

layout(set=1, binding=0)
uniform Uniforms {
    mat4 u_view_proj;
};

layout(set=1, binding=1)
buffer Instances {
    mat4 s_models[];
};

void main() {
    v_tex_coords = a_tex_coords;
    // gl_InstanceIndex selects this instance's model matrix
    gl_Position = u_view_proj * s_models[gl_InstanceIndex] * vec4(a_position, 1.0);
}
```
You can see that we got rid of the `u_model` field from the `Uniforms` block and created a new `Instances` block located at `set=1, binding=1`, corresponding with our bind group layout. Another thing to notice is that we use the `buffer` keyword for the block instead of `uniform`. The details of `buffer` blocks can be found on [the OpenGL wiki](https://www.khronos.org/opengl/wiki/Shader_Storage_Buffer_Object).
This method is nice because it allows us to store more data overall, as storage buffers can theoretically store as much data as the GPU can handle, whereas uniform buffers are capped. This does mean that storage buffers are slower than uniform buffers, as they are stored like other buffers (such as textures) and therefore aren't as close in memory, but that usually won't matter much if you're dealing with large amounts of data.
Another benefit to storage buffers is that they can be written to by the shader, unlike uniform buffers. If we want to mutate a large amount of data with a compute shader, we'd use a writeable storage buffer for our output (and potentially input as well).
Let's unpack this a bit.
2. Vertex attributes have a limited size: `Float4` or the equivalent. This means that our instance buffer will take up multiple attribute slots. 4 in our case.
3. Since we're using 2 slots for our `Vertex` struct, we need to start the `shader_location` at 2.
Now we need to add a `VertexBufferDescriptor` to our `render_pipeline`.
```rust
let render_pipeline = device.create_render_pipeline(&wgpu::RenderPipelineDescriptor {
    // ...
```

```glsl
layout(location=2) in mat4 a_model; // NEW!
```
Make sure you update the `depth_texture` *after* you update `sc_desc`. If you don't, your program will crash as the `depth_texture` will be a different size than the `swap_chain` texture.
The last change we need to make is in the `render()` function. We've created the `depth_texture`, but we're not currently using it. We use it by attaching it to the `depth_stencil_attachment` of a render pass.