docs for tutorials 1-9: typos, references to main.rs supposed to be to lib.rs, and readability changes

commit 33bd66e368 (parent d99a4f7d15)
@@ -15,12 +15,12 @@ wgpu = "0.12"

## Using Rust's new resolver

As of version 0.10, wgpu requires cargo's [newest feature resolver](https://doc.rust-lang.org/cargo/reference/resolver.html#feature-resolver-version-2), which is the default in the 2021 edition (any new project started with Rust version 1.56.0 or newer). However, if you are still using the 2018 edition, you must include `resolver = "2"` in either the `[package]` section of `Cargo.toml` if you are working on a single crate, or the `[workspace]` section of the root `Cargo.toml` in a workspace.
## env_logger

It is very important to enable logging via `env_logger::init();`.

When wgpu hits any error, it panics with a generic message, while logging the real error via the log crate.

This means if you don't include `env_logger::init()`, wgpu will fail silently, leaving you very confused!
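
As a minimal sketch (assuming the `run()` function defined below), the call just needs to happen once, before any wgpu code runs:

```rust
fn main() {
    // Set up logging first so wgpu's real error messages are visible
    // instead of just a generic panic.
    env_logger::init();
    run();
}
```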
## The code

There's not much going on here yet, so I'm just going to post the code in full. Just paste this into your `lib.rs` or equivalent.

@@ -60,7 +60,7 @@ pub fn run() {

```rust
// ...
```

All this does is create a window and keep it open until the user closes it or presses escape. Next, we'll need a `main.rs` to run the code. It's quite simple: it just imports `run()` and, well, runs it!

@@ -70,30 +70,30 @@ fn main() {

```rust
use tutorial1_window::run;

fn main() {
    run();
}
```
If you only want to support desktops, that's all you have to do! In the next tutorial, we'll start using wgpu!

## Added support for the web

If I go through this tutorial about WebGPU and never talk about using it on the web, then I'd hardly call this tutorial complete. Fortunately, getting a wgpu application running in a browser is not too difficult once you get things set up.

Let's start with the changes we need to make to our `Cargo.toml`:
```toml
[lib]
crate-type = ["cdylib", "rlib"]
```

These lines tell cargo that we want to allow our crate to build a native Rust static library (rlib) and a C/C++ compatible library (cdylib). We need rlib if we want to run wgpu in a desktop environment. We need cdylib to create the Web Assembly that the browser will run.
<div class="note">

## Web Assembly

Web Assembly, i.e. WASM, is a binary format supported by most modern browsers that allows lower-level languages such as Rust to run on a web page. This allows us to write the bulk of our application in Rust and use a few lines of Javascript to get it running in a web browser.

</div>

Now, all we need are some more dependencies that are specific to running in WASM:
```toml
[dependencies]
# ...
```

@@ -112,29 +112,29 @@ web-sys = { version = "0.3", features = [

```toml
]}
```

The [cfg-if](https://docs.rs/cfg-if) crate adds a macro that makes using platform-specific code more manageable.

The `[target.'cfg(target_arch = "wasm32")'.dependencies]` line tells cargo to only include these dependencies if we are targeting the `wasm32` architecture. The next few dependencies just make interfacing with javascript a lot easier.
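
For illustration, a hedged sketch of `cfg_if!` in action (the crate and macro are real; `platform_name` is a made-up example, not tutorial code):

```rust
use cfg_if::cfg_if;

cfg_if! {
    // Only the branch matching the compile target is emitted.
    if #[cfg(target_arch = "wasm32")] {
        fn platform_name() -> &'static str { "web" }
    } else {
        fn platform_name() -> &'static str { "native" }
    }
}

fn main() {
    println!("Running on: {}", platform_name());
}
```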
* [console_error_panic_hook](https://docs.rs/console_error_panic_hook) configures the `panic!` macro to send errors to the javascript console. Without this, when you encounter panics, you'll be left in the dark about what caused them.
* [console_log](https://docs.rs/console_log) implements the [log](https://docs.rs/log) API. It sends all logs to the javascript console. It can be configured to only send logs of a particular log level. This is also great for debugging.
* We need to enable the WebGL feature on wgpu if we want to run on most current browsers. Support is in the works for using the WebGPU api directly, but that is only possible on experimental versions of browsers such as Firefox Nightly and Chrome Canary.<br>
You're welcome to test this code on these browsers (and the wgpu devs would appreciate it as well), but for the sake of simplicity, I'm going to stick to using the WebGL feature until the WebGPU api gets to a more stable state.<br>
If you want more details, check out the guide for compiling for the web on [wgpu's repo](https://github.com/gfx-rs/wgpu/wiki/Running-on-the-Web-with-WebGPU-and-WebGL).
* [wasm-bindgen](https://docs.rs/wasm-bindgen) is the most important dependency in this list. It's responsible for generating the boilerplate code that will tell the browser how to use our crate. It also allows us to expose methods in Rust that can be used in Javascript, and vice-versa (see the sketch after this list).<br>
I won't get into the specifics of wasm-bindgen, so if you need a primer (or just a refresher) check out [this](https://rustwasm.github.io/wasm-bindgen/).
* [web-sys](https://docs.rs/web-sys) is a crate that includes many methods and structures that are available in a normal javascript application: `get_element_by_id`, `append_child`. The features listed are only the bare minimum of what we need currently.
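
For example, a hedged sketch of exposing a Rust function to Javascript (the attribute is wasm-bindgen's; `greet` is a hypothetical function, not part of the tutorial):

```rust
use wasm_bindgen::prelude::*;

// The generated bindings make this callable from Javascript as `greet(...)`.
#[wasm_bindgen]
pub fn greet(name: &str) -> String {
    format!("Hello, {}!", name)
}
```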
## More code

First, we need to import `wasm-bindgen` in `lib.rs`:

```rust
#[cfg(target_arch="wasm32")]
use wasm_bindgen::prelude::*;
```
Next, we need to tell wasm-bindgen to run our `run()` function when the WASM is loaded:

```rust
#[cfg_attr(target_arch="wasm32", wasm_bindgen(start))]
```

@@ -149,16 +149,16 @@ Then we need to toggle what logger we are using based on if we are in WASM land
```rust
cfg_if::cfg_if! {
    if #[cfg(target_arch = "wasm32")] {
        std::panic::set_hook(Box::new(console_error_panic_hook::hook));
        console_log::init_with_level(log::Level::Warn).expect("Couldn't initialize logger");
    } else {
        env_logger::init();
    }
}
```

This will set up `console_log` and `console_error_panic_hook` in a web build, and will initialize `env_logger` in a normal build. This is important as `env_logger` doesn't support Web Assembly at the moment.

Next, after we create our event loop and window, we need to add a canvas to the HTML document that will host our application:
```rust
#[cfg(target_arch = "wasm32")]
```

@@ -183,21 +183,21 @@ Next, after we create our event loop and window, we need to add an canvas to the
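
The body of that block is elided in this diff. As a hedged sketch of the usual approach (winit's `WindowExtWebSys` plus web-sys; details may differ slightly from the tutorial's exact code):

```rust
#[cfg(target_arch = "wasm32")]
{
    // On the web, winit exposes the window's backing canvas.
    use winit::platform::web::WindowExtWebSys;
    web_sys::window()
        .and_then(|win| win.document())
        .and_then(|doc| {
            let dst = doc.get_element_by_id("wasm-example")?;
            let canvas = web_sys::Element::from(window.canvas());
            dst.append_child(&canvas).ok()?;
            Some(())
        })
        .expect("Couldn't append canvas to document body.");
}
```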
<div class="note">

The `"wasm-example"` id is specific to my project (aka. this tutorial). You can substitute this for whatever id you're using in your HTML. Alternatively, you could add the canvas directly to the `<body>` as they do in the wgpu repo. This part is ultimately up to you.

</div>

That's all the web-specific code we need for now. The next thing we need to do is build the Web Assembly itself.
## Wasm Pack

Now you can build a wgpu application with just wasm-bindgen, but I ran into some issues doing that. For one, you need to install wasm-bindgen on your computer as well as include it as a dependency. The version you install as a dependency **needs** to exactly match the version you installed; otherwise, your build will fail.

To get around this shortcoming, and to make the lives of everyone reading this easier, I opted to add [wasm-pack](https://rustwasm.github.io/docs/wasm-pack/) to the mix. Wasm-pack handles installing the correct version of wasm-bindgen for you, and it supports building for different types of web targets as well: browser, NodeJS, and bundlers such as webpack.

To use wasm-pack, first, you need to [install it](https://rustwasm.github.io/wasm-pack/installer/).

Once you've done that, we can use it to build our crate. If you only have one crate in your project, you can just use `wasm-pack build`. If you're using a workspace, you'll have to specify what crate you want to build. Imagine your crate is a directory called `game`; you would use:

```bash
wasm-pack build game
```

@@ -220,7 +220,7 @@ If you intend to use your WASM module in a plain HTML website, you'll need to te

```bash
wasm-pack build --target web
```

You'll then need to run the WASM code in an ES6 Module:

@@ -234,17 +234,17 @@ You'll then need run the WASM code in an ES6 Module:

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- head contents elided in this diff -->
  </head>
  <body>
    <script type="module">
      import init from "./pkg/pong.js";
      init().then(() => {
        console.log("WASM Loaded");
      });
    </script>
    <style>
      canvas {
        background-color: black;
      }
    </style>
  </body>
</html>
```

@@ -1,10 +1,10 @@

# The Surface

## First, some housekeeping: State

For convenience, we're going to pack all the fields into a struct and create some methods for it.
```rust
// lib.rs
use winit::window::Window;

struct State {
    // ...
}
```

@@ -39,10 +39,10 @@ impl State {

```rust
    // ...
}
```
I'm glossing over `State`'s fields, but they'll make more sense as I explain the code behind these methods.

## State::new()

The code for this is pretty straightforward, but let's break it down a bit.
```rust
impl State {
    // ...
}
```

@@ -70,7 +70,7 @@ is to create `Adapter`s and `Surface`s.
The `adapter` is a handle to our actual graphics card. You can use this to get information about the graphics card, such as its name and what backend the adapter uses. We use this to create our `Device` and `Queue` later. Let's discuss the fields of `RequestAdapterOptions`.

* `power_preference` has two variants: `LowPower` and `HighPerformance`. `LowPower` will pick an adapter that favors battery life, such as an integrated GPU. `HighPerformance` will pick an adapter for more power-hungry yet more performant GPUs, such as a dedicated graphics card. WGPU will favor `LowPower` if there is no adapter for the `HighPerformance` option.
* The `compatible_surface` field tells wgpu to find an adapter that can present to the supplied surface.
* The `force_fallback_adapter` forces wgpu to pick an adapter that will work on all hardware. This usually means that the rendering backend will use a "software" system, instead of hardware such as a GPU. A sketch of the request follows this list.
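
A hedged sketch of the request itself (the field values mirror the list above; `instance` and `surface` are assumed from earlier in `new()`):

```rust
let adapter = instance
    .request_adapter(&wgpu::RequestAdapterOptions {
        power_preference: wgpu::PowerPreference::default(),
        // Guarantee the adapter can present to our window's surface.
        compatible_surface: Some(&surface),
        force_fallback_adapter: false,
    })
    .await
    .unwrap();
```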

@@ -91,7 +91,7 @@ let adapter = instance

Another thing to note is that `Adapter`s are locked to a specific backend. If you are on Windows and have 2 graphics cards, you'll have at least 4 adapters available to use: 2 Vulkan and 2 DirectX.

For more fields you can use to refine your search, [check out the docs](https://docs.rs/wgpu/latest/wgpu/struct.Adapter.html).

</div>

@@ -125,7 +125,7 @@ The `features` field on `DeviceDescriptor`, allows us to specify what extra feat

<div class="note">

The graphics card you have limits the features you can use. If you want to use certain features, you may need to limit what devices you support or provide workarounds.

You can get a list of features supported by your device using `adapter.features()` or `device.features()`.

@@ -146,7 +146,7 @@ The `limits` field describes the limit of certain types of resources that we can

```rust
surface.configure(&device, &config);
```

Here we are defining a config for our surface. This will define how the surface creates its underlying `SurfaceTexture`s. We will talk about `SurfaceTexture` when we get to the `render` function. For now, let's talk about the config's fields.

The `usage` field describes how `SurfaceTexture`s will be used. `RENDER_ATTACHMENT` specifies that the textures will be used to write to the screen (we'll talk about more `TextureUsages` later).

@@ -158,7 +158,7 @@ The `format` defines how `SurfaceTexture`s will be stored on the gpu. Different

Make sure that the width and height of the `SurfaceTexture` are not 0, as that can cause your app to crash.

</div>

`present_mode` uses the `wgpu::PresentMode` enum, which determines how to sync the surface with the display. The option we picked, `FIFO`, will cap the display rate at the display's framerate. This is essentially VSync. This is also the most optimal mode on mobile. There are other options, and you can see all of them [in the docs](https://docs.rs/wgpu/latest/wgpu/enum.PresentMode.html).
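
Putting the fields together, a hedged sketch of the configuration this section describes (`size` is the window's inner size from earlier; the API is wgpu 0.12's):

```rust
let config = wgpu::SurfaceConfiguration {
    // SurfaceTextures will be written to the screen.
    usage: wgpu::TextureUsages::RENDER_ATTACHMENT,
    // Match the format the adapter prefers for this surface.
    format: surface.get_preferred_format(&adapter).unwrap(),
    width: size.width,
    height: size.height,
    // FIFO: essentially VSync.
    present_mode: wgpu::PresentMode::Fifo,
};
surface.configure(&device, &config);
```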

Now that we've configured our surface properly, we can add these new fields at the end of the method.

@@ -175,7 +175,7 @@ Now that we've configured our surface properly we can add these new fields at th

```rust
    // ...
}
```

Since our `State::new()` method is async, we need to change `run()` to be async as well so that we can await it.

```rust
pub async fn run() {
    // ...
}
```

@@ -205,11 +205,11 @@ fn main() {

<div class="warning">

Don't use `block_on` inside of an async function if you plan to support WASM. Futures have to be run using the browser's executor. If you try to bring your own, your code will crash when you encounter a future that doesn't execute immediately.

</div>
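
A hedged sketch of the resulting cross-platform pattern (`pollster` is one common desktop executor; `wasm_bindgen_futures::spawn_local` hands the future to the browser's executor):

```rust
fn main() {
    // On desktop, a tiny executor can drive the async run() to completion.
    #[cfg(not(target_arch = "wasm32"))]
    pollster::block_on(run());

    // In the browser, hand the future to the browser's own executor.
    #[cfg(target_arch = "wasm32")]
    wasm_bindgen_futures::spawn_local(run());
}
```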

If we try to build WASM now, it will fail because `wasm-bindgen` doesn't support using async functions as `start` methods. You could switch to calling `run` manually in javascript, but for simplicity, we'll add the [wasm-bindgen-futures](https://docs.rs/wasm-bindgen-futures) crate to our WASM dependencies, as that doesn't require us to change any code. Your dependencies should look something like this:
```toml
[dependencies]
# ...
```

@@ -234,7 +234,7 @@ web-sys = { version = "0.3", features = [

## resize()

If we want to support resizing in our application, we're going to need to reconfigure the `surface` every time the window's size changes. That's the reason we stored the physical `size` and the `config` used to configure the `surface`. With all of these, the resize method is very simple.

@@ -248,9 +248,9 @@ pub fn resize(&mut self, new_size: winit::dpi::PhysicalSize<u32>) {

```rust
// impl State
pub fn resize(&mut self, new_size: winit::dpi::PhysicalSize<u32>) {
    // Zero-sized surfaces can crash the app, so ignore degenerate resizes.
    if new_size.width > 0 && new_size.height > 0 {
        self.size = new_size;
        self.config.width = new_size.width;
        self.config.height = new_size.height;
        self.surface.configure(&self.device, &self.config);
    }
}
```

There's nothing different here from the initial `surface` configuration, so I won't get into it.

We call this method in `run()` in the event loop for the following events.

```rust
match event {
    // ...
}
```

@@ -284,10 +284,10 @@ fn input(&mut self, event: &WindowEvent) -> bool {

```rust
    // ...
}
```

We need to do a little more work in the event loop. We want `State` to have priority over `run()`. Doing that (and previous changes) should have your loop looking like this.

```rust
// run()
event_loop.run(move |event, _, control_flow| {
    match event {
        Event::WindowEvent {
            // ...
```
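
The middle of that block is elided in this diff. A hedged sketch of the priority check it describes (winit 0.26-era names; `state` and `window` are assumed from `run()`):

```rust
event_loop.run(move |event, _, control_flow| {
    match event {
        Event::WindowEvent { ref event, window_id } if window_id == window.id() => {
            // Give State first crack at the event; only fall through to
            // the default handling if it wasn't consumed.
            if !state.input(event) {
                match event {
                    WindowEvent::CloseRequested => *control_flow = ControlFlow::Exit,
                    WindowEvent::Resized(physical_size) => {
                        state.resize(*physical_size);
                    }
                    WindowEvent::ScaleFactorChanged { new_inner_size, .. } => {
                        // new_inner_size is &&mut, hence the double dereference.
                        state.resize(**new_inner_size);
                    }
                    _ => {}
                }
            }
        }
        _ => {}
    }
});
```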

@@ -333,7 +333,7 @@ We'll add some code here later on to move around objects.

## render()

Here's where the magic happens. First, we need to get a frame to render to.

```rust
// impl State
// ...
```

@@ -358,7 +358,7 @@ We also need to create a `CommandEncoder` to create the actual commands to send

```rust
    // ...
});
```

Now we can get to clearing the screen (long time coming). We need to use the `encoder` to create a `RenderPass`. The `RenderPass` has all the methods for the actual drawing. The code for creating a `RenderPass` is a bit nested, so I'll copy it all here before talking about its pieces.

```rust
{
    // ...
}
```
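
The nested block is elided here; a hedged sketch of a clear-screen pass in the wgpu 0.12 API this tutorial targets (the color is the blue used elsewhere in the tutorial):

```rust
let _render_pass = encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
    label: Some("Render Pass"),
    color_attachments: &[wgpu::RenderPassColorAttachment {
        // Render straight to the surface texture's view.
        view: &view,
        resolve_target: None,
        ops: wgpu::Operations {
            // Clear to a constant color when the pass begins...
            load: wgpu::LoadOp::Clear(wgpu::Color { r: 0.1, g: 0.2, b: 0.3, a: 1.0 }),
            // ...and keep the result so it can be presented.
            store: true,
        },
    }],
    depth_stencil_attachment: None,
});
```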

@@ -395,10 +395,10 @@ We can get the same results by removing the `{}`, and the `let _render_pass =` l

The last lines of the code tell `wgpu` to finish the command buffer and to submit it to the gpu's render queue.

We need to update the event loop again to call this method. We'll also call `update()` before it too.

```rust
// run()
event_loop.run(move |event, _, control_flow| {
    match event {
        // ...
    }
});
```
@@ -462,7 +462,7 @@ wgpu::RenderPassColorAttachment {

```rust
    // ...
}
```

The `RenderPassColorAttachment` has the `view` field, which informs `wgpu` what texture to save the colors to. In this case, we specify the `view` that we created using `surface.get_current_texture()`. This means that any colors we draw to this attachment will get drawn to the screen.

The `resolve_target` is the texture that will receive the resolved output. This will be the same as `view` unless multisampling is enabled. We don't need to specify this, so we leave it as `None`.

@@ -487,4 +487,4 @@ Modify the `input()` method to capture mouse events, and update the clear color

<WasmExample example="tutorial2_surface"></WasmExample>

<AutoGithubLink/>

@@ -3,15 +3,15 @@

## What's a pipeline?

If you're familiar with OpenGL, you may remember using shader programs. You can think of a pipeline as a more robust version of that. A pipeline describes all the actions the gpu will perform when acting on a set of data. In this section, we will be creating a `RenderPipeline` specifically.

## Wait, shaders?

Shaders are mini-programs that you send to the gpu to perform operations on your data. There are 3 main types of shaders: vertex, fragment, and compute. There are others, such as geometry shaders, but they're more of an advanced topic. For now, we're just going to use vertex and fragment shaders.
## Vertex, fragment... what are those?

A vertex is a point in 3d space (it can also be 2d). These vertices are then bundled in groups of 2s to form lines and/or 3s to form triangles.

<img alt="Vertices Graphic" src="./tutorial3-pipeline-vertices.png" />

Most modern rendering uses triangles to make all shapes, from simple shapes (such as cubes) to complex ones (such as people). These triangles are stored as vertices, which are the points that make up the corners of the triangles.

<!-- Todo: Find/make an image to put here -->

@@ -30,13 +30,13 @@ Note that, at the time of writing this, some WebGPU implementations also support

<div class="note">

If you've gone through this tutorial before, you'll likely notice that I've switched from using GLSL to using WGSL. Given that GLSL support is a secondary concern and that WGSL is the first-class language of WGPU, I've elected to convert all the tutorials to use WGSL. Some showcase examples still use GLSL, but the main tutorial and all examples going forward will be using WGSL.

</div>

<div class="note">

The WGSL spec and its inclusion in WGPU are still in development. If you run into trouble using it, you may want the folks at [https://app.element.io/#/room/#wgpu:matrix.org](https://app.element.io/#/room/#wgpu:matrix.org) to take a look at your code.

</div>
@@ -62,7 +62,7 @@ fn vs_main(

```wgsl
}
```

First, we declare a `struct` to store the output of our vertex shader. This currently consists of only one field, our vertex's `clip_position`. The `[[builtin(position)]]` bit tells WGPU that this is the value we want to use as the vertex's [clip coordinates](https://en.wikipedia.org/wiki/Clip_coordinates). This is analogous to GLSL's `gl_Position` variable.

<div class="note">

@@ -82,7 +82,7 @@ The `f32()` and `i32()` bits are examples of casts.

<div class="note">

Variables defined with `var` can be modified but must specify their type. Variables created with `let` can have their types inferred, but their value cannot be changed during the shader.

</div>

@@ -105,7 +105,7 @@ We'll be adding more fields to `VertexOutput` later, so we might as well start u

</div>

Next up, the fragment shader. Still in `shader.wgsl`, add the following:

```wgsl
// Fragment shader
// ...
```

@@ -120,17 +120,17 @@ This sets the color of the current fragment to brown.

<div class="note">

Notice that the entry point for the vertex shader was named `vs_main` and that the entry point for the fragment shader is called `fs_main`. In earlier versions of wgpu, it was ok for both these functions to have the same name, but newer versions of the [WGSL spec](https://www.w3.org/TR/WGSL/#declaration-and-scope) require these names to be different. Therefore, the above-mentioned naming scheme (which is adopted from the `wgpu` examples) is used throughout the tutorial.

</div>

The `[[location(0)]]` bit tells WGPU to store the `vec4` value returned by this function in the first color target. We'll get into what this is later.

## How do we use the shaders?

This is the part where we finally make the thing in the title: the pipeline. First, let's modify `State` to include the following.

```rust
// lib.rs
struct State {
    surface: wgpu::Surface,
    device: wgpu::Device,
    // ...
}
```

@@ -173,7 +173,7 @@ let render_pipeline_layout =

```rust
    // ...
});
```

Finally, we have all we need to create the `render_pipeline`.

```rust
let render_pipeline = device.create_render_pipeline(&wgpu::RenderPipelineDescriptor {
    // ...
});
```

@@ -220,7 +220,7 @@ Several things to note here:

The `primitive` field describes how to interpret our vertices when converting them into triangles.

1. Using `PrimitiveTopology::TriangleList` means that every three vertices will correspond to one triangle.
2. The `front_face` and `cull_mode` fields tell `wgpu` how to determine whether a given triangle is facing forward or not. `FrontFace::Ccw` means that a triangle is facing forward if the vertices are arranged in a counter-clockwise direction. Triangles that are not considered facing forward are culled (not included in the render) as specified by `CullMode::Back`. We'll cover culling a bit more when we cover `Buffer`s.

```rust
    // ...
```

@@ -237,13 +237,13 @@ The `primitive` field describes how to interpret our vertices when converting th

The rest of the method is pretty simple:

1. We're not using a depth/stencil buffer currently, so we leave `depth_stencil` as `None`. *This will change later*.
2. `count` determines how many samples the pipeline will use. Multisampling is a complex topic, so we won't get into it here.
3. `mask` specifies which samples should be active. In this case, we are using all of them.
4. `alpha_to_coverage_enabled` has to do with anti-aliasing. We're not covering anti-aliasing here, so we'll leave this as false for now.
5. `multiview` indicates how many array layers the render attachments can have. We won't be rendering to array textures, so we can set this to `None`.

<!-- https://gamedev.stackexchange.com/questions/22507/what-is-the-alphatocoverage-blend-state-useful-for -->

Now, all we have to do is add the `render_pipeline` to `State`, and then we can use it!

```rust
// new()
// ...
```

@@ -259,7 +259,7 @@ Self {

## Using a pipeline

If you run your program now, it'll take a little longer to start, but it will still show the blue screen we got in the last section. That's because we created the `render_pipeline`, but we still need to modify the code in `render()` to actually use it.

```rust
// render()
// ...
```

@@ -313,4 +313,4 @@ Create a second pipeline that uses the triangle's position data to create a colo

<WasmExample example="tutorial3_pipeline"></WasmExample>

<AutoGithubLink/>

@@ -4,13 +4,13 @@

You were probably getting sick of me saying stuff like "we'll get to that when we talk about buffers". Well, now's the time to finally talk about buffers, but first...

## What is a buffer?

A buffer is a blob of data on the GPU. A buffer is guaranteed to be contiguous, meaning that all the data is stored sequentially in memory. Buffers are generally used to store simple things like structs or arrays, but they can store more complex stuff such as graph structures like trees (provided all the nodes are stored together and don't reference anything outside of the buffer). We are going to use buffers a lot, so let's get started with two of the most important ones: the vertex buffer and the index buffer.

## The vertex buffer

Previously, we've stored vertex data directly in the vertex shader. While that worked fine to get our bootstraps on, it simply won't do for the long term. The types of objects we need to draw will vary in size, and recompiling the shader whenever we need to update the model would massively slow down our program. Instead, we are going to use buffers to store the vertex data we want to draw. Before we do that, though, we need to describe what a vertex looks like. We'll do this by creating a new struct.

```rust
// lib.rs
#[repr(C)]
#[derive(Copy, Clone, Debug)]
struct Vertex {
    // ...
}
```

@@ -21,10 +21,10 @@ struct Vertex {

Our vertices will all have a position and a color. The position represents the x, y, and z of the vertex in 3d space. The color is the red, green, and blue values for the vertex. We need the `Vertex` to be `Copy` so we can create a buffer with it.

Next, we need the actual data that will make up our triangle. Below `Vertex`, add the following.

```rust
// lib.rs
const VERTICES: &[Vertex] = &[
    Vertex { position: [0.0, 0.5, 0.0], color: [1.0, 0.0, 0.0] },
    Vertex { position: [-0.5, -0.5, 0.0], color: [0.0, 1.0, 0.0] },
    Vertex { position: [0.5, -0.5, 0.0], color: [0.0, 0.0, 1.0] },
];
```

@@ -37,7 +37,7 @@ We arrange the vertices in counter-clockwise order: top, bottom left, bottom rig

Now that we have our vertex data, we need to store it in a buffer. Let's add a `vertex_buffer` field to `State`.

```rust
// lib.rs
struct State {
    // ...
    render_pipeline: wgpu::RenderPipeline,
    vertex_buffer: wgpu::Buffer,
}
```
@@ -62,9 +62,9 @@ let vertex_buffer = device.create_buffer_init(

```rust
    // ...
);
```

To access the `create_buffer_init` method on `wgpu::Device`, we'll have to import the [DeviceExt](https://docs.rs/wgpu/latest/wgpu/util/trait.DeviceExt.html#tymethod.create_buffer_init) extension trait. For more information on extension traits, check out [this article](http://xion.io/post/code/rust-extension-traits.html).

To import the extension trait, put this line somewhere near the top of `lib.rs`.

```rust
use wgpu::util::DeviceExt;
```

@@ -113,7 +113,7 @@ Self {

```rust
    // ...
}
```

## So what do I do with it?

We need to tell the `render_pipeline` to use this buffer when we are drawing, but first, we need to tell the `render_pipeline` how to read the buffer. We do this using `VertexBufferLayout`s and the `vertex_buffers` field that I promised we'd talk about when we created the `render_pipeline`.

A `VertexBufferLayout` defines how a buffer is represented in memory. Without this, the `render_pipeline` has no idea how to map the buffer in the shader. Here's what the descriptor for a buffer full of `Vertex` would look like.

@@ -138,8 +138,8 @@ wgpu::VertexBufferLayout {

1. The `array_stride` defines how wide a vertex is. When the shader goes to read the next vertex, it will skip over `array_stride` number of bytes. In our case, array_stride will probably be 24 bytes.
2. `step_mode` tells the pipeline how often it should move to the next vertex. This seems redundant in our case, but we can specify `wgpu::VertexStepMode::Instance` if we only want to change vertices when we start drawing a new instance. We'll cover instancing in a later tutorial.
3. Vertex attributes describe the individual parts of the vertex. Generally, this is a 1:1 mapping with a struct's fields, which is true in our case.
4. This defines the `offset` in bytes until the attribute starts. For the first attribute, the offset is usually zero. For any later attributes, the offset is the sum over `size_of` of the previous attributes' data.
5. This tells the shader what location to store this attribute at. For example, `[[location(0)]] x: vec3<f32>` in the vertex shader would correspond to the `position` field of the `Vertex` struct, while `[[location(1)]] x: vec3<f32>` would be the `color` field.
6. `format` tells the shader the shape of the attribute. `Float32x3` corresponds to `vec3<f32>` in shader code. The max value we can store in an attribute is `Float32x4` (`Uint32x4` and `Sint32x4` work as well). We'll keep this in mind for when we have to store things that are bigger than `Float32x4`.

@@ -150,7 +150,7 @@ For you visual learners, our vertex buffer looks like this.

Let's create a static method on `Vertex` that returns this descriptor.

```rust
// lib.rs
impl Vertex {
    fn desc<'a>() -> wgpu::VertexBufferLayout<'a> {
        wgpu::VertexBufferLayout {
            // ...
        }
    }
}
```
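
The attribute list is elided in this diff; a hedged sketch of the complete descriptor, following the rules in the list above (offsets via `std::mem::size_of`, matching the 24-byte stride):

```rust
use std::mem;

impl Vertex {
    fn desc<'a>() -> wgpu::VertexBufferLayout<'a> {
        wgpu::VertexBufferLayout {
            // One Vertex = 3 floats position + 3 floats color = 24 bytes.
            array_stride: mem::size_of::<Vertex>() as wgpu::BufferAddress,
            step_mode: wgpu::VertexStepMode::Vertex,
            attributes: &[
                wgpu::VertexAttribute {
                    offset: 0,
                    shader_location: 0, // [[location(0)]] -> position
                    format: wgpu::VertexFormat::Float32x3,
                },
                wgpu::VertexAttribute {
                    offset: mem::size_of::<[f32; 3]>() as wgpu::BufferAddress,
                    shader_location: 1, // [[location(1)]] -> color
                    format: wgpu::VertexFormat::Float32x3,
                },
            ],
        }
    }
}
```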

@@ -233,14 +233,14 @@ render_pass.set_vertex_buffer(0, self.vertex_buffer.slice(..));

```rust
render_pass.draw(0..3, 0..1);
```

`set_vertex_buffer` takes two parameters. The first is what buffer slot to use for this vertex buffer. You can have multiple vertex buffers set at a time.

The second parameter is the slice of the buffer to use. You can store as many objects in a buffer as your hardware allows, so `slice` allows us to specify which portion of the buffer to use. We use `..` to specify the entire buffer.

Before we continue, we should change the `render_pass.draw()` call to use the number of vertices specified by `VERTICES`. Add a `num_vertices` field to `State`, and set it to be equal to `VERTICES.len()`.

```rust
// lib.rs

struct State {
    // ...
    num_vertices: u32,
}
```
@@ -316,7 +316,7 @@ We technically don't *need* an index buffer, but they still are plenty useful. A

![A pentagon made of 3 triangles](./pentagon.png)

It has a total of 5 vertices and 3 triangles. Now, if we wanted to display something like this using just vertices, we would need something like the following.

```rust
const VERTICES: &[Vertex] = &[
    // ...
];
```

@@ -334,12 +334,12 @@ const VERTICES: &[Vertex] = &[

You'll note though that some of the vertices are used more than once. C and B get used twice, and E is repeated 3 times. Assuming that each float is 4 bytes, that means of the 216 bytes we use for `VERTICES`, 96 of them are duplicate data. Wouldn't it be nice if we could list these vertices once? Well, we can! That's where an index buffer comes into play.

Basically, we store all the unique vertices in `VERTICES` and we create another buffer that stores indices to elements in `VERTICES` to create the triangles. Here's an example of that with our pentagon.

```rust
// lib.rs
const VERTICES: &[Vertex] = &[
    Vertex { position: [-0.0868241, 0.49240386, 0.0], color: [0.5, 0.0, 0.5] }, // A
    Vertex { position: [-0.49513406, 0.06958647, 0.0], color: [0.5, 0.0, 0.5] }, // B
    // ...
];
```

@@ -355,9 +355,9 @@ const INDICES: &[u16] = &[

```rust
    // ...
];
```

Now, with this setup, our `VERTICES` take up about 120 bytes and `INDICES` is just 18 bytes, given that `u16` is 2 bytes wide. In this case, wgpu automatically adds 2 extra bytes of padding to make sure the buffer is aligned to 4 bytes, but it's still just 20 bytes. All together, our pentagon is 134 bytes in total. That means we saved 82 bytes! It may not seem like much, but when dealing with tri counts in the hundreds of thousands, indexing saves a lot of memory.

There are a couple of things we need to change in order to use indexing. The first is we need to create a buffer to store the indices. In `State`'s `new()` method, create the `index_buffer` after you create the `vertex_buffer`. Also, change `num_vertices` to `num_indices` and set it equal to `INDICES.len()`.

```rust
let vertex_buffer = device.create_buffer_init(
    // ...
);
```
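
The rest is elided in this diff; a hedged sketch of the index-buffer creation described above (assuming the same `create_buffer_init` and `bytemuck` cast pattern used for the vertex buffer):

```rust
let index_buffer = device.create_buffer_init(
    &wgpu::util::BufferInitDescriptor {
        label: Some("Index Buffer"),
        // Reinterpret the &[u16] indices as raw bytes for the GPU.
        contents: bytemuck::cast_slice(INDICES),
        usage: wgpu::BufferUsages::INDEX,
    }
);
let num_indices = INDICES.len() as u32;
```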
@@ -432,9 +432,9 @@ With all that you should have a garishly magenta pentagon in your window.

## Color Correction

If you use a color picker on the magenta pentagon, you'll get a hex value of #BC00BC. If you convert this to RGB values, you'll get (188, 0, 188). Dividing these values by 255 to get them into the [0, 1] range, we get roughly (0.737254902, 0, 0.737254902). This is not the same as what we are using for our vertex colors, which is (0.5, 0.0, 0.5). The reason for this has to do with color spaces.

Most monitors use a color space known as sRGB. Our surface is (most likely, depending on what is returned from `surface.get_preferred_format()`) using an sRGB texture format. The sRGB format stores colors according to their relative brightness instead of their actual brightness. The reason for this is that our eyes don't perceive light linearly. We notice more differences in darker colors than we do in lighter colors.

You get an approximation of the correct color using the following formula: `srgb_color = (rgb_color / 255) ^ 2.2`. Doing this with an RGB value of (188, 0, 188) will give us (0.511397819, 0.0, 0.511397819). A little off from our (0.5, 0.0, 0.5). While you could tweak the formula to get the desired values, you'll likely save a lot of time by using textures instead, as they are stored as sRGB by default, so they don't suffer from the same color inaccuracies that vertex colors do. We'll cover textures in the next lesson.
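
A quick check of that arithmetic (a standalone sketch, not part of the tutorial's code):

```rust
fn srgb_to_linear(channel: u8) -> f64 {
    // The approximation from the text: (rgb / 255) ^ 2.2
    (channel as f64 / 255.0).powf(2.2)
}

fn main() {
    // (188, 0, 188) -> roughly (0.5114, 0.0, 0.5114)
    println!("{:.9}", srgb_to_linear(188)); // ~0.511397819
}
```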

@@ -444,4 +444,4 @@ Create a more complex shape than the one we made (aka. more than three triangles

<WasmExample example="tutorial4_buffer"></WasmExample>

<AutoGithubLink/>

@@ -1,8 +1,8 @@

# Textures and bind groups

Up to this point, we have been drawing super simple shapes. While we can make a game with just triangles, trying to draw highly detailed objects would massively limit what devices could even run our game. However, we can get around this problem with **textures**.

Textures are images overlaid on a triangle mesh to make it seem more detailed. There are multiple types of textures, such as normal maps, bump maps, specular maps, and diffuse maps. We're going to talk about diffuse maps, or more simply, the color texture.

## Loading an image from a file

@@ -23,7 +23,7 @@ The jpeg decoder that `image` includes uses [rayon](https://docs.rs/rayon) to sp

<div class="note">

Decoding jpegs in WASM isn't very performant. If you want to speed up image loading in general in WASM, you could opt to use the browser's built-in decoders instead of `image` when building with `wasm-bindgen`. This will involve creating an `<img>` tag in Rust to get the image, and then a `<canvas>` to get the pixel data, but I'll leave this as an exercise for the reader.

</div>

@@ -41,7 +41,7 @@ use image::GenericImageView;

```rust
let dimensions = diffuse_image.dimensions();
```

Here we grab the bytes from our image file and load them into an image, which is then converted into a `Vec` of rgba bytes. We also save the image's dimensions for when we create the actual `Texture`.

Now, let's create the `Texture`:
|
||||
|
||||
@ -158,18 +158,18 @@ let diffuse_sampler = device.create_sampler(&wgpu::SamplerDescriptor {

The `address_mode_*` parameters determine what to do if the sampler gets a texture coordinate that's outside the texture itself. We have a few options to choose from:

* `ClampToEdge`: Any texture coordinates outside the texture will return the color of the nearest pixel on the edges of the texture.
* `Repeat`: The texture will repeat as texture coordinates exceed the textures dimensions.
* `Repeat`: The texture will repeat as texture coordinates exceed the texture's dimensions.
* `MirrorRepeat`: Similar to `Repeat`, but the image will flip when going over boundaries.

![address_mode.png](./address_mode.png)

The `mag_filter` and `min_filter` options describe what to do when a fragment covers multiple pixels, or when there are multiple fragments for a single pixel. This often comes into play when viewing a surface from up close, or from far away.

There are 2 options:
* `Linear`: Attempt to blend the in-between fragments so that they seem to flow together.
* `Nearest`: In-between fragments will use the color of the nearest pixel. This creates an image that's crisper from far away, but pixelated up close. This can be desirable, however, if your textures are designed to be pixelated, like in pixel art games, or voxel games like Minecraft.
* `Nearest`: In-between fragments will use the color of the nearest pixel. This creates an image that's crisper from far away but pixelated up close. This can be desirable, however, if your textures are designed to be pixelated, like in pixel art games, or voxel games like Minecraft.

Mipmaps are a complex topic, and will require [their own section in the future](/todo). For now, we can say that `mipmap_filter` functions similar to `(mag/min)_filter` as it tells the sampler how to blend between mipmaps.
Mipmaps are a complex topic and will require [their own section in the future](/todo). For now, we can say that `mipmap_filter` functions similarly to `(mag/min)_filter`: it tells the sampler how to blend between mipmaps.

I'm using some defaults for the other fields. If you want to see what they are, check [the wgpu docs](https://docs.rs/wgpu/latest/wgpu/struct.SamplerDescriptor.html).
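
Putting the options above together, the sampler creation might look roughly like this (a sketch; any field left out falls back to the defaults mentioned above):

```rust
// A sketch of a sampler using the options discussed above.
let diffuse_sampler = device.create_sampler(&wgpu::SamplerDescriptor {
    address_mode_u: wgpu::AddressMode::ClampToEdge,
    address_mode_v: wgpu::AddressMode::ClampToEdge,
    address_mode_w: wgpu::AddressMode::ClampToEdge,
    mag_filter: wgpu::FilterMode::Linear,  // blend when magnified
    min_filter: wgpu::FilterMode::Nearest, // snap when minified
    mipmap_filter: wgpu::FilterMode::Nearest,
    ..Default::default()
});
```
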
@ -303,7 +303,7 @@ async fn new(...) {
```

## A change to the VERTICES

There's a few things we need to change about our `Vertex` definition. Up to now we've been using a `color` attribute to set the color of our mesh. Now that we're using a texture, we want to replace our `color` with `tex_coords`. These coordinates will then be passed to the `Sampler` to retrieve the appropriate color.
There are a few things we need to change about our `Vertex` definition. Up to now, we've been using a `color` attribute to set the color of our mesh. Now that we're using a texture, we want to replace our `color` with `tex_coords`. These coordinates will then be passed to the `Sampler` to retrieve the appropriate color.

Since our `tex_coords` are two-dimensional, we'll change the field to take two floats instead of three.
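
In other words, the struct ends up looking something like this (a sketch; the derives follow the pattern from the buffer tutorial):

```rust
// A sketch of the updated Vertex definition.
#[repr(C)]
#[derive(Copy, Clone, Debug, bytemuck::Pod, bytemuck::Zeroable)]
struct Vertex {
    position: [f32; 3],
    tex_coords: [f32; 2], // was `color: [f32; 3]`
}
```
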
@ -344,7 +344,7 @@ impl Vertex {
}
```

Lastly we need to change `VERTICES` itself. Replace the existing definition with the following:
Lastly, we need to change `VERTICES` itself. Replace the existing definition with the following:

```rust
// Changed
@ -359,7 +359,7 @@ const VERTICES: &[Vertex] = &[

## Shader time

With our new `Vertex` structure in place it's time to update our shaders. We'll first need to pass our `tex_coords` into the vertex shader and then use them over to our fragment shader to get the final color from the `Sampler`. Let's start with the vertex shader:
With our new `Vertex` structure in place, it's time to update our shaders. We'll first need to pass our `tex_coords` into the vertex shader and then hand them over to our fragment shader to get the final color from the `Sampler`. Let's start with the vertex shader:

```wgsl
// Vertex shader
@ -536,7 +536,7 @@ Notice that we're using `to_rgba8()` instead of `as_rgba8()`. PNGs work fine wit

</div>

We need to import `texture.rs` as a module, so somewhere at the top of `main.rs` add the following.
We need to import `texture.rs` as a module, so somewhere at the top of `lib.rs` add the following.

```rust
mod texture;
@ -552,7 +552,7 @@ let diffuse_texture = texture::Texture::from_bytes(&device, &queue, diffuse_byte
// Everything up until `let texture_bind_group_layout = ...` can now be removed.
```

We still need to store the bind group separately so that `Texture` doesn't need know how the `BindGroup` is laid out. The creation of `diffuse_bind_group` slightly changes to use the `view` and `sampler` fields of `diffuse_texture`:
We still need to store the bind group separately so that `Texture` doesn't need to know how the `BindGroup` is laid out. The creation of `diffuse_bind_group` changes slightly to use the `view` and `sampler` fields of `diffuse_texture`:

```rust
let diffuse_bind_group = device.create_bind_group(
@ -597,7 +597,7 @@ impl State {
}
```

Phew!

With these changes in place, the code should be working the same as it was before, but we now have a much easier way to create textures.

@ -607,4 +607,4 @@ Create another texture and swap it out when you press the space key.

<WasmExample example="tutorial5_textures"></WasmExample>

<AutoGithubLink/>

@ -1,6 +1,6 @@

# Uniform buffers and a 3d camera

While all of our previous work has seemed to be in 2d, we've actually been working in 3d the entire time! That's part of the reason why our `Vertex` structure has `position` be an array of 3 floats instead of just 2. We can't really see the 3d-ness of our scene, because we're viewing things head on. We're going to change our point of view by creating a `Camera`.
While all of our previous work has seemed to be in 2d, we've actually been working in 3d the entire time! That's part of the reason why our `Vertex` structure has `position` be an array of 3 floats instead of just 2. We can't really see the 3d-ness of our scene, because we're viewing things head-on. We're going to change our point of view by creating a `Camera`.

## A perspective camera

@ -41,7 +41,7 @@ impl Camera {

The `build_view_projection_matrix` is where the magic happens.
1. The `view` matrix moves the world to be at the position and rotation of the camera. It's essentially an inverse of whatever the transform matrix of the camera would be.
2. The `proj` matrix warps the scene to give the effect of depth. Without this, objects up close would be the same size as objects far away.
3. The coordinate system in Wgpu is based on DirectX, and Metal's coordinate systems. That means that in [normalized device coordinates](https://github.com/gfx-rs/gfx/tree/master/src/backend/dx12#normalized-coordinates) the x axis and y axis are in the range of -1.0 to +1.0, and the z axis is 0.0 to +1.0. The `cgmath` crate (as well as most game math crates) are built for OpenGL's coordinate system. This matrix will scale and translate our scene from OpenGL's coordinate system to WGPU's. We'll define it as follows.
3. The coordinate system in Wgpu is based on DirectX's and Metal's coordinate systems. That means that in [normalized device coordinates](https://github.com/gfx-rs/gfx/tree/master/src/backend/dx12#normalized-coordinates) the x axis and y axis are in the range of -1.0 to +1.0, and the z axis is 0.0 to +1.0. The `cgmath` crate (as well as most game math crates) is built for OpenGL's coordinate system. This matrix will scale and translate our scene from OpenGL's coordinate system to WGPU's. We'll define it as follows.

```rust
#[rustfmt::skip]
@ -93,7 +93,7 @@ Now that we have our camera, and it can make us a view projection matrix, we nee

## The uniform buffer

Up to this point we've used `Buffer`s to store our vertex and index data, and even to load our textures. We are going to use them again to create what's known as a uniform buffer. A uniform is a blob of data that is available to every invocation of a set of shaders. We've technically already used uniforms for our texture and sampler. We're going to use them again to store our view projection matrix. To start let's create a struct to hold our uniform.
Up to this point, we've used `Buffer`s to store our vertex and index data, and even to load our textures. We are going to use them again to create what's known as a uniform buffer. A uniform is a blob of data that is available to every invocation of a set of shaders. We've technically already used uniforms for our texture and sampler. We're going to use them again to store our view projection matrix. To start, let's create a struct to hold our uniform.

```rust
// We need this for Rust to store our data correctly for the shaders
@ -139,7 +139,7 @@ let camera_buffer = device.create_buffer_init(

## Uniform buffers and bind groups

Cool, now that we have a uniform buffer, what do we do with it? The answer is we create a bind group for it. First we have to create the bind group layout.
Cool, now that we have a uniform buffer, what do we do with it? The answer is we create a bind group for it. First, we have to create the bind group layout.

```rust
let camera_bind_group_layout = device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
@ -262,7 +262,7 @@ fn vs_main(
```

1. Because we've created a new bind group, we need to specify which one we're using in the shader. The number is determined by our `render_pipeline_layout`. The `texture_bind_group_layout` is listed first, thus it's `group(0)`, and `camera_bind_group` is second, so it's `group(1)`.
2. Multiplication order is important when it comes to matrices. The vector goes on the right, and the matrices gone on the left in order of importance.
2. Multiplication order is important when it comes to matrices. The vector goes on the right, and the matrices go on the left in order of importance.

## A controller for our camera

@ -398,8 +398,8 @@ fn input(&mut self, event: &WindowEvent) -> bool {
```

Up to this point, the camera controller isn't actually doing anything. The values in our uniform buffer need to be updated. There are a few main methods to do that.
1. We can create a separate buffer and copy it's contents to our `camera_buffer`. The new buffer is known as a staging buffer. This method is usually how it's done as it allows the contents of the main buffer (in this case `camera_buffer`) to only be accessible by the gpu. The gpu can do some speed optimizations which it couldn't if we could access the buffer via the cpu.
2. We can call on of the mapping method's `map_read_async`, and `map_write_async` on the buffer itself. These allow us to access a buffer's contents directly, but requires us to deal with the `async` aspect of these methods this also requires our buffer to use the `BufferUsages::MAP_READ` and/or `BufferUsages::MAP_WRITE`. We won't talk about it here, but you check out [Wgpu without a window](../../showcase/windowless) tutorial if you want to know more.
1. We can create a separate buffer and copy its contents to our `camera_buffer`. The new buffer is known as a staging buffer. This method is usually how it's done as it allows the contents of the main buffer (in this case `camera_buffer`) to only be accessible by the gpu. The gpu can do some speed optimizations which it couldn't if we could access the buffer via the cpu.
2. We can call one of the mapping methods, `map_read_async` and `map_write_async`, on the buffer itself. These allow us to access a buffer's contents directly, but require us to deal with the `async` aspect of these methods; this also requires our buffer to use `BufferUsages::MAP_READ` and/or `BufferUsages::MAP_WRITE`. We won't talk about it here, but you can check out the [Wgpu without a window](../../showcase/windowless) tutorial if you want to know more.
3. We can use `write_buffer` on `queue`.

We're going to use option number 3.
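
As a rough sketch, using the fields we've built up in this tutorial, the `update()` method ends up looking something like this:

```rust
// A sketch of updating the camera uniform through the queue.
fn update(&mut self) {
    self.camera_controller.update_camera(&mut self.camera);
    self.camera_uniform.update_view_proj(&self.camera);
    self.queue.write_buffer(
        &self.camera_buffer,
        0,
        bytemuck::cast_slice(&[self.camera_uniform]),
    );
}
```
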
@ -416,9 +416,9 @@ That's all we need to do. If you run the code now you should see a pentagon with

## Challenge

Have our model rotate on it's own independently of the the camera. *Hint: you'll need another matrix for this.*
Have our model rotate on its own independently of the camera. *Hint: you'll need another matrix for this.*

<WasmExample example="tutorial6_uniforms"></WasmExample>

<AutoGithubLink/>

@ -4,7 +4,7 @@ Our scene right now is very simple: we have one object centered at (0,0,0). What

Instancing allows us to draw the same object multiple times with different properties (position, orientation, size, color, etc.). There are multiple ways of doing instancing. One way would be to modify the uniform buffer to include these properties and then update it before we draw each instance of our object.

We don't want to use this method for performance reasons. Updating the uniform buffer for each instance would require multiple buffer copies each frame. On top of that, our method to update the uniform buffer currently requires use to create a new buffer to store the updated data. That's a lot of time wasted between draw calls.
We don't want to use this method for performance reasons. Updating the uniform buffer for each instance would require multiple buffer copies for each frame. On top of that, our method to update the uniform buffer currently requires us to create a new buffer to store the updated data. That's a lot of time wasted between draw calls.

If we look at the parameters for the `draw_indexed` function [in the wgpu docs](https://docs.rs/wgpu/latest/wgpu/struct.RenderPass.html#method.draw_indexed), we can see a solution to our problem.

@ -17,18 +17,18 @@ pub fn draw_indexed(
)
```

The `instances` parameter takes a `Range<u32>`. This parameter tells the GPU how many copies, or instances, of our model we want to draw. Currently we are specifying `0..1`, which instructs the GPU to draw our model once, and then stop. If we used `0..5`, our code would draw 5 instances.
The `instances` parameter takes a `Range<u32>`. This parameter tells the GPU how many copies, or instances, of the model we want to draw. Currently, we are specifying `0..1`, which instructs the GPU to draw our model once, and then stop. If we used `0..5`, our code would draw 5 instances.

The fact that `instances` is a `Range<u32>` may seem weird as using `1..2` for instances would still draw 1 instance of our object. Seems like it would be simpler to just use a `u32` right? The reason it's a range is because sometimes we don't want to draw **all** of our objects. Sometimes we want to draw a selection of them, because others are not in frame, or we are debugging and want to look at a particular set of instances.
The fact that `instances` is a `Range<u32>` may seem weird, as using `1..2` for instances would still draw 1 instance of our object. Seems like it would be simpler to just use a `u32`, right? The reason it's a range is that sometimes we don't want to draw **all** of our objects. Sometimes we want to draw a selection of them, because others are not in-frame, or we are debugging and want to look at a particular set of instances.
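
For example, this sketch (reusing the `num_indices` field from the buffer tutorial) skips the first two instances and draws the next three:

```rust
// Draws instances 2, 3, and 4 of the mesh.
render_pass.draw_indexed(0..self.num_indices, 0, 2..5);
```
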

Ok, now that we know how to draw multiple instances of an object, how do we tell wgpu which particular instance to draw? We are going to use something known as an instance buffer.

## The Instance Buffer

We'll create an instance buffer in a similar way to how we create a uniform buffer. First we'll create a struct called `Instance`.
We'll create an instance buffer in a similar way to how we create a uniform buffer. First, we'll create a struct called `Instance`.

```rust
// main.rs
// lib.rs
// ...

// NEW!
@ -81,7 +81,7 @@ struct State {

The `cgmath` crate uses traits to provide common mathematical methods across its structs such as `Vector3`, and these traits must be imported before these methods can be called. For convenience, the `prelude` module within the crate provides the most common of these extension traits when it is imported.

To import this prelude module, put this line near the top of `main.rs`.
To import this prelude module, put this line near the top of `lib.rs`.

```rust
use cgmath::prelude::*;
@ -94,7 +94,7 @@ const NUM_INSTANCES_PER_ROW: u32 = 10;
const INSTANCE_DISPLACEMENT: cgmath::Vector3<f32> = cgmath::Vector3::new(NUM_INSTANCES_PER_ROW as f32 * 0.5, 0.0, NUM_INSTANCES_PER_ROW as f32 * 0.5);
```

Now we can create the actual instances.

```rust
impl State {
@ -127,11 +127,11 @@ Now that we have our data, we can create the actual `instance_buffer`.

```rust
let instance_data = instances.iter().map(Instance::to_raw).collect::<Vec<_>>();
let instance_buffer = device.create_buffer_init(
    &wgpu::util::BufferInitDescriptor {
        label: Some("Instance Buffer"),
        contents: bytemuck::cast_slice(&instance_data),
        usage: wgpu::BufferUsages::VERTEX,
    }
);
```

@ -183,13 +183,13 @@ We need to add this descriptor to the render pipeline so that we can use it when

```rust
let render_pipeline = device.create_render_pipeline(&wgpu::RenderPipelineDescriptor {
    // ...
    vertex: wgpu::VertexState {
        // ...
        // UPDATED!
        buffers: &[Vertex::desc(), InstanceRaw::desc()],
    },
    // ...
});
```

@ -197,10 +197,10 @@ Don't forget to return our new variables!

```rust
Self {
    // ...
    // NEW!
    instances,
    instance_buffer,
}
```

@ -6,15 +6,15 @@ Let's take a closer look at the last example at an angle.

Models that should be in the back are getting rendered ahead of ones that should be in the front. This is caused by the draw order. By default, pixel data from a new object will replace old pixel data.

There are two ways to solve this: sort the data from back to front, use what's known as a depth buffer.
There are two ways to solve this: sort the data from back to front, or use what's known as a depth buffer.

## Sorting from back to front

This is the go to method for 2d rendering as it's pretty easier to know what's supposed to go in front of what. You can just use the z order. In 3d rendering it gets a little more tricky because the order of the objects changes based on the camera angle.
This is the go-to method for 2d rendering as it's pretty easy to know what's supposed to go in front of what. You can just use the z order. In 3d rendering, it gets a little more tricky because the order of the objects changes based on the camera angle.

A simple way of doing this is to sort all the objects by their distance to the cameras position. There are flaws with this method though as when a large object is behind a small object, parts of the large object that should be in front of the small object will be rendered behind. We'll also run into issues with objects that overlap *themselves*.
A simple way of doing this is to sort all the objects by their distance to the camera's position. There are flaws with this method, though, as when a large object is behind a small object, parts of the large object that should be in front of the small object will be rendered behind it. We'll also run into issues with objects that overlap *themselves*.

If want to do this properly we need to have pixel level precision. That's where a *depth buffer* comes in.
If we want to do this properly, we need to have pixel-level precision. That's where a *depth buffer* comes in.

## A pixel's depth

@ -65,7 +65,7 @@ impl Texture {
}
```

1. We need the DEPTH_FORMAT for when we create the depth stage of the `render_pipeline` and creating the depth texture itself.
1. We need the DEPTH_FORMAT for when we create the depth stage of the `render_pipeline` and for creating the depth texture itself.
2. Our depth texture needs to be the same size as our screen if we want things to render correctly. We can use our `config` to make sure that our depth texture is the same size as our surface textures.
3. Since we are rendering to this texture, we need to add the `RENDER_ATTACHMENT` flag to it.
4. We technically don't *need* a sampler for a depth texture, but our `Texture` struct requires it, and we need one if we ever want to sample it.
@ -77,7 +77,7 @@ We create our `depth_texture` in `State::new()`.
let depth_texture = texture::Texture::create_depth_texture(&device, &config, "depth_texture");
```

We need to modify our `render_pipeline` to allow depth testing.

```rust
let render_pipeline = device.create_render_pipeline(&wgpu::RenderPipelineDescriptor {
@ -112,14 +112,14 @@ pub enum CompareFunction {
}
```

2. There's another type of buffer called a stencil buffer. It's common practice to store the stencil buffer and depth buffer in the same texture. This fields control values for stencil testing. Since we aren't using a stencil buffer, we'll use default values. We'll cover stencil buffers [later](../../todo).
2. There's another type of buffer called a stencil buffer. It's common practice to store the stencil buffer and depth buffer in the same texture. These fields control values for stencil testing. Since we aren't using a stencil buffer, we'll use default values. We'll cover stencil buffers [later](../../todo).
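
Putting notes 1 and 2 together, the `depth_stencil` field of the pipeline descriptor ends up looking roughly like this sketch:

```rust
// A sketch of the depth_stencil field discussed in the notes above.
depth_stencil: Some(wgpu::DepthStencilState {
    format: texture::Texture::DEPTH_FORMAT,
    depth_write_enabled: true,
    // Less means fragments closer to the camera are kept. (1.)
    depth_compare: wgpu::CompareFunction::Less,
    // We aren't using a stencil buffer, so defaults are fine. (2.)
    stencil: wgpu::StencilState::default(),
    bias: wgpu::DepthBiasState::default(),
}),
```
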
Don't forget to store the `depth_texture` in `State`.

```rust
Self {
    // ...
    depth_texture,
}
```

@ -141,15 +141,15 @@ The last change we need to make is in the `render()` function. We've created the

```rust
let mut render_pass = encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
    // ...
    depth_stencil_attachment: Some(wgpu::RenderPassDepthStencilAttachment {
        view: &self.depth_texture.view,
        depth_ops: Some(wgpu::Operations {
            load: wgpu::LoadOp::Clear(1.0),
            store: true,
        }),
        stencil_ops: None,
    }),
});
```

@ -163,4 +163,4 @@ Since the depth buffer is a texture, we can sample it in the shader. Because it'

<WasmExample example="tutorial8_depth"></WasmExample>

<AutoGithubLink/>

@ -1,8 +1,8 @@

# Model Loading

Up to this point we've been creating our models manually. While this is an acceptable way to do this, but it's really slow if we want to include complex models with lots of polygons. Because of this, we're going modify our code to leverage the obj model format so that we can create a model in a software such as blender and display it in our code.
Up to this point, we've been creating our models manually. While this is an acceptable way to do this, it's really slow if we want to include complex models with lots of polygons. Because of this, we're going to modify our code to leverage the `.obj` model format so that we can create a model in software such as Blender and display it in our code.

Our `main.rs` file is getting pretty cluttered, let's create a `model.rs` file that we can put our model loading code into.
Our `lib.rs` file is getting pretty cluttered; let's create a `model.rs` file that we can put our model loading code into.

```rust
// model.rs
@ -25,7 +25,7 @@ impl Vertex for ModelVertex {
}
```

You'll notice a couple of things here. In `main.rs` we had `Vertex` as a struct, here we're using a trait. We could have multiple vertex types (model, UI, instance data, etc.). Making `Vertex` a trait will allow us to abstract our the `VertexBufferLayout` creation code to make creating `RenderPipeline`s simpler.
You'll notice a couple of things here. In `lib.rs` we had `Vertex` as a struct; here we're using a trait. We could have multiple vertex types (model, UI, instance data, etc.). Making `Vertex` a trait will allow us to abstract out the `VertexBufferLayout` creation code to make creating `RenderPipeline`s simpler.

Another thing to mention is the `normal` field in `ModelVertex`. We won't use this until we talk about lighting, but we'll add it to the struct for now.
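
For reference, the struct behind those impls looks something like this (a sketch mirroring the `Vertex` struct from earlier):

```rust
// A sketch of ModelVertex with the new normal field.
#[repr(C)]
#[derive(Copy, Clone, Debug, bytemuck::Pod, bytemuck::Zeroable)]
pub struct ModelVertex {
    position: [f32; 3],
    tex_coords: [f32; 2],
    normal: [f32; 3], // unused until the lighting tutorial
}
```
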
@ -60,7 +60,7 @@ impl Vertex for ModelVertex {
}
```

This is basically the same as the original `VertexBufferLayout`, but we added a `VertexAttribute` for the `normal`. Remove the `Vertex` struct in `main.rs` as we won't need it anymore, and use our new `Vertex` from model for the `RenderPipeline`.
This is basically the same as the original `VertexBufferLayout`, but we added a `VertexAttribute` for the `normal`. Remove the `Vertex` struct in `lib.rs` as we won't need it anymore, and use our new `Vertex` from `model` for the `RenderPipeline`.

```rust
let render_pipeline = device.create_render_pipeline(&wgpu::RenderPipelineDescriptor {
@ -79,11 +79,11 @@ Since the `desc` method is implemented on the `Vertex` trait, the trait needs to
use model::Vertex;
```

With all that in place we need a model to render. If you have one already that's great, but I've supplied a [zip file](https://github.com/sotrh/learn-wgpu/blob/master/code/beginner/tutorial9-models/res/cube.zip) with the model and all of it's textures. We're going to put this model in a new `res` folder next to the existing `src` folder.
With all that in place, we need a model to render. If you have one already, that's great, but I've supplied a [zip file](https://github.com/sotrh/learn-wgpu/blob/master/code/beginner/tutorial9-models/res/cube.zip) with the model and all of its textures. We're going to put this model in a new `res` folder next to the existing `src` folder.

## Accessing files in the res folder

When cargo builds and runs our program it sets what's known as the current working directory. This directory is usually the folder containing your projects root `Cargo.toml`. The path to our res folder may differ depending on the structure of the project. In the `res` folder for the example code for this section tutorial is at `code/beginner/tutorial9-models/res/`. When loading our model we could use this path, and just append `cube.obj`. This is fine, but if we change our projects structure, our code will break.
When cargo builds and runs our program, it sets what's known as the current working directory. This directory is usually the folder containing your project's root `Cargo.toml`. The path to our `res` folder may differ depending on the structure of the project. The `res` folder for this section's example code is at `code/beginner/tutorial9-models/res/`. When loading our model, we could use this path and just append `cube.obj`. This is fine, but if we change our project's structure, our code will break.

We're going to fix that by modifying our build script to copy our `res` folder to where cargo creates our executable, and we'll reference it from there. Create a file called `build.rs` and add the following:
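
The script itself is elided from this diff; a minimal sketch of one that performs the copy, assuming `anyhow` and `fs_extra` alongside `glob` in `[build-dependencies]`, looks like this:

```rust
// build.rs — a sketch of copying `res` next to the compiled output.
use anyhow::*;
use fs_extra::copy_items;
use fs_extra::dir::CopyOptions;
use std::env;

fn main() -> Result<()> {
    // Re-run this script if anything in res/ changes.
    println!("cargo:rerun-if-changed=res/*");

    let out_dir = env::var("OUT_DIR")?;
    let mut copy_options = CopyOptions::new();
    copy_options.overwrite = true;
    let paths_to_copy = vec!["res/"];
    copy_items(&paths_to_copy, out_dir, &copy_options)?;

    Ok(())
}
```
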
@ -131,7 +131,7 @@ glob = "0.3"

## Accessing files from WASM

By design, you can't access files on a users filesystem in Web Assembly. Instead we'll serve those files up using a web serve, and then load those files into our code using an http request. In order to simplify this, let's create a file called `resources.rs` to handle this for us. We'll create two functions that will load text files and binary files respectively.
By design, you can't access files on a user's filesystem in Web Assembly. Instead, we'll serve those files up using a web server, and then load them into our code using an HTTP request. In order to simplify this, let's create a file called `resources.rs` to handle this for us. We'll create two functions that will load text files and binary files respectively.

```rust
use std::io::{BufReader, Cursor};
@ -195,7 +195,7 @@ pub async fn load_binary(file_name: &str) -> anyhow::Result<Vec<u8>> {

<div class="note">

We're using `OUT_DIR` on desktop to get at our `res` folder.
We're using `OUT_DIR` on desktop to get to our `res` folder.

</div>

@ -415,7 +415,7 @@ where
}
```

We could have put this methods in an `impl Model`, but I felt it made more sense to have the `RenderPass` do all the rendering, as that's kind of it's job. This does mean we have to import `DrawModel` when we go to render though.
We could have put these methods in an `impl Model`, but I felt it made more sense to have the `RenderPass` do all the rendering, as that's kind of its job. This does mean we have to import `DrawModel` when we go to render, though.

```rust
// lib.rs
@ -528,7 +528,7 @@ let material = &self.obj_model.materials[mesh.material];
render_pass.draw_mesh_instanced(mesh, material, 0..self.instances.len() as u32, &self.camera_bind_group);
```

With all that in place we should get the following.
With all that in place, we should get the following.

![cubes-correct.png](./cubes-correct.png)

@ -549,8 +549,8 @@ pub trait DrawModel<'a> {
}

impl<'a, 'b> DrawModel<'b> for wgpu::RenderPass<'a>
where
    'b: 'a, {
    // ...
    fn draw_model(&mut self, model: &'b Model, camera_bind_group: &'b wgpu::BindGroup) {
        self.draw_model_instanced(model, 0..1, camera_bind_group);
@ -570,7 +570,7 @@ where
}
```

The code in `main.rs` will change accordingly.
The code in `lib.rs` will change accordingly.

```rust
render_pass.set_vertex_buffer(1, self.instance_buffer.slice(..));