tutorial13 improvements

Blatko1 5 months ago
parent 91cb5df479
commit 7d0bfbd06f

@@ -40,7 +40,7 @@ Our `LightUniform` represents a colored point in space. We're just going to use
The rule of thumb for alignment with WGSL structs is that field alignments are always powers of 2. For example, a `vec3` only has three float fields, giving it a size of 12 bytes, but its alignment gets bumped up to the next power of 2: 16. This means that you have to be more careful with how you lay out your struct in Rust.
Some developers choose to use `vec4`s instead of `vec3`s to avoid alignment issues. You can learn more about the alignment rules in the [WGSL spec](https://www.w3.org/TR/WGSL/#alignment-and-size)
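As a concrete illustration, here's a minimal sketch of the usual workaround: explicit padding fields so the Rust layout lines up with WGSL's 16-byte `vec3<f32>` alignment (the struct and field names here are illustrative):

```rust
// Sketch: padding each vec3 field so the Rust layout matches WGSL's
// 16-byte alignment for vec3<f32>.
#[repr(C)]
#[derive(Copy, Clone)]
struct LightUniform {
    position: [f32; 3],
    _padding: u32, // bumps `position` up to a 16-byte slot
    color: [f32; 3],
    _padding2: u32, // bumps `color` up to a 16-byte slot
}

fn main() {
    // 2 × 16 bytes: each vec3 plus its padding occupies a 16-byte slot.
    println!("{}", std::mem::size_of::<LightUniform>());
}
```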
</div>

@@ -1,48 +1,26 @@
# High Dynamic Range Rendering
Up to this point, we've been using the sRGB colorspace to render our scene. While this is fine, it limits what we can do with our lighting. We are using `TextureFormat::Bgra8UnormSrgb` (on most systems) for our surface texture. This means we have 8 bits for each red, green, blue, and alpha channel. While the channels are stored as integers between 0 and 255 inclusively, they get converted to and from floating point values between 0.0 and 1.0. The TL;DR of this is that using 8-bit textures, we only get 256 possible values in each channel.
The kicker with this is most of the precision gets used to represent darker values of the scene. This means that bright objects like light bulbs have the same value as exceedingly bright objects like the sun. This inaccuracy makes realistic lighting difficult to do right. Because of this, we are going to switch our rendering system to use high dynamic range in order to give our scene more flexibility and enable us to leverage more advanced techniques such as Physically Based Rendering.
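To make the clipping concrete, here's a small sketch (function names are mine) of the round trip a brightness value takes through an 8-bit channel:

```rust
// Sketch: the round trip a brightness value takes through an 8-bit channel.
// Anything above 1.0 is clipped, so a light bulb and the sun both store as 255.
fn quantize(x: f32) -> u8 {
    (x.clamp(0.0, 1.0) * 255.0).round() as u8
}

fn dequantize(b: u8) -> f32 {
    b as f32 / 255.0
}

fn main() {
    println!("{}", quantize(1.0));   // a "bright" pixel → 255
    println!("{}", quantize(100.0)); // the sun, clipped to the same 255
    println!("{}", dequantize(quantize(0.5)));
}
```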
## What is High Dynamic Range?
In layman's terms, a High Dynamic Range texture is a texture with more bits per pixel. In addition to this, HDR textures are stored as floating point values instead of integer values. This means that the texture can have brightness values greater than 1.0, meaning you can have a dynamic range of brighter objects.
## Switching to HDR
As of writing, wgpu doesn't allow us to use a floating point format such as `TextureFormat::Rgba16Float` as the surface texture format (not all monitors support that anyway), so we will have to render our scene in an HDR format, then convert the values to a supported format such as `TextureFormat::Bgra8UnormSrgb`, using a technique called tonemapping.
<div class="note">
There are some talks about implementing HDR surface texture support in wgpu. Here is a GitHub issue if you want to contribute to that effort: https://github.com/gfx-rs/wgpu/issues/2920
</div>
Before we do that, though, we need to switch to using an HDR texture for rendering.
To start, we'll create a file called `hdr.rs` and put some code in it:
```rust
use wgpu::Operations;
```

@@ -235,14 +213,9 @@ fn create_render_pipeline(
## Tonemapping
The process of tonemapping is taking an HDR image and converting it to a Standard Dynamic Range (SDR), which is usually sRGB. The exact tonemapping curve you use is ultimately up to your artistic needs, but for this tutorial, we'll use a popular one known as the Academy Color Encoding System (ACES), used throughout both the game and film industries.
With that, let's jump into the shader. Create a file called `hdr.wgsl` and add the following code:
```wgsl
// Maps HDR values to linear values
@@ -259,8 +232,8 @@ fn aces_tone_map(hdr: vec3<f32>) -> vec3<f32> {
-0.07367, -0.00605, 1.07602,
);
let v = m1 * hdr;
let a = v * (v + 0.0245786) - 0.000090537;
let b = v * (0.983729 * v + 0.4329510) + 0.238081;
return clamp(m2 * (a / b), vec3(0.0), vec3(1.0));
}
@@ -302,8 +275,7 @@ fn fs_main(vs: VertexOutput) -> @location(0) vec4<f32> {
}
```
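If you want to sanity-check the curve on the CPU, here's a Rust sketch of the same fit. Note the matrices are transcribed row-major here, while the WGSL `mat3x3` constructors above take columns:

```rust
// CPU-side sketch of the same ACES fit, handy for sanity-checking the shader.
// Matrix values match the WGSL above, transposed into row-major form.
fn mul(m: [[f32; 3]; 3], v: [f32; 3]) -> [f32; 3] {
    [0usize, 1, 2].map(|r| m[r][0] * v[0] + m[r][1] * v[1] + m[r][2] * v[2])
}

fn aces_tone_map(hdr: [f32; 3]) -> [f32; 3] {
    let m1 = [
        [0.59719, 0.35458, 0.04823],
        [0.07600, 0.90834, 0.01566],
        [0.02840, 0.13383, 0.83777],
    ];
    let m2 = [
        [1.60475, -0.53108, -0.07367],
        [-0.10208, 1.10813, -0.00605],
        [-0.00327, -0.07276, 1.07602],
    ];
    let v = mul(m1, hdr);
    let a = v.map(|v| v * (v + 0.0245786) - 0.000090537);
    let b = v.map(|v| v * (0.983729 * v + 0.4329510) + 0.238081);
    let c = [a[0] / b[0], a[1] / b[1], a[2] / b[2]];
    mul(m2, c).map(|x| x.clamp(0.0, 1.0))
}

fn main() {
    // Mid-grey stays in range; very bright inputs compress toward 1.0.
    println!("{:?}", aces_tone_map([1.0, 1.0, 1.0]));
    println!("{:?}", aces_tone_map([10.0, 10.0, 10.0]));
}
```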
With those in place, we can start using our HDR texture in our core render pipeline. First, we need to add the new `HdrPipeline` to `State`:
```rust
// lib.rs
@@ -334,8 +306,7 @@ impl State {
}
```
Then, when we resize the window, we need to call `resize()` on our `HdrPipeline`:
```rust
fn resize(&mut self, new_size: winit::dpi::PhysicalSize<u32>) {
@@ -349,8 +320,7 @@ fn resize(&mut self, new_size: winit::dpi::PhysicalSize<u32>) {
}
```
Next, in `render()`, we need to switch the `RenderPass` to use our HDR texture instead of the surface texture:
```rust
// render()
@@ -375,8 +345,7 @@ let mut render_pass = encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
});
```
Finally, after we draw all the objects in the frame, we can run our tonemapper with the surface texture as the output:
```rust
// NEW!
```

@@ -394,59 +363,35 @@ Here's what it looks like after implementing HDR:
## Loading HDR textures
Now that we have an HDR render buffer, we can start leveraging HDR textures to their fullest. One of the primary uses for HDR textures is to store lighting information in the form of an environment map.
This map can be used to light objects, display reflections, and also to make a skybox. We're going to create a skybox using an HDR texture, but first, we need to talk about how environment maps are stored.
## Equirectangular textures
An equirectangular texture is a texture where a sphere is stretched across a rectangular surface using what's known as an equirectangular projection. This map of the Earth is an example of this projection.
![map of the earth](https://upload.wikimedia.org/wikipedia/commons/thumb/8/83/Equirectangular_projection_SW.jpg/1024px-Equirectangular_projection_SW.jpg)
This projection maps the longitude values of the sphere to the horizontal coordinates of the texture. The latitude values get mapped to the vertical coordinates. This means that the vertical middle of the texture is the equator (0° latitude) of the sphere, the horizontal middle is the prime meridian (0° longitude) of the sphere, the left and right edges of the texture are the antimeridian (+180°/-180° longitude), and the top and bottom edges of the texture are the north pole (90° latitude) and south pole (-90° latitude), respectively.
![equirectangular diagram](./equirectangular.svg)
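The same mapping can be sketched in code, going from a unit direction on the sphere to texture coordinates. This mirrors the sampling the compute shader does later; the exact convention (+Y up, v = 0 at the north pole) is an assumption here, and the function name is mine:

```rust
use std::f32::consts::PI;

// Sketch: unit direction -> equirectangular (u, v), with longitude driving u
// and latitude driving v. Assumes +Y is up and v = 0.0 at the north pole.
fn dir_to_equirect_uv(dir: [f32; 3]) -> (f32, f32) {
    let u = dir[2].atan2(dir[0]) / (2.0 * PI) + 0.5;
    let v = 0.5 - dir[1].asin() / PI;
    (u, v)
}

fn main() {
    // Looking along +X lands in the middle of the texture.
    println!("{:?}", dir_to_equirect_uv([1.0, 0.0, 0.0])); // → (0.5, 0.5)
}
```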
This simple projection is easy to use, making it one of the most popular projections for storing spherical textures. You can see the particular environment map we are going to use below.
![equirectangular skybox](./kloofendal_43d_clear_puresky.jpg)
## Cube Maps
While we can technically use an equirectangular map directly, as long as we do some math to figure out the correct coordinates, it is a lot more convenient to convert our environment map into a cube map.
<div class="info">
A cube map is a special kind of texture that has six layers. Each layer corresponds to a different face of an imaginary cube that is aligned to the X, Y and Z axes. The layers are stored in the following order: +X, -X, +Y, -Y, +Z, -Z.
</div>
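As a small illustration of that layer order, selecting the layer for a given direction uses the direction's largest-magnitude component. This is the standard face-selection rule; the function name is mine:

```rust
// Sketch: choosing the cube-map layer (+X, -X, +Y, -Y, +Z, -Z order) for a
// direction by its largest-magnitude component.
fn cube_face_index(d: [f32; 3]) -> u32 {
    let (ax, ay, az) = (d[0].abs(), d[1].abs(), d[2].abs());
    if ax >= ay && ax >= az {
        if d[0] >= 0.0 { 0 } else { 1 }
    } else if ay >= az {
        if d[1] >= 0.0 { 2 } else { 3 }
    } else if d[2] >= 0.0 {
        4
    } else {
        5
    }
}

fn main() {
    // A mostly "down" direction selects the -Y face, layer 3.
    println!("{}", cube_face_index([0.3, -1.0, 0.2])); // → 3
}
```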
To prepare to store the cube texture, we are going to create a new struct called `CubeTexture` in `texture.rs`.
```rust
pub struct CubeTexture {
@@ -516,25 +461,13 @@ impl CubeTexture {
}
```
With this, we can now write the code to load the HDR into a cube texture.
## Compute shaders
Up to this point, we've been exclusively using render pipelines, but I felt this was a good time to introduce compute pipelines and, by extension, compute shaders. Compute pipelines are a lot easier to set up. All you need is to tell the pipeline what resources you want to use, what code you want to run, and how many threads you'd like the GPU to use when running your code. We're going to use a compute shader to give each pixel in our cube texture a color from the HDR image.
Before we can use compute shaders, we need to enable them in wgpu. We can do that by changing the line where we specify what features we want to use. In `lib.rs`, change the code where we request a device:
```rust
let (device, queue) = adapter
```

@@ -554,16 +487,9 @@ let (device, queue) = adapter
<div class="warn">
You may have noted that we have switched from `downlevel_webgl2_defaults()` to `downlevel_defaults()`. This means that we are dropping support for WebGL2. The reason for this is that WebGL2 doesn't support compute shaders. WebGPU was built with compute shaders in mind. As of writing, the only browsers that support WebGPU are Chrome and some experimental browsers such as Firefox Nightly.
Consequently, we are going to remove the WebGL feature from `Cargo.toml`. This line in particular:
```toml
wgpu = { version = "0.18", features = ["webgl"]}
```

@@ -571,9 +497,7 @@ wgpu = { version = "0.18", features = ["webgl"]}
</div>
Now that we've told wgpu that we want to use compute shaders, let's create a struct in `resource.rs` that we'll use to load the HDR image into our cube map.
```rust
pub struct HdrLoader {
@@ -696,10 +620,10 @@ impl HdrLoader {
let dst_view = dst.texture().create_view(&wgpu::TextureViewDescriptor {
label,
// Normally, you'd use `TextureViewDimension::Cube`
// for a cube texture, but we can't use that
// view dimension with a `STORAGE_BINDING`.
// We need to access the cube texture layers
// directly.
dimension: Some(wgpu::TextureViewDimension::D2Array),
..Default::default()
@@ -737,21 +661,13 @@ impl HdrLoader {
}
```
The `dispatch_workgroups` call tells the GPU to run our code in batches called workgroups. Each workgroup has a number of worker threads called invocations that run the code in parallel. Workgroups are organized as a 3D grid with the dimensions we pass to `dispatch_workgroups`.
In this example, we have a workgroup grid divided into 16x16 chunks and storing the layer in the z dimension.
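In other words, the dispatch dimensions come from a ceiling division of the face size by the 16×16 tile, with one z slice per cube layer. A sketch (the function name and rounding-up behavior are my assumptions):

```rust
// Sketch: 16×16 tiles cover each dst_size × dst_size face; z indexes the layer.
fn workgroup_counts(dst_size: u32, layers: u32) -> (u32, u32, u32) {
    let tiles = dst_size.div_ceil(16); // round up so edge pixels get a workgroup
    (tiles, tiles, layers)
}

fn main() {
    // e.g. a 1080-pixel face with 6 cube layers
    println!("{:?}", workgroup_counts(1080, 6)); // → (68, 68, 6)
}
```

The bounds check at the top of the compute shader exists precisely because this rounding up can dispatch invocations past the texture edge.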
## The compute shader
Now, let's write a compute shader that will convert our equirectangular texture to a cube texture. Create a file called `equirectangular.wgsl`. We're going to break it down chunk by chunk.
```wgsl
const PI: f32 = 3.1415926535897932384626433832795;
```

@@ -765,10 +681,8 @@ struct Face {
Two things here:
1. WGSL doesn't have a built-in for PI, so we need to specify it ourselves.
2. Each face of the cube map has an orientation to it, so we need to store that.
```wgsl
@group(0)
@@ -780,19 +694,11 @@ var src: texture_2d<f32>;
var dst: texture_storage_2d_array<rgba32float, write>;
```
Here, we have the only two bindings we need. The equirectangular `src` texture and our `dst` cube texture. Some things to note about `dst`:
1. While `dst` is a cube texture, it's stored as an array of 2d textures.
2. The type of binding we're using here is a storage texture. An array storage texture, to be precise. This is a unique binding only available to compute shaders. It allows us to write directly to the texture.
3. When using a storage texture binding, we need to specify the format of the texture. If you try to bind a texture with a different format, wgpu will panic.
```wgsl
@compute
@@ -801,7 +707,7 @@ fn compute_equirect_to_cubemap(
@builtin(global_invocation_id)
gid: vec3<u32>,
) {
// If texture size is not divisible by 32, we
// need to make sure we don't try to write to
// pixels that don't exist.
if gid.x >= u32(textureDimensions(dst).x) {
@@ -867,23 +773,17 @@ fn compute_equirect_to_cubemap(
}
```
While I commented on some of the previous code, there are some things I want to go over that wouldn't fit well in a comment.
The `workgroup_size` decorator tells the dimensions of the workgroup's local grid of invocations. Because we want one invocation for every pixel in the texture, we have each workgroup be a 16x16x1 grid. This means that each workgroup has 256 threads to work with.
<div class="warn">
For WebGPU, each workgroup can only have a max of 256 threads (also called invocations).
</div>
With this, we can load the environment map in the `new()` function:
```rust
let hdr_loader = resources::HdrLoader::new(&device);
```

@@ -899,18 +799,9 @@ let sky_texture = hdr_loader.from_equirectangular_bytes(
## Skybox
Now that we have an environment map to render, let's use it to make our skybox. There are different ways to render a skybox. A standard way is to render a cube and map the environment map onto it. While that method works, it can have some artifacts in the corners and edges where the cube's faces meet.
Instead, we are going to render to the entire screen, compute the view direction from each pixel and use that to sample the texture. First, we need to create a bindgroup for the environment map so that we can use it for rendering. Add the following to `new()`:
```rust
let environment_layout =
@@ -952,8 +843,7 @@ let environment_bind_group = device.create_bind_group(&wgpu::BindGroupDescriptor
});
```
Now that we have the bindgroup, we need a render pipeline to render the skybox.
```rust
// NEW!
@@ -976,13 +866,9 @@ let sky_pipeline = {
};
```
One thing to note here: we added the primitive format to `create_render_pipeline()`. Also, we changed the depth compare function to `CompareFunction::LessEqual` (we'll discuss why when we go over the sky shader). Here are the changes:
```rust
fn create_render_pipeline(
device: &wgpu::Device,
layout: &wgpu::PipelineLayout,
@@ -1012,8 +898,7 @@ fn create_render_pipeline(
}
```
Don't forget to add the new bindgroup and pipeline to `State`.
```rust
struct State {
```

@@ -1094,19 +979,12 @@ fn fs_main(in: VertexOutput) -> @location(0) vec4<f32> {
Let's break this down:
1. We create a triangle twice the size of the screen.
2. In the fragment shader, we get the view direction from the clip position. We use the inverse projection matrix to convert the clip coordinates to a view direction. Then, we use the inverse view matrix to get the direction into world space, as that's what we need to sample the skybox correctly.
3. We then sample the sky texture with the view direction.
<!-- ![debugging skybox](./debugging-skybox.png) -->
For this to work, we need to change our camera uniforms a bit. We need to add the inverse view matrix and inverse projection matrix to the `CameraUniform` struct.
```rust
#[repr(C)]
@@ -1144,9 +1022,7 @@ impl CameraUniform {
}
```
Make sure to change the `Camera` definition in `shader.wgsl`, and `light.wgsl`. Just as a reminder, it looks like this:
```wgsl
struct Camera {
```

@@ -1161,30 +1037,19 @@ var<uniform> camera: Camera;
<div class="info">
You may have noticed that we removed the `OPENGL_TO_WGPU_MATRIX`. The reason for this is that it was messing with the projection of the skybox.
![projection error](./project-error.png)
Technically, it wasn't needed, so I felt fine removing it.
</div>
## Reflections
Now that we have a sky, we can mess around with using it for lighting. This won't be physically accurate (we'll look into that later). That being said, we have the environment map, so we might as well use it.
In order to do that, though, we need to change our shader to do lighting in world space instead of tangent space, because our environment map is in world space. Because there are a lot of changes, I'll post the whole shader here:
```wgsl
// Vertex shader
@@ -1291,7 +1156,7 @@ fn fs_main(in: VertexOutput) -> @location(0) vec4<f32> {
// NEW!
// Adjust the tangent and bitangent using the Gramm-Schmidt process
// This makes sure that they are perpendicular to each other and the
// normal of the surface.
let world_tangent = normalize(in.world_tangent - dot(in.world_tangent, in.world_normal) * in.world_normal);
let world_bitangent = cross(world_tangent, in.world_normal);
@@ -1328,16 +1193,7 @@ fn fs_main(in: VertexOutput) -> @location(0) vec4<f32> {
}
```
A little note on the reflection math. The `view_dir` gives us the direction to the camera from the surface. The reflection math needs the direction from the camera to the surface, so we negate `view_dir`. We then use WGSL's built-in `reflect` function to reflect the inverted `view_dir` about the `world_normal`. This gives us a direction that we can use to sample the environment map and get the color of the sky in that direction. Just looking at the reflection component gives us the following:
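For reference, here's what the `reflect` built-in computes, sketched in Rust:

```rust
// Sketch of what WGSL's built-in `reflect(d, n)` computes: d - 2 * dot(d, n) * n.
// `d` points from the camera toward the surface (hence the negation in the
// shader) and `n` is the unit surface normal.
fn reflect(d: [f32; 3], n: [f32; 3]) -> [f32; 3] {
    let dn = d[0] * n[0] + d[1] * n[1] + d[2] * n[2];
    [0usize, 1, 2].map(|i| d[i] - 2.0 * dn * n[i])
}

fn main() {
    // Looking straight down -Z at a +Z-facing surface bounces straight back.
    println!("{:?}", reflect([0.0, 0.0, -1.0], [0.0, 0.0, 1.0])); // → [0.0, 0.0, 1.0]
}
```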
![just-reflections](./just-reflections.png)
@@ -1349,8 +1205,7 @@ Here's the finished scene:
<div class="warn">
If your browser doesn't support WebGPU, this example won't work for you.
</div>
