The Pipeline
What's a pipeline?
If you're familiar with OpenGL, you may remember using shader programs. You can think of a pipeline as a more robust version of that. A pipeline describes all the actions the gpu will perform when acting on a set of data. In this section, we will be creating a RenderPipeline specifically.
Wait, shaders?
Shaders are mini programs that you send to the gpu to perform operations on your data. There are 3 main types of shaders: vertex, fragment, and compute. There are others, such as geometry shaders, but they're more of an advanced topic. For now, we're just going to use vertex and fragment shaders.
Vertex, fragment... what are those?
A vertex is a point in 3d space (it can also be 2d). These vertices are then bundled in groups of 2 to form lines and/or groups of 3 to form triangles.
Most modern rendering uses triangles to make all shapes, from simple ones (such as cubes) to complex ones (such as people). These triangles are stored as vertices, which are the points that make up the corners of the triangles.
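To make that concrete, here's a rough sketch in Rust of one triangle's worth of vertex data. The Vertex type below is hypothetical and isn't used anywhere in this tutorial; it just illustrates the idea of "three points, one triangle" (the positions happen to match the ones we'll hard-code in the vertex shader shortly).
// A purely illustrative sketch; we'll define a real vertex type when we cover buffers.
struct Vertex {
    x: f32,
    y: f32,
}

fn main() {
    // Three 2d points, bundled together, describe one triangle.
    let triangle = [
        Vertex { x: 0.0, y: 0.5 },
        Vertex { x: -0.5, y: -0.5 },
        Vertex { x: 0.5, y: -0.5 },
    ];
    println!("this triangle has {} vertices", triangle.len());
}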
We use a vertex shader to manipulate the vertices, in order to transform the shape to look the way we want it.
The vertices are then converted into fragments. Every pixel in the result image gets at least one fragment. Each fragment has a color that will be copied to its corresponding pixel. The fragment shader decides what color the fragment will be.
GLSL and SPIR-V
Shaders in wgpu are written in a binary language called SPIR-V. SPIR-V is designed for computers to read, not people, so we're going to use a language called GLSL (specifically, with wgpu we need to use the Vulkan flavor of GLSL) to write our code, and then convert that to SPIR-V.
In order to do that, we're going to need something to do the conversion. Add the following crate to your dependencies.
[dependencies]
# ...
shaderc = "0.7"
We'll use this in a bit, but first let's create the shaders.
Writing the shaders
In the same folder as main.rs, create two (2) files: shader.vert and shader.frag. Write the following code in shader.vert.
// shader.vert
#version 450
const vec2 positions[3] = vec2[3](
vec2(0.0, 0.5),
vec2(-0.5, -0.5),
vec2(0.5, -0.5)
);
void main() {
gl_Position = vec4(positions[gl_VertexIndex], 0.0, 1.0);
}
If you've used C/C++ before (or even Java), this syntax should be somewhat familiar. There are some key differences though, which I'll go over.
First up, there's the #version 450 line. This specifies the version of GLSL that we're using. I've gone with a later version so we can use many of the advanced GLSL features.
We're currently storing vertex data in the shader as positions. This is bad practice as it limits what we can draw with this shader, and it can make the shader super big if we want to use a complex model. Using actual vertex data requires us to use Buffers, which we'll talk about next time, so we'll turn a blind eye for now.
There's also gl_Position and gl_VertexIndex, which are built-in variables: gl_Position is where the vertex's position data is stored as 4 floats, and gl_VertexIndex is the index of the current vertex in the vertex data.
Next up, shader.frag.
// shader.frag
#version 450
layout(location=0) out vec4 f_color;
void main() {
f_color = vec4(0.3, 0.2, 0.1, 1.0);
}
The part that sticks out is the layout(location=0) out vec4 f_color; line. In GLSL you can create in and out variables in your shaders. An in variable will expect data from outside the shader. In the case of the vertex shader, this will come from vertex data. In a fragment shader, an in variable will pull from out variables in the vertex shader. When an out variable is defined in the fragment shader, it means that the value is meant to be written to a buffer to be used outside the shader program.
in and out variables can also specify a layout. In shader.frag we specify that the out vec4 f_color should be layout(location=0); this means that the value of f_color will be saved to whatever buffer is at location zero in our application. In most cases, location=0 is the current texture from the swapchain, aka the screen.
You may have noticed that shader.vert doesn't have any in or out variables. gl_Position functions as an out variable for vertex position data, so shader.vert doesn't need any out variables. If we wanted to send more data to the fragment shader, we could specify an out variable in shader.vert and an in variable in shader.frag. Note: the locations have to match, otherwise the GLSL code will fail to compile.
// shader.vert
layout(location=0) out vec4 v_color;
// shader.frag
layout(location=0) in vec4 v_color;
How do we use the shaders?
This is the part where we finally make the thing in the title: the pipeline. First, let's modify State to include the following.
// main.rs
struct State {
surface: wgpu::Surface,
device: wgpu::Device,
queue: wgpu::Queue,
sc_desc: wgpu::SwapChainDescriptor,
swap_chain: wgpu::SwapChain,
size: winit::dpi::PhysicalSize<u32>,
// NEW!
render_pipeline: wgpu::RenderPipeline,
}
Now let's move to the new() method and start making the pipeline. We'll have to load in those shaders we made earlier, as the render_pipeline requires those.
let vs_src = include_str!("shader.vert");
let fs_src = include_str!("shader.frag");
let mut compiler = shaderc::Compiler::new().unwrap();
let vs_spirv = compiler.compile_into_spirv(vs_src, shaderc::ShaderKind::Vertex, "shader.vert", "main", None).unwrap();
let fs_spirv = compiler.compile_into_spirv(fs_src, shaderc::ShaderKind::Fragment, "shader.frag", "main", None).unwrap();
let vs_data = wgpu::util::make_spirv(vs_spirv.as_binary_u8());
let fs_data = wgpu::util::make_spirv(fs_spirv.as_binary_u8());
let vs_module = device.create_shader_module(&wgpu::ShaderModuleDescriptor {
label: Some("Vertex Shader"),
source: vs_data,
flags: wgpu::ShaderFlags::default(),
});
let fs_module = device.create_shader_module(&wgpu::ShaderModuleDescriptor {
label: Some("Fragment Shader"),
source: fs_data,
flags: wgpu::ShaderFlags::default(),
});
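A side note: if a shader has a syntax error, the unwrap() calls above will panic with shaderc's error message, which is usually enough while following along. If you'd rather report the error yourself, here's a minimal sketch (the same compile_into_spirv call as above, just matching on the Result instead of unwrapping):
// A sketch of handling a failed compile without unwrap(). shaderc's error type
// implements Display, so we can print the compiler's diagnostics directly.
let vs_spirv = match compiler.compile_into_spirv(
    vs_src,
    shaderc::ShaderKind::Vertex,
    "shader.vert",
    "main",
    None,
) {
    Ok(artifact) => artifact,
    Err(e) => panic!("failed to compile shader.vert: {}", e),
};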
One more thing: we need to create a PipelineLayout. We'll get more into this after we cover Buffers.
let render_pipeline_layout =
device.create_pipeline_layout(&wgpu::PipelineLayoutDescriptor {
label: Some("Render Pipeline Layout"),
bind_group_layouts: &[],
push_constant_ranges: &[],
});
Finally, we have all we need to create the render_pipeline.
let render_pipeline = device.create_render_pipeline(&wgpu::RenderPipelineDescriptor {
label: Some("Render Pipeline"),
layout: Some(&render_pipeline_layout),
vertex: wgpu::VertexState {
module: &vs_module,
entry_point: "main", // 1.
buffers: &[], // 2.
},
fragment: Some(wgpu::FragmentState { // 3.
module: &fs_module,
entry_point: "main",
targets: &[wgpu::ColorTargetState { // 4.
format: sc_desc.format,
alpha_blend: wgpu::BlendState::REPLACE,
color_blend: wgpu::BlendState::REPLACE,
write_mask: wgpu::ColorWrite::ALL,
}],
}),
// continued ...
Several things to note here:
- Here you can specify which function inside the shader should be called, which is known as the entry_point. I normally use "main" as that's what it would be in OpenGL, but feel free to use whatever name you like. Just make sure you specify the same entry point when you compile your shaders as you do here when you hand them to your pipeline.
- The buffers field tells wgpu what type of vertices we want to pass to the vertex shader. We're specifying the vertices in the vertex shader itself, so we'll leave this empty. We'll put something there in the next tutorial.
- The fragment field is technically optional, so you have to wrap it in Some(). We need it if we want to store color data to the swap_chain.
- The targets field tells wgpu what color outputs it should set up. Currently we only need one for the swap_chain. We use the swap_chain's format so that copying to it is easy, and we specify that the blending should just replace old pixel data with new data. We also tell wgpu to write to all colors: red, blue, green, and alpha. We'll talk more about ColorTargetState when we talk about textures.
primitive: wgpu::PrimitiveState {
topology: wgpu::PrimitiveTopology::TriangleList, // 1.
strip_index_format: None,
front_face: wgpu::FrontFace::Ccw, // 2.
cull_mode: wgpu::CullMode::Back,
// Setting this to anything other than Fill requires Features::NON_FILL_POLYGON_MODE
polygon_mode: wgpu::PolygonMode::Fill,
},
// continued ...
The primitive field describes how to interpret our vertices when converting them into triangles.
- Using PrimitiveTopology::TriangleList means that every three vertices will correspond to one triangle.
- The front_face and cull_mode fields tell wgpu how to determine whether a given triangle is facing forward or not. FrontFace::Ccw means that a triangle is facing forward if its vertices are arranged in a counter-clockwise direction. Triangles that are not considered facing forward are culled (not included in the render), as specified by CullMode::Back. We'll cover culling a bit more when we cover Buffers.
depth_stencil: None, // 1.
multisample: wgpu::MultisampleState {
count: 1, // 2.
mask: !0, // 3.
alpha_to_coverage_enabled: false, // 4.
},
});
The rest of the method is pretty simple:
- We're not using a depth/stencil buffer currently, so we leave depth_stencil as None. This will change later.
- count determines how many samples this pipeline will use. Multisampling is a complex topic, so we won't get into it here.
- mask specifies which samples should be active. In this case, we are using all of them (!0 sets every bit).
- alpha_to_coverage_enabled has to do with anti-aliasing. We're not covering anti-aliasing here, so we'll leave this as false for now.
Now all we have to do is save the render_pipeline to State, and then we can use it!
// new()
Self {
surface,
device,
queue,
sc_desc,
swap_chain,
size,
// NEW!
render_pipeline,
}
Using a pipeline
If you run your program now, it'll take a little longer to start, but it will still show the blue screen we got in the last section. That's because, while we created the render_pipeline, we still need to modify the code in render() to actually use it.
// render()
// ...
{
// 1.
let mut render_pass = encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
label: Some("Render Pass"),
color_attachments: &[
wgpu::RenderPassColorAttachmentDescriptor {
attachment: &frame.view,
resolve_target: None,
ops: wgpu::Operations {
load: wgpu::LoadOp::Clear(
wgpu::Color {
r: 0.1,
g: 0.2,
b: 0.3,
a: 1.0,
}
),
store: true,
}
}
],
depth_stencil_attachment: None,
});
// NEW!
render_pass.set_pipeline(&self.render_pipeline); // 2.
render_pass.draw(0..3, 0..1); // 3.
}
// ...
We didn't change much, but let's talk about what we did change.
- We renamed _render_pass to render_pass and made it mutable.
- We set the pipeline on the render_pass using the one we just created.
- We tell wgpu to draw something with 3 vertices and 1 instance. This is where gl_VertexIndex comes from; the sketch below shows roughly how the two line up.
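As a mental model only (this is an illustration, not how wgpu actually executes anything), draw(0..3, 0..1) asks the GPU to run the vertex shader once for each index in 0..3, and gl_VertexIndex is that index into our hard-coded positions array:
// A conceptual sketch of draw(0..3, 0..1), purely for intuition.
fn main() {
    // The same positions we hard-coded in shader.vert.
    let positions: [[f32; 2]; 3] = [[0.0, 0.5], [-0.5, -0.5], [0.5, -0.5]];
    for instance in 0..1 {
        for vertex_index in 0..3 {
            // In the shader, vertex_index is gl_VertexIndex, and the looked-up
            // position becomes gl_Position (with z = 0.0 and w = 1.0 appended).
            let [x, y] = positions[vertex_index];
            println!("instance {}: vertex {} at ({}, {})", instance, vertex_index, x, y);
        }
    }
}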
With all that you should be seeing a lovely brown triangle.
Compiling shaders and include_spirv
Currently we're compiling our shaders when our program starts up, and while this is a valid way of doing things, it slows down our program's startup considerably. It also prevents us from using wgpu's include_spirv convenience macro, which would inline the SPIR-V code directly. Doing this would also remove our dependency on shaderc (at least for the runtime code).
We can do this using a build script. A build script is a file that runs when cargo is compiling your project. We can use it for all sorts of things including compiling our shaders!
Add a file called build.rs at the same level as the src directory. It should be in the same folder as your Cargo.toml.
We'll start writing code in it in a bit. First, we need to add some things to our Cargo.toml.
[dependencies]
image = "0.23"
winit = "0.22"
# shaderc = "0.7" # REMOVED!
cgmath = "0.17"
wgpu = "0.7"
futures = "0.3"
# NEW!
[build-dependencies]
anyhow = "1.0"
fs_extra = "1.1"
glob = "0.3"
shaderc = "0.7"
We've removed shaderc from our dependencies and added a new [build-dependencies] block. These are dependencies for our build script. We already know about shaderc; the other crates are there to simplify dealing with the file system and with Rust errors.
Now we can put some code in our build.rs.
use anyhow::*;
use glob::glob;
use std::fs::{read_to_string, write};
use std::path::PathBuf;
struct ShaderData {
src: String,
src_path: PathBuf,
spv_path: PathBuf,
kind: shaderc::ShaderKind,
}
impl ShaderData {
pub fn load(src_path: PathBuf) -> Result<Self> {
let extension = src_path
.extension()
.context("File has no extension")?
.to_str()
.context("Extension cannot be converted to &str")?;
let kind = match extension {
"vert" => shaderc::ShaderKind::Vertex,
"frag" => shaderc::ShaderKind::Fragment,
"comp" => shaderc::ShaderKind::Compute,
_ => bail!("Unsupported shader: {}", src_path.display()),
};
let src = read_to_string(src_path.clone())?;
let spv_path = src_path.with_extension(format!("{}.spv", extension));
Ok(Self {
src,
src_path,
spv_path,
kind,
})
}
}
fn main() -> Result<()> {
// Collect all shaders recursively within /src/
let mut shader_paths = [
glob("./src/**/*.vert")?,
glob("./src/**/*.frag")?,
glob("./src/**/*.comp")?,
];
// This could be parallelized
let shaders = shader_paths
.iter_mut()
.flatten()
.map(|glob_result| ShaderData::load(glob_result?))
.collect::<Vec<Result<_>>>()
.into_iter()
.collect::<Result<Vec<_>>>()?;
let mut compiler = shaderc::Compiler::new().context("Unable to create shader compiler")?;
// This can't be parallelized. The [shaderc::Compiler] is not
// thread safe. Also, it creates a lot of resources. You could
// spawn multiple processes to handle this, but it would probably
// be better just to only compile shaders that have been changed
// recently.
for shader in shaders {
// This tells cargo to rerun this script if something in /src/ changes.
println!("cargo:rerun-if-changed={}", shader.src_path.as_os_str().to_str().unwrap());
let compiled = compiler.compile_into_spirv(
&shader.src,
shader.kind,
&shader.src_path.to_str().unwrap(),
"main",
None,
)?;
write(shader.spv_path, compiled.as_binary_u8())?;
}
Ok(())
}
With that in place, we can replace our shader compilation code in main.rs with just two lines!
let vs_module = device.create_shader_module(&wgpu::include_spirv!("shader.vert.spv"));
let fs_module = device.create_shader_module(&wgpu::include_spirv!("shader.frag.spv"));
I'm glossing over the code in the build script, as this guide is focused on wgpu-related topics. Designing build scripts is a topic in and of itself, and going into it in detail would be quite a long tangent. You can learn more about build scripts in The Cargo Book.
Challenge
Create a second pipeline that uses the triangle's position data to create a color that it then sends to the fragment shader to use for f_color. Have the app swap between these when you press the spacebar. Hint: use in and out variables in a separate shader.