Up to this point we have been drawing super simple shapes. While we can make a game with just triangles, trying to draw highly detailed objects would massively limit what devices could even run our game. However, we can get around this problem with textures.
Textures are images overlaid on a triangle mesh to make the mesh seem more detailed. There are multiple types of textures, such as normal maps, bump maps, specular maps, and diffuse maps. We're going to talk about diffuse maps, or more simply, the color texture.
In State’s new() method add the following just after creating the swap_chain:
```rust
let swap_chain = device.create_swap_chain(&surface, &sc_desc); // NEW!

let diffuse_bytes = include_bytes!("happy-tree.png");
let diffuse_image = image::load_from_memory(diffuse_bytes).unwrap();
let diffuse_rgba = diffuse_image.as_rgba8().unwrap();

use image::GenericImageView;
let dimensions = diffuse_image.dimensions();
```
Here we grab the bytes from our image file and load them into an image, which we then convert into a Vec of RGBA bytes. We also save the image's dimensions for when we create the actual Texture.
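If you want a feel for the layout of those RGBA bytes, here's a toy stand-in. The real code gets its bytes from `image::load_from_memory`; this hypothetical `solid_rgba` helper just builds a buffer by hand to show what the upload expects: `width * height` pixels, 4 bytes each, row by row.

```rust
// Illustrative only: builds a solid-color RGBA buffer with the same
// layout as the Vec we get from the image crate.
fn solid_rgba(width: u32, height: u32, pixel: [u8; 4]) -> Vec<u8> {
    std::iter::repeat(pixel)
        .take((width * height) as usize)
        .flatten()
        .collect()
}

fn main() {
    // Four opaque red pixels: one byte per channel, four channels per pixel.
    let rgba = solid_rgba(2, 2, [255, 0, 0, 255]);
    assert_eq!(rgba.len(), 4 * 2 * 2);
}
```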
```rust
let texture_size = wgpu::Extent3d {
    width: dimensions.0,
    height: dimensions.1,
    depth_or_array_layers: 1,
};
let diffuse_texture = device.create_texture(
    &wgpu::TextureDescriptor {
        // All textures are stored as 3D, we represent our 2D texture
        // by setting depth to 1.
        size: texture_size,
        mip_level_count: 1, // We'll talk about this a little later
        sample_count: 1,
        dimension: wgpu::TextureDimension::D2,
        // Most images are stored using sRGB, so we need to reflect that here.
        format: wgpu::TextureFormat::Rgba8UnormSrgb,
        // SAMPLED tells wgpu that we want to use this texture in shaders
        // COPY_DST means that we want to copy data to this texture
        usage: wgpu::TextureUsage::SAMPLED | wgpu::TextureUsage::COPY_DST,
        label: Some("diffuse_texture"),
    }
);
```
Getting data into a Texture
The Texture struct has no methods to interact with the data directly. However, we can use a method on the queue we created earlier called write_texture to load the texture in. Let’s take a look at how we do that:
```rust
queue.write_texture(
    // Tells wgpu where to copy the pixel data
    wgpu::ImageCopyTexture {
        texture: &diffuse_texture,
        mip_level: 0,
        origin: wgpu::Origin3d::ZERO,
    },
    // The actual pixel data
    diffuse_rgba,
    // The layout of the texture
    wgpu::ImageDataLayout {
        offset: 0,
        bytes_per_row: std::num::NonZeroU32::new(4 * dimensions.0),
        rows_per_image: std::num::NonZeroU32::new(dimensions.1),
    },
    texture_size,
);
```
The old way of writing data to a texture was to copy the pixel data to a buffer and then copy that buffer to the texture. Using write_texture is a bit more efficient as it uses one less buffer, but I'll mention the old way in case you need it.
The bytes_per_row field needs some consideration. This value needs to be a multiple of 256. Check out the gif tutorial for more details.
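The alignment arithmetic is easy to get wrong, so here's a quick sketch. The `padded_bytes_per_row` helper is hypothetical (not part of the tutorial code); it assumes 4 bytes per RGBA pixel and wgpu's 256-byte row alignment:

```rust
// Rounds a row of RGBA pixels up to the next multiple of 256 bytes,
// the alignment wgpu requires for buffer-to-texture copies.
fn padded_bytes_per_row(width: u32) -> u32 {
    let unpadded = width * 4; // 4 bytes per RGBA pixel
    let align = 256;
    ((unpadded + align - 1) / align) * align
}

fn main() {
    // A 100-pixel-wide row is 400 bytes, padded up to 512.
    assert_eq!(padded_bytes_per_row(100), 512);
    // A 64-pixel-wide row is exactly 256 bytes; no padding needed.
    assert_eq!(padded_bytes_per_row(64), 256);
}
```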
TextureViews and Samplers
Now that our texture has data in it, we need a way to use it. This is where a TextureView and a Sampler come in. A TextureView offers us a view into our texture. A Sampler controls how the Texture is sampled. Sampling works similar to the eyedropper tool in GIMP/Photoshop. Our program supplies a coordinate on the texture (known as a texture coordinate), and the sampler then returns the corresponding color based on the texture and some internal parameters.
Let’s define our diffuse_texture_view and diffuse_sampler now:
```rust
// We don't need to configure the texture view much, so let's
// let wgpu define it.
let diffuse_texture_view = diffuse_texture.create_view(&wgpu::TextureViewDescriptor::default());
let diffuse_sampler = device.create_sampler(&wgpu::SamplerDescriptor {
    address_mode_u: wgpu::AddressMode::ClampToEdge,
    address_mode_v: wgpu::AddressMode::ClampToEdge,
    address_mode_w: wgpu::AddressMode::ClampToEdge,
    mag_filter: wgpu::FilterMode::Linear,
    min_filter: wgpu::FilterMode::Nearest,
    mipmap_filter: wgpu::FilterMode::Nearest,
    ..Default::default()
});
```
The address_mode_* parameters determine what to do if the sampler gets a texture coordinate that’s outside of the texture itself. We have a few options to choose from:
ClampToEdge: Any texture coordinates outside the texture will return the color of the nearest pixel on the edges of the texture.
Repeat: The texture will repeat as texture coordinates exceed the texture's dimensions.
MirrorRepeat: Similar to Repeat, but the image will flip when going over boundaries.
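To build intuition for the three modes, here's a CPU-side sketch that maps an out-of-range texture coordinate back into [0.0, 1.0] the way each mode would. These helper names are made up for illustration; the real work happens in the sampler on the GPU.

```rust
// Any coordinate past the edge sticks to the edge.
fn clamp_to_edge(u: f32) -> f32 {
    u.clamp(0.0, 1.0)
}

// Keep only the fractional part, so the texture tiles.
fn repeat(u: f32) -> f32 {
    u - u.floor()
}

// Like repeat, but every other tile runs backwards.
fn mirror_repeat(u: f32) -> f32 {
    let period = u.rem_euclid(2.0); // position within one back-and-forth cycle
    if period <= 1.0 { period } else { 2.0 - period }
}

fn main() {
    assert_eq!(clamp_to_edge(1.7), 1.0);
    assert_eq!(repeat(1.25), 0.25);
    assert_eq!(mirror_repeat(1.25), 0.75); // 1.25 lands on the mirrored tile
}
```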
The mag_filter and min_filter options describe what to do when a fragment covers multiple pixels, or there are multiple fragments for a single pixel. This often comes into play when viewing a surface from up close, or from far away.
Linear: Attempt to blend the in-between fragments so that they seem to flow together.
Nearest: In-between fragments will use the color of the nearest pixel. This creates an image that’s crisper from far away, but pixelated up close. This can be desirable, however, if your textures are designed to be pixelated, like in pixel art games, or voxel games like Minecraft.
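The difference between the two filters can be sketched in one dimension. This is a simplified model (real samplers filter in 2D and handle texel centers and edge cases more carefully), but it shows the core idea: Nearest snaps to one texel, Linear blends the two neighbors.

```rust
// `t` is a coordinate in texel space, e.g. 0.5 sits halfway
// between texel 0 and texel 1.
fn sample_nearest(texels: &[f32], t: f32) -> f32 {
    let i = (t.round() as usize).min(texels.len() - 1);
    texels[i] // snap to the closest texel
}

fn sample_linear(texels: &[f32], t: f32) -> f32 {
    let i = t.floor() as usize;
    let j = (i + 1).min(texels.len() - 1);
    let frac = t - t.floor();
    texels[i] * (1.0 - frac) + texels[j] * frac // blend the two neighbors
}

fn main() {
    let texels = [0.0, 1.0]; // a tiny two-texel grayscale "texture"
    assert_eq!(sample_nearest(&texels, 0.4), 0.0); // snaps to texel 0
    assert_eq!(sample_linear(&texels, 0.5), 0.5);  // halfway blend
}
```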
Mipmaps are a complex topic, and will require their own section in the future. For now, we can say that mipmap_filter functions similar to (mag/min)_filter as it tells the sampler how to blend between mipmaps.
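One concrete detail we can nail down now: a full mip chain halves each dimension until it reaches 1x1, so the number of levels is floor(log2(max(width, height))) + 1. A small helper (my own, not from the tutorial) to compute what you'd pass as mip_level_count:

```rust
// floor(log2(max(width, height))) + 1, computed with bit tricks:
// the number of levels equals the bit position of the highest set bit, plus one.
fn mip_level_count(width: u32, height: u32) -> u32 {
    32 - width.max(height).leading_zeros()
}

fn main() {
    assert_eq!(mip_level_count(256, 256), 9); // 256, 128, ..., 2, 1
    assert_eq!(mip_level_count(300, 200), 9); // floor(log2(300)) + 1
    assert_eq!(mip_level_count(1, 1), 1);     // a 1x1 texture has one level
}
```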
All these different resources are nice and all, but they don’t do us much good if we can’t plug them in anywhere. This is where BindGroups and PipelineLayouts come in.
A BindGroup describes a set of resources and how they can be accessed by a shader. We create a BindGroup using a BindGroupLayout. Let’s make one of those first.
```rust
let texture_bind_group_layout = device.create_bind_group_layout(
    &wgpu::BindGroupLayoutDescriptor {
        entries: &[
            wgpu::BindGroupLayoutEntry {
                binding: 0,
                visibility: wgpu::ShaderStage::FRAGMENT,
                ty: wgpu::BindingType::Texture {
                    multisampled: false,
                    view_dimension: wgpu::TextureViewDimension::D2,
                    sample_type: wgpu::TextureSampleType::Float { filterable: true },
                },
                count: None,
            },
            wgpu::BindGroupLayoutEntry {
                binding: 1,
                visibility: wgpu::ShaderStage::FRAGMENT,
                ty: wgpu::BindingType::Sampler {
                    // This is only for TextureSampleType::Depth
                    comparison: false,
                    // This should be true if the sample_type of the texture is:
                    // TextureSampleType::Float { filterable: true }
                    // Otherwise you'll get an error.
                    filtering: true,
                },
                count: None,
            },
        ],
        label: Some("texture_bind_group_layout"),
    }
);
```
Our texture_bind_group_layout has two entries: one for a sampled texture at binding 0, and one for a sampler at binding 1. Both of these bindings are visible only to the fragment shader as specified by FRAGMENT. The possible values for this field are any bitwise combination of NONE, VERTEX, FRAGMENT, or COMPUTE. Most of the time we’ll only use FRAGMENT for textures and samplers, but it’s good to know what else is available.
Looking at this you might get a bit of déjà vu! That’s because a BindGroup is a more specific declaration of the BindGroupLayout. The reason why they’re separate is it allows us to swap out BindGroups on the fly, so long as they all share the same BindGroupLayout. Each texture and sampler we create will need to be added to a BindGroup. For our purposes, we’ll create a new bind group for each texture.
Remember the PipelineLayout we created back in the pipeline section? Now we finally get to use it! The PipelineLayout contains a list of BindGroupLayouts that the pipeline can use. Modify render_pipeline_layout to use our texture_bind_group_layout.
There’s a few things we need to change about our Vertex definition. Up to now we’ve been using a color attribute to set the color of our mesh. Now that we’re using a texture, we want to replace our color with tex_coords. These coordinates will then be passed to the Sampler to retrieve the appropriate color.
With our new Vertex structure in place, it's time to update our shaders. We'll first need to pass our tex_coords into the vertex shader and then hand them over to the fragment shader, which will use them to get the final color from the Sampler. Let's start with the vertex shader:
Now that we have our vertex shader outputting our tex_coords, we need to change the fragment shader to take them in. With these coordinates, we’ll finally be able to use our sampler to get a color from our texture.
The variables t_diffuse and s_diffuse are what’s known as uniforms. We’ll go over uniforms more in the cameras section. For now, all we need to know is that group() corresponds to the 1st parameter in set_bind_group() and binding() relates to the binding specified when we created the BindGroupLayout and BindGroup.
If we run our program now we should get the following result:
That's weird, our tree is upside down! This is because wgpu's world coordinates have the y-axis pointing up, while texture coordinates have the y-axis pointing down. In other words, (0, 0) in texture coordinates corresponds to the top-left of the image, while (1, 1) is the bottom-right.
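The fix is to flip the v (y) component of each texture coordinate so the two conventions agree. A minimal sketch, with a stand-in `Vertex` type in place of the tutorial's own struct:

```rust
// Stand-in for the tutorial's Vertex; only the tex_coords field matters here.
struct Vertex {
    tex_coords: [f32; 2],
}

// Flip v so that image-space "down" matches wgpu's texture-coordinate convention.
fn flip_v(v: Vertex) -> Vertex {
    Vertex {
        tex_coords: [v.tex_coords[0], 1.0 - v.tex_coords[1]],
    }
}

fn main() {
    // A vertex that should show the top of the image gets v = 1.0 flipped to 0.0.
    let flipped = flip_v(Vertex { tex_coords: [0.25, 1.0] });
    assert_eq!(flipped.tex_coords, [0.25, 0.0]);
}
```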
With that in place, we now have our tree right-side up on our hexagon:
Cleaning things up
For convenience's sake, let's pull our texture code into its own module. We'll first need to add the anyhow crate to our Cargo.toml file to simplify error handling:
Note that we’re returning a CommandBuffer with our texture. This means we can load multiple textures at the same time, and then submit all their command buffers at once.
We need to import texture.rs as a module, so somewhere at the top of main.rs add the following.
```rust
mod texture;
```
The texture creation code in new() now gets a lot simpler:
```rust
let swap_chain = device.create_swap_chain(&surface, &sc_desc);

let diffuse_bytes = include_bytes!("happy-tree.png"); // CHANGED!
let diffuse_texture = texture::Texture::from_bytes(&device, &queue, diffuse_bytes, "happy-tree.png").unwrap(); // CHANGED!

// Everything up until `let texture_bind_group_layout = ...` can now be removed.
```
We still need to store the bind group separately so that Texture doesn't need to know how the BindGroup is laid out. Creating the diffuse_bind_group changes slightly to use the view and sampler fields of our diffuse_texture: