Textures and bind groups

Up to this point we have been drawing super simple shapes. While we can make a game with just triangles, trying to draw highly detailed objects would massively limit what devices could even run our game. However, we can get around this problem with textures.


Textures are images overlaid on a triangle mesh to make it seem more detailed. There are multiple types of textures such as normal maps, bump maps, specular maps and diffuse maps. We’re going to talk about diffuse maps, or more simply, the color texture.


Loading an image from a file

If we want to map an image to our mesh, we first need an image. Let’s use this happy little tree:


Happy tree

We’ll use the image crate to load our tree. We already added it to our dependencies in the first section, so all we have to do is use it.


In State’s new() method add the following just after creating the swap_chain:


let swap_chain = device.create_swap_chain(&surface, &sc_desc);
// NEW!

let diffuse_bytes = include_bytes!("happy-tree.png");
let diffuse_image = image::load_from_memory(diffuse_bytes).unwrap();
let diffuse_rgba = diffuse_image.as_rgba8().unwrap();

use image::GenericImageView;
let dimensions = diffuse_image.dimensions();

Here we grab the bytes from our image file and load them into an image, which is then converted into a buffer of RGBA bytes. We also save the image’s dimensions for when we create the actual Texture.


Now, let’s create the Texture:


let texture_size = wgpu::Extent3d {
    width: dimensions.0,
    height: dimensions.1,
    depth_or_array_layers: 1,
};
let diffuse_texture = device.create_texture(
    &wgpu::TextureDescriptor {
        // All textures are stored as 3D, we represent our 2D texture
        // by setting depth to 1.
        size: texture_size,
        mip_level_count: 1, // We'll talk about this a little later
        sample_count: 1,
        dimension: wgpu::TextureDimension::D2,
        // Most images are stored using sRGB so we need to reflect that here.
        format: wgpu::TextureFormat::Rgba8UnormSrgb,
        // SAMPLED tells wgpu that we want to use this texture in shaders
        // COPY_DST means that we want to copy data to this texture
        usage: wgpu::TextureUsage::SAMPLED | wgpu::TextureUsage::COPY_DST,
        label: Some("diffuse_texture"),
    }
);

Getting data into a Texture

The Texture struct has no methods to interact with the data directly. However, we can use a method on the queue we created earlier called write_texture to load the texture in. Let’s take a look at how we do that:


queue.write_texture(
    // Tells wgpu where to copy the pixel data
    wgpu::ImageCopyTexture {
        texture: &diffuse_texture,
        mip_level: 0,
        origin: wgpu::Origin3d::ZERO,
    },
    // The actual pixel data
    diffuse_rgba,
    // The layout of the texture
    wgpu::ImageDataLayout {
        offset: 0,
        bytes_per_row: std::num::NonZeroU32::new(4 * dimensions.0),
        rows_per_image: std::num::NonZeroU32::new(dimensions.1),
    },
    texture_size,
);

The old way of writing data to a texture was to copy the pixel data to a buffer and then copy it to the texture. Using write_texture is a bit more efficient as it uses one less buffer - I’ll leave it here though in case you need it.


let buffer = device.create_buffer_init(
    &wgpu::util::BufferInitDescriptor {
        label: Some("Temp Buffer"),
        contents: &diffuse_rgba,
        usage: wgpu::BufferUsage::COPY_SRC,
    }
);

let mut encoder = device.create_command_encoder(&wgpu::CommandEncoderDescriptor {
    label: Some("texture_buffer_copy_encoder"),
});

encoder.copy_buffer_to_texture(
    wgpu::ImageCopyBuffer {
        buffer: &buffer,
        offset: 0,
        bytes_per_row: 4 * dimensions.0,
        rows_per_image: dimensions.1,
    },
    wgpu::ImageCopyTexture {
        texture: &diffuse_texture,
        mip_level: 0,
        array_layer: 0,
        origin: wgpu::Origin3d::ZERO,
    },
    texture_size,
);

queue.submit(std::iter::once(encoder.finish()));

The bytes_per_row field needs some consideration. This value needs to be a multiple of 256. Check out the gif tutorial for more details.

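
To see what that alignment means in practice, here is a small, hypothetical helper (the function name is ours, not part of wgpu's API) that rounds an unpadded row size up to the 256-byte boundary a buffer-to-texture copy requires:

```rust
// Hypothetical helper: round a row of RGBA pixels up to the 256-byte
// alignment that buffer-to-texture copies require. `write_texture`
// accepts tightly packed data, so this only matters for the buffer path.
fn padded_bytes_per_row(width: u32) -> u32 {
    let unpadded = 4 * width; // 4 bytes per Rgba8 pixel
    let align = 256; // value of wgpu's COPY_BYTES_PER_ROW_ALIGNMENT
    ((unpadded + align - 1) / align) * align
}

fn main() {
    // A 100-pixel-wide row is 400 bytes of pixel data, padded up to 512.
    println!("{}", padded_bytes_per_row(100));
}
```

When copying the padded data you would also need to lay out each row of pixels at this padded stride inside the source buffer, not just pass the larger number.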

TextureViews and Samplers

Now that our texture has data in it, we need a way to use it. This is where a TextureView and a Sampler come in. A TextureView offers us a view into our texture. A Sampler controls how the Texture is sampled. Sampling works similar to the eyedropper tool in GIMP/Photoshop. Our program supplies a coordinate on the texture (known as a texture coordinate), and the sampler then returns the corresponding color based on the texture and some internal parameters.


Let’s define our diffuse_texture_view and diffuse_sampler now:


// We don't need to configure the texture view much, so let's
// let wgpu define it.
let diffuse_texture_view = diffuse_texture.create_view(&wgpu::TextureViewDescriptor::default());
let diffuse_sampler = device.create_sampler(&wgpu::SamplerDescriptor {
    address_mode_u: wgpu::AddressMode::ClampToEdge,
    address_mode_v: wgpu::AddressMode::ClampToEdge,
    address_mode_w: wgpu::AddressMode::ClampToEdge,
    mag_filter: wgpu::FilterMode::Linear,
    min_filter: wgpu::FilterMode::Nearest,
    mipmap_filter: wgpu::FilterMode::Nearest,
    ..Default::default()
});

The address_mode_* parameters determine what to do if the sampler gets a texture coordinate that’s outside of the texture itself. We have a few options to choose from:


  • ClampToEdge: Any texture coordinates outside the texture will return the color of the nearest pixel on the edges of the texture.
  • Repeat: The texture will repeat as texture coordinates exceed the texture's dimensions.
  • MirrorRepeat: Similar to Repeat, but the image will flip when going over boundaries.
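
As a rough illustration, here is CPU-side pseudologic (not how you would call wgpu; the GPU sampler does this internally) showing how each mode maps an out-of-range coordinate back into the [0.0, 1.0] range:

```rust
// Sketch of the three address modes applied to a single texture coordinate.
fn clamp_to_edge(t: f32) -> f32 {
    t.clamp(0.0, 1.0) // stick to the nearest edge pixel
}

fn repeat(t: f32) -> f32 {
    t - t.floor() // keep only the fractional part, tiling the texture
}

fn mirror_repeat(t: f32) -> f32 {
    let period = (t / 2.0).floor() * 2.0;
    let local = t - period; // now in [0, 2)
    if local <= 1.0 { local } else { 2.0 - local } // flip every other tile
}

fn main() {
    println!("{}", clamp_to_edge(1.5));  // clamps to 1.0
    println!("{}", repeat(1.25));        // wraps to 0.25
    println!("{}", mirror_repeat(1.25)); // mirrors to 0.75
}
```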

sampler

The mag_filter and min_filter options describe what to do when a fragment covers multiple pixels, or there are multiple fragments for a single pixel. This often comes into play when viewing a surface from up close, or from far away.


There are 2 options:


  • Linear: Attempt to blend the in-between fragments so that they seem to flow together.
  • Nearest: In-between fragments will use the color of the nearest pixel. This creates an image that’s crisper from far away, but pixelated up close. This can be desirable, however, if your textures are designed to be pixelated, like in pixel art games, or voxel games like Minecraft.
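
To build an intuition for the difference, here is a sketch of the two modes over a 1-D row of pixel values (sample positions given in pixel space; the real sampler works on 2-D texels and normalized coordinates, so this is only illustrative):

```rust
// Nearest: snap the sample position to the closest pixel.
fn sample_nearest(pixels: &[f32], x: f32) -> f32 {
    let i = (x.round() as usize).min(pixels.len() - 1);
    pixels[i]
}

// Linear: blend the two neighboring pixels by the fractional position.
fn sample_linear(pixels: &[f32], x: f32) -> f32 {
    let i = x.floor() as usize;
    let j = (i + 1).min(pixels.len() - 1);
    let t = x - x.floor();
    pixels[i] * (1.0 - t) + pixels[j] * t
}

fn main() {
    let row = [0.0_f32, 1.0];
    println!("{}", sample_nearest(&row, 0.4)); // snaps to pixel 0
    println!("{}", sample_linear(&row, 0.4));  // blends the neighbors
}
```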

Mipmaps are a complex topic, and will require their own section in the future. For now, we can say that mipmap_filter functions similar to (mag/min)_filter as it tells the sampler how to blend between mipmaps.


I’m using some defaults for the other fields. If you want to see what they are, check the wgpu docs.


All these different resources are nice and all, but they don’t do us much good if we can’t plug them in anywhere. This is where BindGroups and PipelineLayouts come in.


The BindGroup

A BindGroup describes a set of resources and how they can be accessed by a shader. We create a BindGroup using a BindGroupLayout. Let’s make one of those first.


let texture_bind_group_layout = device.create_bind_group_layout(
    &wgpu::BindGroupLayoutDescriptor {
        entries: &[
            wgpu::BindGroupLayoutEntry {
                binding: 0,
                visibility: wgpu::ShaderStage::FRAGMENT,
                ty: wgpu::BindingType::Texture {
                    multisampled: false,
                    view_dimension: wgpu::TextureViewDimension::D2,
                    sample_type: wgpu::TextureSampleType::Float { filterable: true },
                },
                count: None,
            },
            wgpu::BindGroupLayoutEntry {
                binding: 1,
                visibility: wgpu::ShaderStage::FRAGMENT,
                ty: wgpu::BindingType::Sampler {
                    // This is only for TextureSampleType::Depth
                    comparison: false,
                    // This should be true if the sample_type of the texture is:
                    // TextureSampleType::Float { filterable: true }
                    // Otherwise you'll get an error.
                    filtering: true,
                },
                count: None,
            },
        ],
        label: Some("texture_bind_group_layout"),
    }
);

Our texture_bind_group_layout has two entries: one for a sampled texture at binding 0, and one for a sampler at binding 1. Both of these bindings are visible only to the fragment shader as specified by FRAGMENT. The possible values for this field are any bitwise combination of NONE, VERTEX, FRAGMENT, or COMPUTE. Most of the time we’ll only use FRAGMENT for textures and samplers, but it’s good to know what else is available.


With texture_bind_group_layout, we can now create our BindGroup:


let diffuse_bind_group = device.create_bind_group(
    &wgpu::BindGroupDescriptor {
        layout: &texture_bind_group_layout,
        entries: &[
            wgpu::BindGroupEntry {
                binding: 0,
                resource: wgpu::BindingResource::TextureView(&diffuse_texture_view),
            },
            wgpu::BindGroupEntry {
                binding: 1,
                resource: wgpu::BindingResource::Sampler(&diffuse_sampler),
            }
        ],
        label: Some("diffuse_bind_group"),
    }
);

Looking at this you might get a bit of déjà vu! That’s because a BindGroup is a more specific declaration of the BindGroupLayout. The reason why they’re separate is it allows us to swap out BindGroups on the fly, so long as they all share the same BindGroupLayout. Each texture and sampler we create will need to be added to a BindGroup. For our purposes, we’ll create a new bind group for each texture.


Now that we have our diffuse_bind_group, let’s add it to our State struct:

struct State {
    surface: wgpu::Surface,
    device: wgpu::Device,
    queue: wgpu::Queue,
    sc_desc: wgpu::SwapChainDescriptor,
    swap_chain: wgpu::SwapChain,
    size: winit::dpi::PhysicalSize<u32>,
    render_pipeline: wgpu::RenderPipeline,
    vertex_buffer: wgpu::Buffer,
    index_buffer: wgpu::Buffer,
    num_indices: u32,
    diffuse_bind_group: wgpu::BindGroup, // NEW!
}

And make sure we return these fields in the new method:

impl State {
    async fn new() -> Self {
        // ...
        Self {
            surface,
            device,
            queue,
            sc_desc,
            swap_chain,
            size,
            render_pipeline,
            vertex_buffer,
            index_buffer,
            num_indices,
            // NEW!
            diffuse_bind_group,
        }
    }
}

Now that we’ve got our BindGroup, we can use it in our render() function.

// render()
// ...
render_pass.set_pipeline(&self.render_pipeline);
render_pass.set_bind_group(0, &self.diffuse_bind_group, &[]); // NEW!
render_pass.set_vertex_buffer(0, self.vertex_buffer.slice(..));
render_pass.set_index_buffer(self.index_buffer.slice(..), wgpu::IndexFormat::Uint16);

render_pass.draw_indexed(0..self.num_indices, 0, 0..1);

PipelineLayout

Remember the PipelineLayout we created back in the pipeline section? Now we finally get to use it! The PipelineLayout contains a list of BindGroupLayouts that the pipeline can use. Modify render_pipeline_layout to use our texture_bind_group_layout.


async fn new(...) {
    // ...
    let render_pipeline_layout = device.create_pipeline_layout(
        &wgpu::PipelineLayoutDescriptor {
            label: Some("Render Pipeline Layout"), // NEW!
            bind_group_layouts: &[&texture_bind_group_layout], // NEW!
            push_constant_ranges: &[],
        }
    );
    // ...
}

A change to the VERTICES

There are a few things we need to change about our Vertex definition. Up to now we’ve been using a color attribute to set the color of our mesh. Now that we’re using a texture, we want to replace our color with tex_coords. These coordinates will then be passed to the Sampler to retrieve the appropriate color.


Since our tex_coords are two dimensional, we’ll change the field to take two floats instead of three.


First, we’ll change the Vertex struct:


#[repr(C)]
#[derive(Copy, Clone, Debug, bytemuck::Pod, bytemuck::Zeroable)]
struct Vertex {
    position: [f32; 3],
    tex_coords: [f32; 2], // NEW!
}

And then reflect these changes in the VertexBufferLayout:


impl Vertex {
    fn desc<'a>() -> wgpu::VertexBufferLayout<'a> {
        use std::mem;
        wgpu::VertexBufferLayout {
            array_stride: mem::size_of::<Vertex>() as wgpu::BufferAddress,
            step_mode: wgpu::InputStepMode::Vertex,
            attributes: &[
                wgpu::VertexAttribute {
                    offset: 0,
                    shader_location: 0,
                    format: wgpu::VertexFormat::Float32x3,
                },
                wgpu::VertexAttribute {
                    offset: mem::size_of::<[f32; 3]>() as wgpu::BufferAddress,
                    shader_location: 1,
                    format: wgpu::VertexFormat::Float32x2, // NEW!
                },
            ]
        }
    }
}

Lastly we need to change VERTICES itself. Replace the existing definition with the following:


// Changed
const VERTICES: &[Vertex] = &[
    Vertex { position: [-0.0868241, 0.49240386, 0.0], tex_coords: [0.4131759, 0.99240386], }, // A
    Vertex { position: [-0.49513406, 0.06958647, 0.0], tex_coords: [0.0048659444, 0.56958646], }, // B
    Vertex { position: [-0.21918549, -0.44939706, 0.0], tex_coords: [0.28081453, 0.050602943], }, // C
    Vertex { position: [0.35966998, -0.3473291, 0.0], tex_coords: [0.85967, 0.15267089], }, // D
    Vertex { position: [0.44147372, 0.2347359, 0.0], tex_coords: [0.9414737, 0.7347359], }, // E
];

Shader time

With our new Vertex structure in place it’s time to update our shaders. We’ll first need to pass our tex_coords into the vertex shader, then pass them over to our fragment shader to get the final color from the Sampler. Let’s start with the vertex shader:


// Vertex shader

struct VertexInput {
    [[location(0)]] position: vec3<f32>;
    [[location(1)]] tex_coords: vec2<f32>;
};

struct VertexOutput {
    [[builtin(position)]] clip_position: vec4<f32>;
    [[location(0)]] tex_coords: vec2<f32>;
};

[[stage(vertex)]]
fn main(
    model: VertexInput,
) -> VertexOutput {
    var out: VertexOutput;
    out.tex_coords = model.tex_coords;
    out.clip_position = vec4<f32>(model.position, 1.0);
    return out;
}

Now that we have our vertex shader outputting our tex_coords, we need to change the fragment shader to take them in. With these coordinates, we’ll finally be able to use our sampler to get a color from our texture.


// Fragment shader

[[group(0), binding(0)]]
var t_diffuse: texture_2d<f32>;
[[group(0), binding(1)]]
var s_diffuse: sampler;

[[stage(fragment)]]
fn main(in: VertexOutput) -> [[location(0)]] vec4<f32> {
    return textureSample(t_diffuse, s_diffuse, in.tex_coords);
}

The variables t_diffuse and s_diffuse are what’s known as uniforms. We’ll go over uniforms more in the cameras section. For now, all we need to know is that group() corresponds to the 1st parameter in set_bind_group() and binding() relates to the binding specified when we created the BindGroupLayout and BindGroup.


The results

If we run our program now we should get the following result:

result

That’s weird, our tree is upside down! This is because wgpu’s world coordinates have the y-axis pointing up, while texture coordinates have the y-axis pointing down. In other words, (0, 0) in texture coordinates corresponds to the top-left of the image, while (1, 1) is the bottom right.

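
The conversion is mechanical: keep u and replace v with 1.0 - v. A quick sketch of that (the helper name is ours, purely for illustration):

```rust
// Texture space puts (0, 0) at the top-left, so to reuse coordinates
// that assumed a bottom-left origin we flip the v (y) component.
fn flip_v(tex_coords: [f32; 2]) -> [f32; 2] {
    [tex_coords[0], 1.0 - tex_coords[1]]
}

fn main() {
    // Vertex A's original coordinates from VERTICES above:
    println!("{:?}", flip_v([0.4131759, 0.99240386]));
}
```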

texture coordinates

We can get our triangle right-side up by inverting the y coordinate of each texture coordinate:


const VERTICES: &[Vertex] = &[
    // Changed
    Vertex { position: [-0.0868241, 0.49240386, 0.0], tex_coords: [0.4131759, 0.00759614], }, // A
    Vertex { position: [-0.49513406, 0.06958647, 0.0], tex_coords: [0.0048659444, 0.43041354], }, // B
    Vertex { position: [-0.21918549, -0.44939706, 0.0], tex_coords: [0.28081453, 0.949397], }, // C
    Vertex { position: [0.35966998, -0.3473291, 0.0], tex_coords: [0.85967, 0.84732914], }, // D
    Vertex { position: [0.44147372, 0.2347359, 0.0], tex_coords: [0.9414737, 0.2652641], }, // E
];

With that in place, we now have our tree right-side up on our hexagon:


texture

Cleaning things up

For convenience's sake, let's pull our texture code into its own module. We'll first need to add the anyhow crate to our Cargo.toml file to simplify error handling:


[dependencies]
image = "0.23"
cgmath = "0.18"
winit = "0.25"
env_logger = "0.9"
log = "0.4"
pollster = "0.2"
wgpu = "0.9"
bytemuck = { version = "1.4", features = [ "derive" ] }
anyhow = "1.0" # NEW!

Then, in a new file called src/texture.rs, add the following:

use image::GenericImageView;
use anyhow::*;

pub struct Texture {
    pub texture: wgpu::Texture,
    pub view: wgpu::TextureView,
    pub sampler: wgpu::Sampler,
}

impl Texture {
    pub fn from_bytes(
        device: &wgpu::Device,
        queue: &wgpu::Queue,
        bytes: &[u8],
        label: &str
    ) -> Result<Self> {
        let img = image::load_from_memory(bytes)?;
        Self::from_image(device, queue, &img, Some(label))
    }

    pub fn from_image(
        device: &wgpu::Device,
        queue: &wgpu::Queue,
        img: &image::DynamicImage,
        label: Option<&str>
    ) -> Result<Self> {
        let rgba = img.as_rgba8().unwrap();
        let dimensions = img.dimensions();

        let size = wgpu::Extent3d {
            width: dimensions.0,
            height: dimensions.1,
            depth_or_array_layers: 1,
        };
        let texture = device.create_texture(
            &wgpu::TextureDescriptor {
                label,
                size,
                mip_level_count: 1,
                sample_count: 1,
                dimension: wgpu::TextureDimension::D2,
                format: wgpu::TextureFormat::Rgba8UnormSrgb,
                usage: wgpu::TextureUsage::SAMPLED | wgpu::TextureUsage::COPY_DST,
            }
        );

        queue.write_texture(
            wgpu::ImageCopyTexture {
                texture: &texture,
                mip_level: 0,
                origin: wgpu::Origin3d::ZERO,
            },
            rgba,
            wgpu::ImageDataLayout {
                offset: 0,
                bytes_per_row: std::num::NonZeroU32::new(4 * dimensions.0),
                rows_per_image: std::num::NonZeroU32::new(dimensions.1),
            },
            size,
        );

        let view = texture.create_view(&wgpu::TextureViewDescriptor::default());
        let sampler = device.create_sampler(
            &wgpu::SamplerDescriptor {
                address_mode_u: wgpu::AddressMode::ClampToEdge,
                address_mode_v: wgpu::AddressMode::ClampToEdge,
                address_mode_w: wgpu::AddressMode::ClampToEdge,
                mag_filter: wgpu::FilterMode::Linear,
                min_filter: wgpu::FilterMode::Nearest,
                mipmap_filter: wgpu::FilterMode::Nearest,
                ..Default::default()
            }
        );

        Ok(Self { texture, view, sampler })
    }
}

Note that since we’re using write_texture, from_image doesn’t need to return a CommandBuffer alongside our texture; the queue schedules the copy for us, so loading a texture no longer requires building and submitting a command buffer at all.


We need to import texture.rs as a module, so somewhere at the top of main.rs add the following.


mod texture;

The texture creation code in new() now gets a lot simpler:

let swap_chain = device.create_swap_chain(&surface, &sc_desc);
let diffuse_bytes = include_bytes!("happy-tree.png"); // CHANGED!
let diffuse_texture = texture::Texture::from_bytes(&device, &queue, diffuse_bytes, "happy-tree.png").unwrap(); // CHANGED!

// Everything up until `let texture_bind_group_layout = ...` can now be removed.

We still need to store the bind group separately so that Texture doesn’t need to know how the BindGroup is laid out. Creating the diffuse_bind_group changes slightly to use the view and sampler fields of our diffuse_texture:


let diffuse_bind_group = device.create_bind_group(
    &wgpu::BindGroupDescriptor {
        layout: &texture_bind_group_layout,
        entries: &[
            wgpu::BindGroupEntry {
                binding: 0,
                resource: wgpu::BindingResource::TextureView(&diffuse_texture.view), // CHANGED!
            },
            wgpu::BindGroupEntry {
                binding: 1,
                resource: wgpu::BindingResource::Sampler(&diffuse_texture.sampler), // CHANGED!
            }
        ],
        label: Some("diffuse_bind_group"),
    }
);

Finally, let’s update our State field to use our shiny new Texture struct, as we’ll need it in future tutorials.


struct State {
    // ...
    diffuse_bind_group: wgpu::BindGroup,
    diffuse_texture: texture::Texture, // NEW
}
impl State {
    async fn new() -> Self {
        // ...
        Self {
            // ...
            num_indices,
            diffuse_bind_group,
            diffuse_texture, // NEW
        }
    }
}

Phew!

With these changes in place, the code should be working the same as it was before, but we now have a much easier way to create textures.


Challenge

Create another texture and swap it out when you press the space key.

Check out the code!