Working with Lights

While we can tell that our scene is 3D because of our camera, it still feels very flat. That's because our model stays the same color regardless of how it's oriented. If we want to change that, we need to add lighting to our scene.

In the real world, a light source emits photons which bounce around until they enter into our eyes. The color we see is the light’s original color minus whatever energy it lost while it was bouncing around.


In the computer graphics world, modeling individual photons would be hilariously computationally expensive. A single 100 Watt light bulb emits about 3.27 x 10^20 photons per second. Just imagine that for the sun! To get around this, we’re gonna use math to cheat.


Let’s discuss a few options.


Ray/Path Tracing

This is an advanced topic, and we won’t be covering it in depth here. It’s the closest model to the way light really works so I felt I had to mention it. Check out the ray tracing tutorial if you want to learn more.

The Blinn-Phong Model

Ray/path tracing is often too computationally expensive for most realtime applications (though that is starting to change), so a more efficient, if less accurate method based on the Phong reflection model is often used. It splits up the lighting calculation into three (3) parts: ambient lighting, diffuse lighting, and specular lighting. We’re going to be learning the Blinn-Phong model, which cheats a bit at the specular calculation to speed things up.


Before we can get into that though, we need to add a light to our scene.


// main.rs
#[repr(C)]
#[derive(Debug, Copy, Clone, bytemuck::Pod, bytemuck::Zeroable)]
struct LightUniform {
    position: [f32; 3],
    // Due to uniforms requiring 16 byte (4 float) spacing, we need to use a padding field here
    _padding: u32,
    color: [f32; 3],
}

Our LightUniform represents a colored point in space. We’re just going to use pure white light, but it’s good to allow different colors of light.

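To see why the padding field is there, here's a stdlib-only mirror of the struct (with the bytemuck derives dropped) that checks the layout. WGSL gives a vec3&lt;f32&gt; uniform member an alignment of 16 bytes, so `position` (12 bytes) plus `_padding` (4 bytes) lines `color` up at byte offset 16, exactly where the shader expects it.

```rust
// Stdlib-only layout check for the padded uniform struct.
#[repr(C)]
#[derive(Debug, Copy, Clone)]
struct LightUniform {
    position: [f32; 3], // bytes 0..12
    _padding: u32,      // bytes 12..16
    color: [f32; 3],    // bytes 16..28
}

fn main() {
    // 12 + 4 + 12 = 28 bytes total; without `_padding`, `color`
    // would start at offset 12 and disagree with the shader.
    assert_eq!(std::mem::size_of::<LightUniform>(), 28);
}
```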

We’re going to create another buffer to store our light in.


let light_uniform = LightUniform {
    position: [2.0, 2.0, 2.0],
    _padding: 0,
    color: [1.0, 1.0, 1.0],
};

// We'll want to update our light's position, so we use COPY_DST
let light_buffer = device.create_buffer_init(
    &wgpu::util::BufferInitDescriptor {
        label: Some("Light VB"),
        contents: bytemuck::cast_slice(&[light_uniform]),
        usage: wgpu::BufferUsage::UNIFORM | wgpu::BufferUsage::COPY_DST,
    }
);

Don’t forget to add the light_uniform and light_buffer to State. After that we need to create a bind group layout and bind group for our light.


let light_bind_group_layout =
    device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
        entries: &[wgpu::BindGroupLayoutEntry {
            binding: 0,
            visibility: wgpu::ShaderStage::VERTEX | wgpu::ShaderStage::FRAGMENT,
            ty: wgpu::BindingType::Buffer {
                ty: wgpu::BufferBindingType::Uniform,
                has_dynamic_offset: false,
                min_binding_size: None,
            },
            count: None,
        }],
        label: None,
    });

let light_bind_group = device.create_bind_group(&wgpu::BindGroupDescriptor {
    layout: &light_bind_group_layout,
    entries: &[wgpu::BindGroupEntry {
        binding: 0,
        resource: light_buffer.as_entire_binding(),
    }],
    label: None,
});

Add those to State, and also update the render_pipeline_layout.


let render_pipeline_layout = device.create_pipeline_layout(&wgpu::PipelineLayoutDescriptor {
    label: Some("Render Pipeline Layout"),
    bind_group_layouts: &[
        &texture_bind_group_layout,
        &camera_bind_group_layout,
        &light_bind_group_layout,
    ],
    push_constant_ranges: &[],
});

Let's also update the light's position in the update() method, so we can see what our objects look like from different angles.

// Update the light
let old_position: cgmath::Vector3<_> = self.light_uniform.position.into();
self.light_uniform.position =
    (cgmath::Quaternion::from_axis_angle((0.0, 1.0, 0.0).into(), cgmath::Deg(1.0))
        * old_position)
        .into();
self.queue.write_buffer(&self.light_buffer, 0, bytemuck::cast_slice(&[self.light_uniform]));

This will have the light rotate around the origin one degree every frame.

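A cgmath-free sketch of what the update code above does: rotating a point one degree per "frame" around the y axis. The quaternion version computes the same thing; plain trig is used here so the example runs with only the standard library.

```rust
// Rotate a point around the y axis by the given angle (right-handed).
fn rotate_y(p: [f32; 3], degrees: f32) -> [f32; 3] {
    let (s, c) = degrees.to_radians().sin_cos();
    [c * p[0] + s * p[2], p[1], -s * p[0] + c * p[2]]
}

fn main() {
    let mut pos = [2.0_f32, 2.0, 2.0];
    for _ in 0..360 {
        pos = rotate_y(pos, 1.0); // one degree per "frame"
    }
    // After 360 one-degree steps the light is back where it started...
    assert!((pos[0] - 2.0).abs() < 1e-3 && (pos[2] - 2.0).abs() < 1e-3);
    // ...and its height never changed, since we rotate around y.
    assert!((pos[1] - 2.0).abs() < 1e-6);
}
```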

Seeing the light

For debugging purposes, it would be nice if we could see where the light is to make sure that the scene looks correct. We could adapt our existing render pipeline to draw the light, but it will likely get in the way. Instead we are going to extract our render pipeline creation code into a new function called create_render_pipeline().


fn create_render_pipeline(
    device: &wgpu::Device,
    layout: &wgpu::PipelineLayout,
    color_format: wgpu::TextureFormat,
    depth_format: Option<wgpu::TextureFormat>,
    vertex_layouts: &[wgpu::VertexBufferLayout],
    shader: wgpu::ShaderModuleDescriptor,
) -> wgpu::RenderPipeline {
    let shader = device.create_shader_module(&shader);

    device.create_render_pipeline(&wgpu::RenderPipelineDescriptor {
        label: Some("Render Pipeline"),
        layout: Some(layout),
        vertex: wgpu::VertexState {
            module: &shader,
            entry_point: "main",
            buffers: vertex_layouts,
        },
        fragment: Some(wgpu::FragmentState {
            module: &shader,
            entry_point: "main",
            targets: &[wgpu::ColorTargetState {
                format: color_format,
                blend: Some(wgpu::BlendState {
                    alpha: wgpu::BlendComponent::REPLACE,
                    color: wgpu::BlendComponent::REPLACE,
                }),
                write_mask: wgpu::ColorWrite::ALL,
            }],
        }),
        primitive: wgpu::PrimitiveState {
            topology: wgpu::PrimitiveTopology::TriangleList,
            strip_index_format: None,
            front_face: wgpu::FrontFace::Ccw,
            cull_mode: Some(wgpu::Face::Back),
            // Setting this to anything other than Fill requires Features::NON_FILL_POLYGON_MODE
            polygon_mode: wgpu::PolygonMode::Fill,
            // Requires Features::DEPTH_CLAMPING
            clamp_depth: false,
            // Requires Features::CONSERVATIVE_RASTERIZATION
            conservative: false,
        },
        depth_stencil: depth_format.map(|format| wgpu::DepthStencilState {
            format,
            depth_write_enabled: true,
            depth_compare: wgpu::CompareFunction::Less,
            stencil: wgpu::StencilState::default(),
            bias: wgpu::DepthBiasState::default(),
        }),
        multisample: wgpu::MultisampleState {
            count: 1,
            mask: !0,
            alpha_to_coverage_enabled: false,
        },
    })
}

We also need to change State::new() to use this function.


let render_pipeline = {
    let shader = wgpu::ShaderModuleDescriptor {
        label: Some("Normal Shader"),
        flags: wgpu::ShaderFlags::all(),
        source: wgpu::ShaderSource::Wgsl(include_str!("shader.wgsl").into()),
    };
    create_render_pipeline(
        &device,
        &render_pipeline_layout,
        sc_desc.format,
        Some(texture::Texture::DEPTH_FORMAT),
        &[model::ModelVertex::desc(), InstanceRaw::desc()],
        shader,
    )
};

We’re going to need to modify model::DrawModel to use our light_bind_group.


pub trait DrawModel<'a> {
    fn draw_mesh(
        &mut self,
        mesh: &'a Mesh,
        material: &'a Material,
        camera: &'a wgpu::BindGroup,
        light: &'a wgpu::BindGroup,
    );
    fn draw_mesh_instanced(
        &mut self,
        mesh: &'a Mesh,
        material: &'a Material,
        instances: Range<u32>,
        camera: &'a wgpu::BindGroup,
        light: &'a wgpu::BindGroup,
    );

    fn draw_model(
        &mut self,
        model: &'a Model,
        camera: &'a wgpu::BindGroup,
        light: &'a wgpu::BindGroup,
    );
    fn draw_model_instanced(
        &mut self,
        model: &'a Model,
        instances: Range<u32>,
        camera: &'a wgpu::BindGroup,
        light: &'a wgpu::BindGroup,
    );
}

impl<'a, 'b> DrawModel<'b> for wgpu::RenderPass<'a>
where
    'b: 'a,
{
    fn draw_mesh(
        &mut self,
        mesh: &'b Mesh,
        material: &'b Material,
        camera: &'b wgpu::BindGroup,
        light: &'b wgpu::BindGroup,
    ) {
        self.draw_mesh_instanced(mesh, material, 0..1, camera, light);
    }

    fn draw_mesh_instanced(
        &mut self,
        mesh: &'b Mesh,
        material: &'b Material,
        instances: Range<u32>,
        camera: &'b wgpu::BindGroup,
        light: &'b wgpu::BindGroup,
    ) {
        self.set_vertex_buffer(0, mesh.vertex_buffer.slice(..));
        self.set_index_buffer(mesh.index_buffer.slice(..), wgpu::IndexFormat::Uint32);
        self.set_bind_group(0, &material.bind_group, &[]);
        self.set_bind_group(1, camera, &[]);
        self.set_bind_group(2, light, &[]);
        self.draw_indexed(0..mesh.num_elements, 0, instances);
    }

    fn draw_model(
        &mut self,
        model: &'b Model,
        camera: &'b wgpu::BindGroup,
        light: &'b wgpu::BindGroup,
    ) {
        self.draw_model_instanced(model, 0..1, camera, light);
    }

    fn draw_model_instanced(
        &mut self,
        model: &'b Model,
        instances: Range<u32>,
        camera: &'b wgpu::BindGroup,
        light: &'b wgpu::BindGroup,
    ) {
        for mesh in &model.meshes {
            let material = &model.materials[mesh.material];
            self.draw_mesh_instanced(mesh, material, instances.clone(), camera, light);
        }
    }
}

With that done we can create another render pipeline for our light.


let light_render_pipeline = {
    let layout = device.create_pipeline_layout(&wgpu::PipelineLayoutDescriptor {
        label: Some("Light Pipeline Layout"),
        bind_group_layouts: &[&camera_bind_group_layout, &light_bind_group_layout],
        push_constant_ranges: &[],
    });
    let shader = wgpu::ShaderModuleDescriptor {
        label: Some("Light Shader"),
        flags: wgpu::ShaderFlags::all(),
        source: wgpu::ShaderSource::Wgsl(include_str!("light.wgsl").into()),
    };
    create_render_pipeline(
        &device,
        &layout,
        sc_desc.format,
        Some(texture::Texture::DEPTH_FORMAT),
        &[model::ModelVertex::desc()],
        shader,
    )
};

I chose to create a separate layout for the light_render_pipeline, as it doesn't need all the resources that the regular render_pipeline needs (mainly just the textures).

With that in place we need to write the actual shaders.


// Vertex shader

[[block]]
struct Camera {
    view_proj: mat4x4<f32>;
};
[[group(0), binding(0)]]
var<uniform> camera: Camera;

[[block]]
struct Light {
    position: vec3<f32>;
    color: vec3<f32>;
};
[[group(1), binding(0)]]
var<uniform> light: Light;

struct VertexInput {
    [[location(0)]] position: vec3<f32>;
};

struct VertexOutput {
    [[builtin(position)]] clip_position: vec4<f32>;
    [[location(0)]] color: vec3<f32>;
};

[[stage(vertex)]]
fn main(
    model: VertexInput,
) -> VertexOutput {
    let scale = 0.25;
    var out: VertexOutput;
    out.clip_position = camera.view_proj * vec4<f32>(model.position * scale + light.position, 1.0);
    out.color = light.color;
    return out;
}

// Fragment shader

[[stage(fragment)]]
fn main(in: VertexOutput) -> [[location(0)]] vec4<f32> {
    return vec4<f32>(in.color, 1.0);
}

Now we could manually implement the draw code for the light in render(), but to keep with the pattern we developed, let’s create a new trait called DrawLight.


pub trait DrawLight<'a> {
    fn draw_light_mesh(
        &mut self,
        mesh: &'a Mesh,
        camera: &'a wgpu::BindGroup,
        light: &'a wgpu::BindGroup,
    );
    fn draw_light_mesh_instanced(
        &mut self,
        mesh: &'a Mesh,
        instances: Range<u32>,
        camera: &'a wgpu::BindGroup,
        light: &'a wgpu::BindGroup,
    );

    fn draw_light_model(
        &mut self,
        model: &'a Model,
        camera: &'a wgpu::BindGroup,
        light: &'a wgpu::BindGroup,
    );
    fn draw_light_model_instanced(
        &mut self,
        model: &'a Model,
        instances: Range<u32>,
        camera: &'a wgpu::BindGroup,
        light: &'a wgpu::BindGroup,
    );
}

impl<'a, 'b> DrawLight<'b> for wgpu::RenderPass<'a>
where
    'b: 'a,
{
    fn draw_light_mesh(
        &mut self,
        mesh: &'b Mesh,
        camera: &'b wgpu::BindGroup,
        light: &'b wgpu::BindGroup,
    ) {
        self.draw_light_mesh_instanced(mesh, 0..1, camera, light);
    }

    fn draw_light_mesh_instanced(
        &mut self,
        mesh: &'b Mesh,
        instances: Range<u32>,
        camera: &'b wgpu::BindGroup,
        light: &'b wgpu::BindGroup,
    ) {
        self.set_vertex_buffer(0, mesh.vertex_buffer.slice(..));
        self.set_index_buffer(mesh.index_buffer.slice(..), wgpu::IndexFormat::Uint32);
        self.set_bind_group(0, camera, &[]);
        self.set_bind_group(1, light, &[]);
        self.draw_indexed(0..mesh.num_elements, 0, instances);
    }

    fn draw_light_model(
        &mut self,
        model: &'b Model,
        camera: &'b wgpu::BindGroup,
        light: &'b wgpu::BindGroup,
    ) {
        self.draw_light_model_instanced(model, 0..1, camera, light);
    }

    fn draw_light_model_instanced(
        &mut self,
        model: &'b Model,
        instances: Range<u32>,
        camera: &'b wgpu::BindGroup,
        light: &'b wgpu::BindGroup,
    ) {
        for mesh in &model.meshes {
            self.draw_light_mesh_instanced(mesh, instances.clone(), camera, light);
        }
    }
}

Finally we want to add Light rendering to our render passes.


impl State {
    // ...
    fn render(&mut self) -> Result<(), wgpu::SwapChainError> {
        // ...
        render_pass.set_vertex_buffer(1, self.instance_buffer.slice(..));

        use crate::model::DrawLight; // NEW!
        render_pass.set_pipeline(&self.light_render_pipeline); // NEW!
        render_pass.draw_light_model(
            &self.obj_model,
            &self.camera_bind_group,
            &self.light_bind_group,
        ); // NEW!

        render_pass.set_pipeline(&self.render_pipeline);
        render_pass.draw_model_instanced(
            &self.obj_model,
            0..self.instances.len() as u32,
            &self.camera_bind_group,
            &self.light_bind_group,
        );
        // ...
    }
}

With all that we’ll end up with something like this.


light-in-scene

Ambient Lighting

Light has a tendency to bounce around before entering our eyes. That's why you can see in areas that are in shadow. Actually modeling this interaction is computationally expensive, so we cheat. We define an ambient lighting value that stands in for the light bouncing off other parts of the scene to light our objects.

The ambient part is based on the light color as well as the object color. We’ve already added our light_bind_group, so we just need to use it in our shader. In shader.wgsl, add the following below the texture uniforms.


[[block]]
struct Light {
    position: vec3<f32>;
    color: vec3<f32>;
};
[[group(2), binding(0)]]
var<uniform> light: Light;

Then we need to update our main shader code to calculate and use the ambient color value.


[[stage(fragment)]]
fn main(in: VertexOutput) -> [[location(0)]] vec4<f32> {
    let object_color: vec4<f32> = textureSample(t_diffuse, s_diffuse, in.tex_coords);

    // We don't need (or want) much ambient light, so 0.1 is fine
    let ambient_strength = 0.1;
    let ambient_color = light.color * ambient_strength;

    let result = ambient_color * object_color.xyz;

    return vec4<f32>(result, object_color.a);
}

With that we should get something like this.

ambient_lighting

Diffuse Lighting

Remember the normal vectors that were included with our model? We're finally going to use them. Normals represent the direction a surface is facing. By comparing the normal of a fragment with a vector pointing to a light source, we get a value for how light or dark that fragment should be. We compare the vectors using the dot product, which gives us the cosine of the angle between them.

normal_diagram

If the dot product of the normal and light vector is 1.0, that means the current fragment is directly in line with the light source and will receive the light's full intensity. A value of 0.0 or lower means that the surface is perpendicular to or facing away from the light, and will therefore be dark.
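The clamped dot product in miniature, as a stdlib-only Rust sketch: the dot product of two unit vectors is the cosine of the angle between them, and max(..., 0.0) clamps away surfaces that face away from the light.

```rust
// Dot product of two 3D vectors.
fn dot(a: [f32; 3], b: [f32; 3]) -> f32 {
    a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
}

// The diffuse (Lambertian) term: cos(angle), clamped at zero.
fn diffuse_strength(normal: [f32; 3], light_dir: [f32; 3]) -> f32 {
    dot(normal, light_dir).max(0.0)
}

fn main() {
    let up = [0.0, 1.0, 0.0]; // a fragment facing straight up
    assert_eq!(diffuse_strength(up, [0.0, 1.0, 0.0]), 1.0); // light directly overhead: full intensity
    assert_eq!(diffuse_strength(up, [1.0, 0.0, 0.0]), 0.0); // light at the horizon: perpendicular, dark
    assert_eq!(diffuse_strength(up, [0.0, -1.0, 0.0]), 0.0); // light below: clamped to zero
}
```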

We're going to need to pull the normal vector into our shader.wgsl.

struct VertexInput {
    [[location(0)]] position: vec3<f32>;
    [[location(1)]] tex_coords: vec2<f32>;
    [[location(2)]] normal: vec3<f32>; // NEW!
};

We’re also going to want to pass that value, as well as the vertex’s position to the fragment shader.


struct VertexOutput {
    [[builtin(position)]] clip_position: vec4<f32>;
    [[location(0)]] tex_coords: vec2<f32>;
    [[location(1)]] world_normal: vec3<f32>;
    [[location(2)]] world_position: vec3<f32>;
};

For now let’s just pass the normal directly as is. This is wrong, but we’ll fix it later.


[[stage(vertex)]]
fn main(
    model: VertexInput,
    instance: InstanceInput,
) -> VertexOutput {
    let model_matrix = mat4x4<f32>(
        instance.model_matrix_0,
        instance.model_matrix_1,
        instance.model_matrix_2,
        instance.model_matrix_3,
    );
    var out: VertexOutput;
    out.tex_coords = model.tex_coords;
    out.world_normal = model.normal;
    var world_position: vec4<f32> = model_matrix * vec4<f32>(model.position, 1.0);
    out.world_position = world_position.xyz;
    out.clip_position = camera.view_proj * world_position;
    return out;
}

With that we can do the actual calculation. Below the ambient_color calculation, but above result, add the following.


let light_dir = normalize(light.position - in.world_position);

let diffuse_strength = max(dot(in.world_normal, light_dir), 0.0);
let diffuse_color = light.color * diffuse_strength;

Now we can include the diffuse_color in the result.


let result = (ambient_color + diffuse_color) * object_color.xyz;

With that we get something like this.


ambient_diffuse_wrong

The normal matrix

Remember when I said passing the vertex normal directly to the fragment shader was wrong? Let’s explore that by removing all the cubes from the scene except one that will be rotated 180 degrees on the y-axis.


const NUM_INSTANCES_PER_ROW: u32 = 1;

// In the loop we create the instances in
let rotation = cgmath::Quaternion::from_axis_angle((0.0, 1.0, 0.0).into(), cgmath::Deg(180.0));

We’ll also remove the ambient_color from our lighting result.


let result = (diffuse_color) * object_color.xyz;

That should give us something that looks like this.


diffuse_wrong

This is clearly wrong as the light is illuminating the wrong side of the cube. This is because we aren’t rotating our normals with our object, so no matter what direction the object faces, the normals will always face the same way.


image

We need to use the model matrix to transform the normals to be in the right direction. We only want the rotation data though. A normal represents a direction, and should be a unit vector throughout the calculation. We can get our normals into the right direction using what is called a normal matrix.


We could compute the normal matrix in the vertex shader, but that would involve inverting the model_matrix, and WGSL doesn't actually have an inverse function. We would have to code our own. On top of that, computing the inverse of a matrix is actually really expensive, especially doing that computation for every vertex.
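Some background on why we can get away without the inverse at all: the general normal matrix is the inverse transpose of the model matrix, and for a rotation matrix the inverse IS the transpose, so the inverse transpose of a pure rotation is the rotation itself. A quick stdlib-only check with a hand-rolled y-axis rotation (no cgmath needed):

```rust
// 3x3 matrix multiply.
fn mul(a: [[f32; 3]; 3], b: [[f32; 3]; 3]) -> [[f32; 3]; 3] {
    let mut out = [[0.0; 3]; 3];
    for i in 0..3 {
        for j in 0..3 {
            for k in 0..3 {
                out[i][j] += a[i][k] * b[k][j];
            }
        }
    }
    out
}

fn transpose(m: [[f32; 3]; 3]) -> [[f32; 3]; 3] {
    let mut out = [[0.0; 3]; 3];
    for i in 0..3 {
        for j in 0..3 {
            out[i][j] = m[j][i];
        }
    }
    out
}

// Right-handed rotation around the y axis.
fn rotation_y(degrees: f32) -> [[f32; 3]; 3] {
    let (s, c) = degrees.to_radians().sin_cos();
    [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]
}

fn main() {
    // R * R^T comes out as the identity, i.e. transpose(R) is R's inverse,
    // so inverse-transpose(R) == R and the rotation doubles as its own
    // normal matrix.
    let r = rotation_y(180.0);
    let i = mul(r, transpose(r));
    for row in 0..3 {
        for col in 0..3 {
            let expected = if row == col { 1.0 } else { 0.0 };
            assert!((i[row][col] - expected).abs() < 1e-6);
        }
    }
}
```

This is also why the shortcut below breaks down if you add non-uniform scaling to the model matrix: a scale matrix is not its own inverse transpose.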

Instead, we're going to add a normal matrix field to InstanceRaw. Instead of inverting the model matrix, we'll just use the instance's rotation to create a Matrix3.

We're using Matrix3 instead of Matrix4 as we only really need the rotation component of the matrix.

#[repr(C)]
#[derive(Debug, Copy, Clone, bytemuck::Pod, bytemuck::Zeroable)]
#[allow(dead_code)]
struct InstanceRaw {
    model: [[f32; 4]; 4],
    normal: [[f32; 3]; 3],
}

impl model::Vertex for InstanceRaw {
    fn desc<'a>() -> wgpu::VertexBufferLayout<'a> {
        use std::mem;
        wgpu::VertexBufferLayout {
            array_stride: mem::size_of::<InstanceRaw>() as wgpu::BufferAddress,
            // We need to switch from using a step mode of Vertex to Instance
            // This means that our shaders will only change to use the next
            // instance when the shader starts processing a new instance
            step_mode: wgpu::VertexStepMode::Instance,
            attributes: &[
                wgpu::VertexAttribute {
                    offset: 0,
                    // While our vertex shader only uses locations 0, 1, and 2 now, in later
                    // tutorials we'll be using more locations for Vertex. We'll start at
                    // slot 5 to not conflict with them later
                    shader_location: 5,
                    format: wgpu::VertexFormat::Float32x4,
                },
                // A mat4 takes up 4 vertex slots as it is technically 4 vec4s. We need to define a slot
                // for each vec4. We don't have to do this in code though.
                wgpu::VertexAttribute {
                    offset: mem::size_of::<[f32; 4]>() as wgpu::BufferAddress,
                    shader_location: 6,
                    format: wgpu::VertexFormat::Float32x4,
                },
                wgpu::VertexAttribute {
                    offset: mem::size_of::<[f32; 8]>() as wgpu::BufferAddress,
                    shader_location: 7,
                    format: wgpu::VertexFormat::Float32x4,
                },
                wgpu::VertexAttribute {
                    offset: mem::size_of::<[f32; 12]>() as wgpu::BufferAddress,
                    shader_location: 8,
                    format: wgpu::VertexFormat::Float32x4,
                },
                // NEW!
                wgpu::VertexAttribute {
                    offset: mem::size_of::<[f32; 16]>() as wgpu::BufferAddress,
                    shader_location: 9,
                    format: wgpu::VertexFormat::Float32x3,
                },
                wgpu::VertexAttribute {
                    offset: mem::size_of::<[f32; 19]>() as wgpu::BufferAddress,
                    shader_location: 10,
                    format: wgpu::VertexFormat::Float32x3,
                },
                wgpu::VertexAttribute {
                    offset: mem::size_of::<[f32; 22]>() as wgpu::BufferAddress,
                    shader_location: 11,
                    format: wgpu::VertexFormat::Float32x3,
                },
            ],
        }
    }
}

We need to modify Instance to create the normal matrix.


struct Instance {
    position: cgmath::Vector3<f32>,
    rotation: cgmath::Quaternion<f32>,
}

impl Instance {
    fn to_raw(&self) -> InstanceRaw {
        let model =
            cgmath::Matrix4::from_translation(self.position) * cgmath::Matrix4::from(self.rotation);
        InstanceRaw {
            model: model.into(),
            // NEW!
            normal: cgmath::Matrix3::from(self.rotation).into(),
        }
    }
}

Now we need to reconstruct the normal matrix in the vertex shader.


struct InstanceInput {
    [[location(5)]] model_matrix_0: vec4<f32>;
    [[location(6)]] model_matrix_1: vec4<f32>;
    [[location(7)]] model_matrix_2: vec4<f32>;
    [[location(8)]] model_matrix_3: vec4<f32>;
    // NEW!
    [[location(9)]] normal_matrix_0: vec3<f32>;
    [[location(10)]] normal_matrix_1: vec3<f32>;
    [[location(11)]] normal_matrix_2: vec3<f32>;
};

struct VertexOutput {
    [[builtin(position)]] clip_position: vec4<f32>;
    [[location(0)]] tex_coords: vec2<f32>;
    [[location(1)]] world_normal: vec3<f32>;
    [[location(2)]] world_position: vec3<f32>;
};

[[stage(vertex)]]
fn main(
    model: VertexInput,
    instance: InstanceInput,
) -> VertexOutput {
    let model_matrix = mat4x4<f32>(
        instance.model_matrix_0,
        instance.model_matrix_1,
        instance.model_matrix_2,
        instance.model_matrix_3,
    );
    // NEW!
    let normal_matrix = mat3x3<f32>(
        instance.normal_matrix_0,
        instance.normal_matrix_1,
        instance.normal_matrix_2,
    );
    var out: VertexOutput;
    out.tex_coords = model.tex_coords;
    out.world_normal = normal_matrix * model.normal;
    var world_position: vec4<f32> = model_matrix * vec4<f32>(model.position, 1.0);
    out.world_position = world_position.xyz;
    out.clip_position = camera.view_proj * world_position;
    return out;
}

I'm currently doing things in world space. Doing things in view-space, also known as eye-space, is more standard, as objects can have lighting issues when they are further away from the origin. If we wanted to use view-space, we would have to include the rotation due to the view matrix as well. We'd also have to transform our light's position using something like view_matrix * model_matrix * light_position to keep the calculation from getting messed up when the camera moves.

There are advantages to using view space. The main one is that when you have massive worlds, doing lighting and other calculations in model space can cause issues, as floating point precision degrades when numbers get really large. View space keeps the camera at the origin, meaning all calculations will use smaller numbers. The actual lighting math ends up the same, but it does require a bit more setup.

With that change our lighting now looks correct.


diffuse_right

Bringing back our other objects, and adding the ambient lighting gives us this.


ambient_diffuse_lighting

Specular Lighting

Specular lighting describes the highlights that appear on objects when viewed from certain angles. If you've ever looked at a car, it's the super bright parts. Basically, some of the light can reflect off the surface like a mirror. The location of the highlight shifts depending on what angle you view it at.

image

Because this is relative to the view angle, we are going to need to pass in the camera’s position both into the fragment shader and into the vertex shader.


[[block]]
struct Camera {
    view_pos: vec4<f32>;
    view_proj: mat4x4<f32>;
};
[[group(1), binding(0)]]
var<uniform> camera: Camera;

Don’t forget to update the Camera struct in light.wgsl as well, as if it doesn’t match the CameraUniform struct in rust, the light will render wrong.


We’re going to need to update the CameraUniform struct as well.


// main.rs
#[repr(C)]
#[derive(Copy, Clone, bytemuck::Pod, bytemuck::Zeroable)]
struct CameraUniform {
    view_position: [f32; 4],
    view_proj: [[f32; 4]; 4],
}

impl CameraUniform {
    fn new() -> Self {
        Self {
            view_position: [0.0; 4],
            view_proj: cgmath::Matrix4::identity().into(),
        }
    }

    fn update_view_proj(&mut self, camera: &Camera) {
        // We're using Vector4 because of the uniforms 16 byte spacing requirement
        self.view_position = camera.eye.to_homogeneous().into();
        self.view_proj = (OPENGL_TO_WGPU_MATRIX * camera.build_view_projection_matrix()).into();
    }
}

Since we want to use our uniforms in the fragment shader now, we need to change its visibility.

// main.rs
let camera_bind_group_layout = device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
    entries: &[
        wgpu::BindGroupLayoutEntry {
            // ...
            visibility: wgpu::ShaderStage::VERTEX | wgpu::ShaderStage::FRAGMENT, // Updated!
            // ...
        },
        // ...
    ],
    label: None,
});

We’re going to get the direction from the fragment’s position to the camera, and use that with the normal to calculate the reflect_dir.


// In the fragment shader...
let view_dir = normalize(camera.view_pos.xyz - in.world_position);
let reflect_dir = reflect(-light_dir, in.world_normal);

Then we use the dot product to calculate the specular_strength and use that to compute the specular_color.


let specular_strength = pow(max(dot(view_dir, reflect_dir), 0.0), 32.0);
let specular_color = specular_strength * light.color;
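Here's the same specular chain redone in plain Rust (vectors as [f32; 3] arrays, stdlib only) so the numbers are easy to poke at; the reflect() helper follows the same convention as WGSL's built-in, reflect(e, n) = e - 2*dot(e, n)*n.

```rust
fn dot(a: [f32; 3], b: [f32; 3]) -> f32 {
    a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
}

// Mirror a vector about a unit normal, same convention as WGSL's reflect().
fn reflect(e: [f32; 3], n: [f32; 3]) -> [f32; 3] {
    let d = 2.0 * dot(e, n);
    [e[0] - d * n[0], e[1] - d * n[1], e[2] - d * n[2]]
}

// Phong specular: reflect the incoming light, compare against the view.
fn specular_strength(view_dir: [f32; 3], light_dir: [f32; 3], normal: [f32; 3]) -> f32 {
    let reflect_dir = reflect([-light_dir[0], -light_dir[1], -light_dir[2]], normal);
    dot(view_dir, reflect_dir).max(0.0).powf(32.0)
}

fn main() {
    let n = [0.0, 1.0, 0.0];
    let l = [0.0, 1.0, 0.0]; // light straight above the fragment
    // Looking straight down the reflection: full highlight.
    let aligned = specular_strength([0.0, 1.0, 0.0], l, n);
    assert!((aligned - 1.0).abs() < 1e-6);
    // Tilting the view 30 degrees crushes the highlight to roughly 1% --
    // the exponent of 32 is what makes the bright spot small and sharp.
    let a = 30.0_f32.to_radians();
    let tilted = specular_strength([a.sin(), a.cos(), 0.0], l, n);
    assert!(tilted < 0.02);
}
```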

Finally we add that to the result.


let result = (ambient_color + diffuse_color + specular_color) * object_color.xyz;

With that you should have something like this.


ambient_diffuse_specular_lighting

If we just looked at the specular color on its own, it would look like this.

specular_lighting

The half direction

Up to this point, we've actually only implemented the Phong part of Blinn-Phong. The Phong reflection model works well, but it can break down under certain circumstances. The Blinn part of Blinn-Phong comes from the realization that if you add the view_dir and light_dir together, normalize the result, and take the dot product of that and the normal, you get roughly the same results without the issues that using reflect_dir had.

let view_dir = normalize(camera.view_pos.xyz - in.world_position);
let half_dir = normalize(view_dir + light_dir);

let specular_strength = pow(max(dot(in.world_normal, half_dir), 0.0), 32.0);

It's hard to tell the difference, but here are the results.

half_dir
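If you'd rather convince yourself numerically than visually, here's a stdlib-only comparison of the two variants: sweep the view direction through the plane containing the light and the normal, and evaluate both the reflect() form and the half-vector form. They peak in the same place; with the same exponent the Blinn lobe is just a little wider, which is why Blinn-Phong is often paired with a larger exponent in practice.

```rust
fn dot(a: [f32; 3], b: [f32; 3]) -> f32 {
    a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
}

fn normalize(v: [f32; 3]) -> [f32; 3] {
    let len = dot(v, v).sqrt();
    [v[0] / len, v[1] / len, v[2] / len]
}

// Same convention as WGSL's reflect(): e - 2*dot(e, n)*n.
fn reflect(e: [f32; 3], n: [f32; 3]) -> [f32; 3] {
    let d = 2.0 * dot(e, n);
    [e[0] - d * n[0], e[1] - d * n[1], e[2] - d * n[2]]
}

fn main() {
    let n = [0.0, 1.0, 0.0];
    let light_dir = normalize([1.0, 1.0, 0.0]);
    let r = reflect([-light_dir[0], -light_dir[1], -light_dir[2]], n);
    for step in 0..=90 {
        let a = (step as f32).to_radians();
        // View direction sweeping from straight up toward the reflection side.
        let view_dir = [-a.sin(), a.cos(), 0.0];
        let phong = dot(view_dir, r).max(0.0).powf(32.0);
        let half_dir = normalize([
            view_dir[0] + light_dir[0],
            view_dir[1] + light_dir[1],
            view_dir[2] + light_dir[2],
        ]);
        let blinn = dot(n, half_dir).max(0.0).powf(32.0);
        // In this in-plane sweep the half-vector lobe never dips below the
        // reflect() lobe: the highlight peaks at the same view angle but
        // falls off more gently.
        assert!(blinn >= phong - 1e-4);
    }
}
```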

Check out the code!