While we can tell that our scene is 3D because of our camera, it still feels very flat. That's because our model stays the same color regardless of how it's oriented. If we want to change that, we need to add lighting to our scene.
In the real world, a light source emits photons that bounce around until they enter our eyes. The color we see is the light's original color minus whatever energy it lost while it was bouncing around.
In the computer graphics world, modeling individual photons would be hilariously computationally expensive. A single 100-watt light bulb emits about 3.27 × 10^20 photons per second. Just imagine how many the sun emits! To get around this, we're gonna use math to cheat.
Let’s discuss a few options.
Ray/Path Tracing
This is an advanced topic, and we won’t be covering it in depth here. It’s the closest model to the way light really works so I felt I had to mention it. Check out the ray tracing tutorial if you want to learn more.
The Blinn-Phong Model
Ray/path tracing is often too computationally expensive for most real-time applications (though that is starting to change), so a more efficient, if less accurate, method based on the Phong reflection model is often used. It splits the lighting calculation into three parts: ambient lighting, diffuse lighting, and specular lighting. We're going to be learning the Blinn-Phong model, which cheats a bit in the specular calculation to speed things up.
Before we can get into that though, we need to add a light to our scene.
```rust
// main.rs
#[repr(C)]
#[derive(Debug, Copy, Clone, bytemuck::Pod, bytemuck::Zeroable)]
struct LightUniform {
    position: [f32; 3],
    // Due to uniforms requiring 16 byte (4 float) spacing, we need to use a padding field here
    _padding: u32,
    color: [f32; 3],
}
```
Our LightUniform represents a colored point in space. We’re just going to use pure white light, but it’s good to allow different colors of light.
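We also need a buffer and bind group for the light. The creation code isn't reproduced here, but a minimal sketch, assuming the wgpu 0.10-era API this chapter uses (the labels and visibility flags are my assumptions; the names match the `light_buffer` and `light_bind_group` referenced later), might look like this:

```rust
use wgpu::util::DeviceExt;

let light_uniform = LightUniform {
    position: [2.0, 2.0, 2.0],
    _padding: 0,
    color: [1.0, 1.0, 1.0],
};

// We'll be updating the light's position every frame, so we need COPY_DST
let light_buffer = device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
    label: Some("Light VB"),
    contents: bytemuck::cast_slice(&[light_uniform]),
    usage: wgpu::BufferUsages::UNIFORM | wgpu::BufferUsages::COPY_DST,
});

let light_bind_group_layout = device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
    entries: &[wgpu::BindGroupLayoutEntry {
        binding: 0,
        // Both shader stages read the light uniform
        visibility: wgpu::ShaderStages::VERTEX | wgpu::ShaderStages::FRAGMENT,
        ty: wgpu::BindingType::Buffer {
            ty: wgpu::BufferBindingType::Uniform,
            has_dynamic_offset: false,
            min_binding_size: None,
        },
        count: None,
    }],
    label: None,
});

let light_bind_group = device.create_bind_group(&wgpu::BindGroupDescriptor {
    layout: &light_bind_group_layout,
    entries: &[wgpu::BindGroupEntry {
        binding: 0,
        resource: light_buffer.as_entire_binding(),
    }],
    label: None,
});
```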
Let's also update the light's position in the update() method, so we can see what our objects look like from different angles.
```rust
// Update the light
let old_position: cgmath::Vector3<_> = self.light_uniform.position.into();
self.light_uniform.position =
    (cgmath::Quaternion::from_axis_angle((0.0, 1.0, 0.0).into(), cgmath::Deg(1.0))
        * old_position)
        .into();
self.queue.write_buffer(&self.light_buffer, 0, bytemuck::cast_slice(&[self.light_uniform]));
```
This will have the light rotate around the origin one degree every frame.
Seeing the light
For debugging purposes, it would be nice if we could see where the light is to make sure that the scene looks correct. We could adapt our existing render pipeline to draw the light, but it will likely get in the way. Instead we are going to extract our render pipeline creation code into a new function called create_render_pipeline().
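A sketch of what create_render_pipeline() could look like, assuming the wgpu 0.10-era API used throughout this chapter, and assuming both shader entry points are named main as in this chapter's WGSL (the parameter list is one reasonable factoring, not necessarily the only one):

```rust
fn create_render_pipeline(
    device: &wgpu::Device,
    layout: &wgpu::PipelineLayout,
    color_format: wgpu::TextureFormat,
    depth_format: Option<wgpu::TextureFormat>,
    vertex_layouts: &[wgpu::VertexBufferLayout],
    shader: wgpu::ShaderModuleDescriptor,
) -> wgpu::RenderPipeline {
    let shader = device.create_shader_module(&shader);

    device.create_render_pipeline(&wgpu::RenderPipelineDescriptor {
        label: Some("Render Pipeline"),
        layout: Some(layout),
        vertex: wgpu::VertexState {
            module: &shader,
            entry_point: "main",
            buffers: vertex_layouts,
        },
        fragment: Some(wgpu::FragmentState {
            module: &shader,
            entry_point: "main",
            targets: &[wgpu::ColorTargetState {
                format: color_format,
                blend: Some(wgpu::BlendState::REPLACE),
                write_mask: wgpu::ColorWrites::ALL,
            }],
        }),
        primitive: wgpu::PrimitiveState {
            topology: wgpu::PrimitiveTopology::TriangleList,
            front_face: wgpu::FrontFace::Ccw,
            cull_mode: Some(wgpu::Face::Back),
            ..Default::default()
        },
        // Only attach a depth buffer if the caller asked for one
        depth_stencil: depth_format.map(|format| wgpu::DepthStencilState {
            format,
            depth_write_enabled: true,
            depth_compare: wgpu::CompareFunction::Less,
            stencil: wgpu::StencilState::default(),
            bias: wgpu::DepthBiasState::default(),
        }),
        multisample: wgpu::MultisampleState::default(),
    })
}
```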
I chose to create a separate layout for the light_render_pipeline, as it doesn't need all the resources that the regular render_pipeline needs (mainly just the textures).
Now we could manually implement the draw code for the light in render(), but to keep with the pattern we developed, let’s create a new trait called DrawLight.
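A sketch of the trait, following the same shape as the DrawModel trait from the model-loading chapter (the method name, Mesh fields, and bind group slots are assumptions based on that pattern):

```rust
pub trait DrawLight<'a> {
    fn draw_light_mesh(
        &mut self,
        mesh: &'a model::Mesh,
        camera_bind_group: &'a wgpu::BindGroup,
        light_bind_group: &'a wgpu::BindGroup,
    );
}

impl<'a, 'b> DrawLight<'b> for wgpu::RenderPass<'a>
where
    'b: 'a,
{
    fn draw_light_mesh(
        &mut self,
        mesh: &'b model::Mesh,
        camera_bind_group: &'b wgpu::BindGroup,
        light_bind_group: &'b wgpu::BindGroup,
    ) {
        self.set_vertex_buffer(0, mesh.vertex_buffer.slice(..));
        self.set_index_buffer(mesh.index_buffer.slice(..), wgpu::IndexFormat::Uint32);
        // The light pipeline's layout only has the camera and light groups
        self.set_bind_group(0, camera_bind_group, &[]);
        self.set_bind_group(1, light_bind_group, &[]);
        self.draw_indexed(0..mesh.num_elements, 0, 0..1);
    }
}
```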
With all that we’ll end up with something like this.
Ambient Lighting
Light has a tendency to bounce around before entering our eyes. That's why you can see in areas that are in shadow. Actually modeling this interaction is computationally expensive, so we cheat. We define an ambient lighting value that stands in for the light bouncing off other parts of the scene to light our objects.
The ambient part is based on the light color as well as the object color. We’ve already added our light_bind_group, so we just need to use it in our shader. In shader.wgsl, add the following below the texture uniforms.
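The uniform declaration itself isn't reproduced above; matching our LightUniform, and assuming the light bind group sits at group 2 after the texture and camera groups, it would look roughly like this in the old-style WGSL this chapter uses:

```wgsl
// Assumes the light bind group was added at group 2 in the pipeline layout
[[block]]
struct Light {
    position: vec3<f32>;
    color: vec3<f32>;
};
[[group(2), binding(0)]]
var<uniform> light: Light;
```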
Then we need to update our main shader code to calculate and use the ambient color value.
```wgsl
[[stage(fragment)]]
fn main(in: VertexOutput) -> [[location(0)]] vec4<f32> {
    let object_color: vec4<f32> = textureSample(t_diffuse, s_diffuse, in.tex_coords);

    // We don't need (or want) much ambient light, so 0.1 is fine
    let ambient_strength = 0.1;
    let ambient_color = light.color * ambient_strength;

    let result = ambient_color * object_color.xyz;

    return vec4<f32>(result, object_color.a);
}
```
With that, we should get something like this.
Diffuse Lighting
Remember the normal vectors that were included with our model? We're finally going to use them. Normals represent the direction a surface is facing. By comparing the normal of a fragment with a vector pointing at a light source, we get a value indicating how light or dark that fragment should be. We compare the vectors using the dot product, which gives us the cosine of the angle between them.
If the dot product of the normal and light vector is 1.0, the current fragment is directly in line with the light source and will receive the light's full intensity. A value of 0.0 means the surface is perpendicular to the light, and anything lower means it's facing away, so in either case the fragment will be dark.
With that we can do the actual calculation. Below the ambient_color calculation, but above result, add the following.
```wgsl
let light_dir = normalize(light.position - in.world_position);

let diffuse_strength = max(dot(in.world_normal, light_dir), 0.0);
let diffuse_color = light.color * diffuse_strength;
```
Now we can include the diffuse_color in the result.
```wgsl
let result = (ambient_color + diffuse_color) * object_color.xyz;
```
With that we get something like this.
The normal matrix
Remember when I said passing the vertex normal directly to the fragment shader was wrong? Let’s explore that by removing all the cubes from the scene except one that will be rotated 180 degrees on the y-axis.
```rust
// In the loop we create the instances in
let rotation = cgmath::Quaternion::from_axis_angle(
    (0.0, 1.0, 0.0).into(),
    cgmath::Deg(180.0),
);
```
We’ll also remove the ambient_color from our lighting result.
```wgsl
let result = diffuse_color * object_color.xyz;
```
That should give us something that looks like this.
This is clearly wrong as the light is illuminating the wrong side of the cube. This is because we aren’t rotating our normals with our object, so no matter what direction the object faces, the normals will always face the same way.
We need to use the model matrix to transform the normals to be in the right direction. We only want the rotation data though. A normal represents a direction, and should be a unit vector throughout the calculation. We can get our normals into the right direction using what is called a normal matrix.
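For reference, the textbook normal matrix is the inverse transpose of the model matrix's upper-left 3×3:

$$ \mathbf{n}' = \left( M_{3 \times 3}^{-1} \right)^{\mathsf{T}} \mathbf{n} $$

When the model matrix is just a rotation plus a translation (no non-uniform scaling), that inverse transpose is the rotation itself, which is exactly why the shortcut below works.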
We could compute the normal matrix in the vertex shader, but that would involve inverting the model_matrix, and WGSL doesn't actually have an inverse function, so we would have to code our own. On top of that, computing the inverse of a matrix is really expensive, especially when doing that computation for every vertex.
Instead, we're going to add a normal matrix field to InstanceRaw. Instead of inverting the model matrix, we'll just use the instance's rotation to create a Matrix3.
```rust
impl model::Vertex for InstanceRaw {
    fn desc<'a>() -> wgpu::VertexBufferLayout<'a> {
        use std::mem;
        wgpu::VertexBufferLayout {
            array_stride: mem::size_of::<InstanceRaw>() as wgpu::BufferAddress,
            // We need to switch from using a step mode of Vertex to Instance
            // This means that our shaders will only change to use the next
            // instance when the shader starts processing a new instance
            step_mode: wgpu::VertexStepMode::Instance,
            attributes: &[
                wgpu::VertexAttribute {
                    offset: 0,
                    // While our vertex shader only uses locations 0 and 1 now, in later tutorials we'll
                    // be using 2, 3, and 4 for Vertex. We'll start at slot 5 to not conflict with them later
                    shader_location: 5,
                    format: wgpu::VertexFormat::Float32x4,
                },
                // A mat4 takes up 4 vertex slots as it is technically 4 vec4s. We need to define a slot
                // for each vec4. We don't have to do this in code though.
                wgpu::VertexAttribute {
                    offset: mem::size_of::<[f32; 4]>() as wgpu::BufferAddress,
                    shader_location: 6,
                    format: wgpu::VertexFormat::Float32x4,
                },
                wgpu::VertexAttribute {
                    offset: mem::size_of::<[f32; 8]>() as wgpu::BufferAddress,
                    shader_location: 7,
                    format: wgpu::VertexFormat::Float32x4,
                },
                wgpu::VertexAttribute {
                    offset: mem::size_of::<[f32; 12]>() as wgpu::BufferAddress,
                    shader_location: 8,
                    format: wgpu::VertexFormat::Float32x4,
                },
                // NEW!
                wgpu::VertexAttribute {
                    offset: mem::size_of::<[f32; 16]>() as wgpu::BufferAddress,
                    shader_location: 9,
                    format: wgpu::VertexFormat::Float32x3,
                },
                wgpu::VertexAttribute {
                    offset: mem::size_of::<[f32; 19]>() as wgpu::BufferAddress,
                    shader_location: 10,
                    format: wgpu::VertexFormat::Float32x3,
                },
                wgpu::VertexAttribute {
                    offset: mem::size_of::<[f32; 22]>() as wgpu::BufferAddress,
                    shader_location: 11,
                    format: wgpu::VertexFormat::Float32x3,
                },
            ],
        }
    }
}
```
We need to modify Instance to create the normal matrix.
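A sketch of that change, assuming InstanceRaw gained a normal field alongside its existing model field:

```rust
impl Instance {
    fn to_raw(&self) -> InstanceRaw {
        let model = cgmath::Matrix4::from_translation(self.position)
            * cgmath::Matrix4::from(self.rotation);
        InstanceRaw {
            model: model.into(),
            // Because our instances only rotate and translate (no non-uniform
            // scaling), the rotation matrix itself serves as the normal matrix
            normal: cgmath::Matrix3::from(self.rotation).into(),
        }
    }
}
```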
I'm currently doing things in world space. Doing things in view space, also known as eye space, is more standard, as objects can have lighting issues when they are further away from the origin. If we wanted to use view space, we would have to include the rotation due to the view matrix as well. We'd also have to transform our light's position using something like view_matrix * model_matrix * light_position to keep the calculation from getting messed up when the camera moves.
There are advantages to using view space. The main one is that when you have massive worlds, doing lighting and other calculations in model space can cause issues, as floating-point precision degrades when numbers get really large. View space keeps the camera at the origin, meaning all calculations will use smaller numbers. The actual lighting math ends up the same, but it does require a bit more setup.
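If you do want view space, the core of it is a small transform before the light data is uploaded; a hypothetical sketch (view_matrix and light_position stand in for your camera's matrix and the light's world position, neither of which is part of this chapter's code):

```rust
// Hypothetical: bring the light into view space so lighting stays
// consistent as the camera moves. w = 1.0 marks this as a position,
// so the matrix's translation component applies.
let light_view_pos: cgmath::Vector4<f32> = view_matrix
    * cgmath::Vector4::new(light_position.x, light_position.y, light_position.z, 1.0);
```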
Bringing back our other objects and adding the ambient lighting gives us this.
Specular Lighting
Specular lighting describes the highlights that appear on objects when viewed from certain angles. If you've ever looked at a car, it's the super bright parts. Basically, some of the light can reflect off the surface like a mirror. The location of the highlight shifts depending on the angle you view it from.
Because this is relative to the view angle, we are going to need to pass the camera's position into both the fragment shader and the vertex shader.
We’re going to get the direction from the fragment’s position to the camera, and use that with the normal to calculate the reflect_dir.
```wgsl
// In the fragment shader...
let view_dir = normalize(camera.view_pos.xyz - in.world_position);
let reflect_dir = reflect(-light_dir, in.world_normal);
```
Then we use the dot product to calculate the specular_strength and use that to compute the specular_color.
```wgsl
let specular_strength = pow(max(dot(view_dir, reflect_dir), 0.0), 32.0);
let specular_color = specular_strength * light.color;
```
Finally we add that to the result.
```wgsl
let result = (ambient_color + diffuse_color + specular_color) * object_color.xyz;
```
With that you should have something like this.
If we look at just the specular color on its own, it looks like this.
The half direction
Up to this point, we've actually only implemented the Phong part of Blinn-Phong. The Phong reflection model works well, but it can break down under certain circumstances. The Blinn part of Blinn-Phong comes from the realization that if you add view_dir and light_dir together, normalize the result, and take the dot product of that and the normal, you get roughly the same results without the issues that using reflect_dir had.
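The shader change is small; a sketch of the half-vector version, replacing the reflect_dir lines from before:

```wgsl
let half_dir = normalize(view_dir + light_dir);

// Compare the normal against the half vector instead of using reflect()
let specular_strength = pow(max(dot(in.world_normal, half_dir), 0.0), 32.0);
let specular_color = specular_strength * light.color;
```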