OpenGL 4 Shading Language Cookbook, Chapter 3: Lighting, Shading, and Optimization
https://www.cnblogs.com/z12603/p/6860730.html
In Chapter 2, The Basics of GLSL Shaders, we covered a number of techniques for
implementing some of the shading effects that were produced by the former fixed-function
pipeline. We also looked at some basic features of GLSL such as functions and subroutines.
In this chapter, we’ll move beyond the shading model introduced in Chapter 2, The Basics of
GLSL Shaders, and see how to produce shading effects such as spotlights, fog, and cartoon
style shading. We’ll cover how to use multiple light sources, and how to improve the realism of
the results with a technique called per-fragment shading.
We’ll also see techniques for improving the efficiency of the shading calculations by using the
so-called “halfway vector” and directional light sources.
Finally, we’ll cover how to fine-tune the depth test by configuring the early depth test
optimization.
Shading with multiple positional lights
When shading with multiple light sources, we need to evaluate the shading equation for each
light and sum the results to determine the total light intensity reflected by a surface location.
The natural choice is to create uniform arrays to store the position and intensity of each light.
We’ll use an array of structures so that we can store the values for multiple lights within a
single uniform variable.
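Written out with the same symbols the shader below uses, the quantity accumulated for a surface point is simply the single-light ADS equation summed over the five lights:

\[ I = \sum_{i=0}^{4} I_i \left( K_a + K_d\,\max(s_i \cdot n, 0) + K_s\,\max(r_i \cdot v, 0)^{f} \right) \]

Here I_i is the intensity of light i, s_i is the unit vector from the surface point towards light i, n is the surface normal, r_i is the reflection of -s_i about n, v is the unit vector towards the viewer, and f is the shininess exponent (Shininess in the shader).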
The following figure shows a “pig” mesh rendered with five light sources of different colors.
Note the multiple specular highlights.
Set up your OpenGL program with the vertex position in attribute location zero, and the normal
in location one.
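If you are not using a mesh-loading framework, the attribute bindings can be set up with plain OpenGL calls. The following is a minimal sketch, assuming an OpenGL function loader is already initialized and that two VBOs have already been created and filled with the positions and normals (the names vao, positionBuffer, and normalBuffer are placeholders for this example):

#include <GL/glew.h> // or whichever OpenGL loader you use

// Bind the mesh buffers to the attribute locations the shaders below expect.
void bindMeshAttributes(GLuint vao, GLuint positionBuffer, GLuint normalBuffer)
{
    glBindVertexArray(vao);

    // Vertex positions -> attribute location 0
    glBindBuffer(GL_ARRAY_BUFFER, positionBuffer);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
    glEnableVertexAttribArray(0);

    // Vertex normals -> attribute location 1
    glBindBuffer(GL_ARRAY_BUFFER, normalBuffer);
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
    glEnableVertexAttribArray(1);
}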
To create a shader program that renders using the ADS (Phong) shading model with multiple
light sources, use the following steps:
- Use the following vertex shader:
#version 400

layout (location = 0) in vec3 VertexPosition;
layout (location = 1) in vec3 VertexNormal;

out vec3 Color;

struct LightInfo {
    vec4 Position;  // Light position in eye coords.
    vec3 Intensity; // Light intensity
};
uniform LightInfo lights[5];

// Material parameters
uniform vec3 Kd;         // Diffuse reflectivity
uniform vec3 Ka;         // Ambient reflectivity
uniform vec3 Ks;         // Specular reflectivity
uniform float Shininess; // Specular shininess factor

uniform mat4 ModelViewMatrix;
uniform mat3 NormalMatrix;
uniform mat4 MVP;

vec3 ads( int lightIndex, vec4 position, vec3 norm )
{
    vec3 s = normalize( vec3(lights[lightIndex].Position - position) );
    vec3 v = normalize( vec3(-position) );
    vec3 r = reflect( -s, norm );
    vec3 I = lights[lightIndex].Intensity;
    return I * ( Ka +
                 Kd * max( dot(s, norm), 0.0 ) +
                 Ks * pow( max( dot(r, v), 0.0 ), Shininess ) );
}

void main()
{
    vec3 eyeNorm = normalize( NormalMatrix * VertexNormal );
    vec4 eyePosition = ModelViewMatrix * vec4(VertexPosition, 1.0);

    // Evaluate the lighting equation for each light
    Color = vec3(0.0);
    for( int i = 0; i < 5; i++ )
        Color += ads( i, eyePosition, eyeNorm );

    gl_Position = MVP * vec4(VertexPosition, 1.0);
}
- Use the following simple fragment shader:
#version 400

in vec3 Color;

layout( location = 0 ) out vec4 FragColor;

void main() {
    FragColor = vec4(Color, 1.0);
}
- In the OpenGL application, set the values for the lights array in the vertex shader.
For each light, use something similar to the following code. This example uses the
C++ shader program class (prog is a GLSLProgram object).
prog.setUniform("lights[0].Intensity", vec3(0.0f, 0.8f, 0.8f) );
prog.setUniform("lights[0].Position", position );
Update the array index as appropriate for each light; a fuller loop-based sketch follows below.
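In practice, all five lights are usually set in a loop. The following sketch is one way to do it; the ring of light positions, the colors, and the view matrix are choices made for this example, and it assumes the GLSLProgram helper provides setUniform overloads for vec3 and vec4:

#include <cmath>
#include <sstream>
#include <glm/glm.hpp>
#include <glm/gtc/constants.hpp>

// Place five colored lights in a ring around the object and pass them to the shader.
// 'prog' is the GLSLProgram wrapper used above; 'view' is the camera's view matrix.
void setLightUniforms(GLSLProgram &prog, const glm::mat4 &view)
{
    glm::vec3 intensity[5] = {
        glm::vec3(0.0f, 0.8f, 0.8f), glm::vec3(0.0f, 0.0f, 0.8f),
        glm::vec3(0.8f, 0.0f, 0.0f), glm::vec3(0.0f, 0.8f, 0.0f),
        glm::vec3(0.8f, 0.8f, 0.8f)
    };
    for (int i = 0; i < 5; i++) {
        std::stringstream name;
        name << "lights[" << i << "]";
        float angle = glm::two_pi<float>() * (float(i) / 5.0f);
        glm::vec4 worldPos(5.0f * std::cos(angle), 3.0f, 5.0f * std::sin(angle), 1.0f);
        // The shader expects positions in eye coordinates, so apply the view matrix here.
        prog.setUniform((name.str() + ".Position").c_str(), view * worldPos);
        prog.setUniform((name.str() + ".Intensity").c_str(), intensity[i]);
    }
}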
Within the vertex shader, the lighting parameters are stored in the uniform array lights. Each
element of the array is a struct of type LightInfo. This example uses five lights. The light
intensity is stored in the Intensity field, and the position in eye coordinates is stored in the
Position field.
The rest of the uniform variables are essentially the same as in the ADS (ambient, diffuse,
and specular) shader presented in Chapter 2, The Basics of GLSL Shaders.
The ads function is responsible for computing the shading equation for a given light source.
The index of the light is provided as the first parameter lightIndex. The equation is
computed based on the values in the lights array at that index.
In the main function, a for loop is used to compute the shading equation for each light, and
the results are summed into the shader output variable Color.
The fragment shader simply applies the interpolated color to the fragment.
Shading with a directional light source
A core component of a shading equation is the vector that points from the surface location
towards the light source (s in previous examples). For lights that are extremely far away, there
is very little variation in this vector over the surface of an object. In fact, for very distant light
sources, the vector is essentially the same for all points on a surface. (Another way of thinking
about this is that the light rays are nearly parallel.) Such a model would be appropriate for a
distant, but powerful, light source such as the sun. Such a light source is commonly called a
directional light source because it does not have a specific position, only a direction.
If we are using a directional light source, the direction towards the source is the same for
all points in the scene. Therefore, we can increase the efficiency of our shading calculations
because we no longer need to recompute the direction towards the light source for each
location on the surface.
Of course, there is a visual difference between a positional light source and a directional one.
The following figures show a torus rendered with a positional light (left) and a directional light
(right). In the left figure, the light is located somewhat close to the torus. The directional light
covers more of the surface of the torus due to the fact that all of the rays are parallel.
In previous versions of OpenGL, the fourth component of the light position was used to
determine whether or not a light was considered directional. A zero in the fourth component
indicated that the light source was directional and the position was to be treated as a
direction towards the source (a vector). Otherwise, the position was treated as the actual
location of the light source. In this example, we’ll emulate the same functionality.
Set up your OpenGL program with the vertex position in attribute location zero, and the vertex
normal in location one.
To create a shader program that implements ADS shading using a directional light source, use
the following code:
- Use the following vertex shader:
#version 400

layout (location = 0) in vec3 VertexPosition;
layout (location = 1) in vec3 VertexNormal;

out vec3 Color;

uniform vec4 LightPosition;
uniform vec3 LightIntensity;

uniform vec3 Kd;         // Diffuse reflectivity
uniform vec3 Ka;         // Ambient reflectivity
uniform vec3 Ks;         // Specular reflectivity
uniform float Shininess; // Specular shininess factor

uniform mat4 ModelViewMatrix;
uniform mat3 NormalMatrix;
uniform mat4 ProjectionMatrix;
uniform mat4 MVP;

vec3 ads( vec4 position, vec3 norm )
{
    vec3 s;
    if( LightPosition.w == 0.0 )
        s = normalize( vec3(LightPosition) );
    else
        s = normalize( vec3(LightPosition - position) );
    vec3 v = normalize( vec3(-position) );
    vec3 r = reflect( -s, norm );
    return LightIntensity * ( Ka +
                              Kd * max( dot(s, norm), 0.0 ) +
                              Ks * pow( max( dot(r, v), 0.0 ), Shininess ) );
}

void main()
{
    vec3 eyeNorm = normalize( NormalMatrix * VertexNormal );
    vec4 eyePosition = ModelViewMatrix * vec4(VertexPosition, 1.0);

    // Evaluate the lighting equation
    Color = ads( eyePosition, eyeNorm );

    gl_Position = MVP * vec4(VertexPosition, 1.0);
}
- Use the same simple fragment shader from the previous recipe:
#version 400

in vec3 Color;

layout( location = 0 ) out vec4 FragColor;

void main() {
    FragColor = vec4(Color, 1.0);
}
Within the vertex shader, the fourth coordinate of the uniform variable LightPosition is used
to determine whether or not the light is to be treated as a directional light. Inside the ads
function, which is responsible for computing the shading equation, the value of the vector s
is determined based on whether or not the fourth coordinate of LightPosition is zero. If the
value is zero, LightPosition is normalized and used as the direction towards the light source.
Otherwise, LightPosition is treated as a location in eye coordinates, and we compute the
direction towards the light source by subtracting the vertex position from LightPosition and
normalizing the result.
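On the application side, the only difference between the two cases is the value placed in the fourth component of LightPosition. A minimal sketch, using the same prog helper and glm view matrix assumed in the earlier sketches, and remembering that the shader expects eye coordinates:

// Directional light: w = 0, so xyz is a direction towards the light source.
// Multiplying by the view matrix rotates the direction into eye coordinates
// (the translation part has no effect because w is zero).
prog.setUniform("LightPosition", view * glm::vec4(1.0f, 1.0f, 1.0f, 0.0f));

// Positional light: w = 1, so xyz is an actual location in the scene.
// prog.setUniform("LightPosition", view * glm::vec4(5.0f, 5.0f, 2.0f, 1.0f));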
There is a slight efficiency gain when using directional lights due to the fact that there is no
need to recompute the light direction for each vertex. This saves a subtraction operation,
which is a small gain, but could accumulate when there are several lights, or when the lighting
is computed per-fragment.
Using per-fragment shading for improved realism
When the shading equation is evaluated within the vertex shader (as we have done in
previous recipes), we end up with a color associated with each vertex. That color is then
interpolated across the face, and the fragment shader assigns that interpolated color to the
output fragment. As mentioned previously (the Implementing flat shading recipe in Chapter
2, The Basics of GLSL Shaders), this technique is often called Gouraud shading. Gouraud
shading (like all shading techniques) is an approximation, and can lead to less than desirable results when, for example, the reflection characteristics at the vertices have little resemblance to those in the center of the polygon. For instance, a bright specular highlight
may reside in the center of a polygon, but not at its vertices. Simply evaluating the shading
equation at the vertices would prevent the specular highlight from appearing in the rendered
result. Other undesirable artifacts, such as edges of polygons, may also appear when Gouraud
shading is used, due to the fact that color interpolation is less physically accurate.
To improve the accuracy of our results, we can move the computation of the shading equation
from the vertex shader to the fragment shader. Instead of interpolating color across the
polygon, we interpolate the position and normal vector, and use these values to evaluate the
shading equation at each fragment. This technique is often called Phong shading or Phong
interpolation. The results from Phong shading are much more accurate and pleasing, but some undesirable artifacts may still appear.
The following figure shows the difference between Gouraud and Phong shading. The scene
on the left is rendered with Gouraud (per-vertex) shading, and on the right is the same scene
rendered using Phong (per-fragment) shading. Underneath the teapot is a partial plane, drawn
with a single quad. Note the difference in the specular highlight on the teapot, as well as the
variation in the color of the plane beneath the teapot.
In this example, we’ll implement Phong shading by passing the position and normal from the
vertex shader to the fragment shader, and then evaluate the ADS shading model within the
fragment shader.
Set up your OpenGL program with the vertex position in attribute location zero, and the
normal in location one. Your OpenGL application must also provide the values for the uniform
variables Ka, Kd, Ks, Shininess, LightPosition, and LightIntensity, the first four
of which are the standard material properties (reflectivities) of the ADS shading model. The
latter two are the position of the light in eye coordinates, and the intensity of the light source,
respectively. Finally, the OpenGL application must also provide the values for the uniforms
ModelViewMatrix, NormalMatrix, ProjectionMatrix, and MVP.
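One way to compute and pass these matrices with glm is sketched below. The helper name setMatrices and the model/view/projection variables are assumptions of this sketch; the key detail is that NormalMatrix should be the inverse transpose of the upper-left 3x3 of the model-view matrix (which reduces to mat3(mv) when the model-view matrix contains only rotations, translations, and uniform scales):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_inverse.hpp>

// Pass the matrix uniforms the shaders below expect.
// 'prog' is the GLSLProgram wrapper, assumed to have mat3/mat4 setUniform overloads.
void setMatrices(GLSLProgram &prog, const glm::mat4 &model,
                 const glm::mat4 &view, const glm::mat4 &projection)
{
    glm::mat4 mv = view * model;
    prog.setUniform("ModelViewMatrix", mv);
    // Inverse transpose so that non-uniform scales do not skew the normals.
    prog.setUniform("NormalMatrix", glm::inverseTranspose(glm::mat3(mv)));
    prog.setUniform("ProjectionMatrix", projection);
    prog.setUniform("MVP", projection * mv);
}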
To create a shader program that can be used for implementing per-fragment (or Phong)
shading using the ADS shading model, use the following steps:
- Use the following code for the vertex shader:
#version 400

layout (location = 0) in vec3 VertexPosition;
layout (location = 1) in vec3 VertexNormal;

out vec3 Position;
out vec3 Normal;

uniform mat4 ModelViewMatrix;
uniform mat3 NormalMatrix;
uniform mat4 ProjectionMatrix;
uniform mat4 MVP;

void main()
{
    Normal = normalize( NormalMatrix * VertexNormal );
    Position = vec3( ModelViewMatrix * vec4(VertexPosition, 1.0) );
    gl_Position = MVP * vec4(VertexPosition, 1.0);
}
- Use the following code for the fragment shader:
#version 400

in vec3 Position;
in vec3 Normal;

uniform vec4 LightPosition;
uniform vec3 LightIntensity;

uniform vec3 Kd;         // Diffuse reflectivity
uniform vec3 Ka;         // Ambient reflectivity
uniform vec3 Ks;         // Specular reflectivity
uniform float Shininess; // Specular shininess factor

layout( location = 0 ) out vec4 FragColor;

vec3 ads( )
{
    vec3 n = normalize( Normal );
    vec3 s = normalize( vec3(LightPosition) - Position );
    vec3 v = normalize( -Position );
    vec3 r = reflect( -s, n );
    return LightIntensity * ( Ka +
                              Kd * max( dot(s, n), 0.0 ) +
                              Ks * pow( max( dot(r, v), 0.0 ), Shininess ) );
}

void main() {
    FragColor = vec4(ads(), 1.0);
}
The vertex shader has two output variables: Position and Normal. In the main function,
we convert the vertex normal to eye coordinates by transforming with the normal matrix, and
then store the converted value in Normal. Similarly, the vertex position is converted to eye
coordinates by transforming it by the model-view matrix, and the converted value is stored
in Position.
The values of Position and Normal are automatically interpolated and provided to the
fragment shader via the corresponding input variables. The fragment shader then computes
the standard ADS shading equation using the values provided. The result is then stored in the
output variable, FragColor.
Evaluating the shading equation within the fragment shader produces more accurate renderings. However, the price we pay is that the shading model is evaluated for each fragment of the polygon, rather than once per vertex. The good news is that modern graphics cards may have enough processing power to evaluate all of the fragments of a polygon in parallel, which can make the performance of per-fragment shading nearly equivalent to that of per-vertex shading.