This page does not represent the most current semester of this course; it is present merely as an archive.
Here we discuss only the simplest form of shadow mapping: the case where the shadows are cast by one light and all fall inside a reasonably-narrow frustum.
As discussed in Using Textures, the basic steps to shadow mapping are:

1. Render the scene from the light’s point of view into a texture, keeping only each fragment’s depth (the shadow map).
2. Render the scene from the camera’s point of view; for each fragment, look up the corresponding shadow map texel and treat the fragment as shadowed if something else was closer to the light.
Elements of realizing this outline are described below.
You’ll want two shader programs: one with a cheap fragment shader for rendering shadows and one with an expensive fragment shader for rendering the lit scene. We also want to send the exact same geometry to both. That means we’d rather use just one vertex array object, shared between both programs.
Every in variable in a vertex shader is given an index. We have thus far discovered those indices in our JavaScript using gl.getAttribLocation, but we could also have hard-coded locations once and for all. If we hard-code the same locations in multiple shaders we can use the same VAO for multiple shaders, saving us considerable work.
Hard-coding locations involves two parts:
Add layout(location = 0) and the like before each in variable in the vertex shader. For example,
layout(location = 0) in vec4 position;
layout(location = 1) in vec2 texCoord;
Locations should be consecutive integers starting at 0. They should be the same in all vertex shaders.
After compiling and attaching the shaders but before linking them, call gl.bindAttribLocation with each of these same locations. For example,
let program = gl.createProgram()
gl.attachShader(program, vs)
gl.attachShader(program, fs)
gl.bindAttribLocation(program, 0, "position"); // ← new
gl.bindAttribLocation(program, 1, "texCoord"); // ← new
gl.linkProgram(program)
if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
    console.error(gl.getProgramInfoLog(program))
    throw Error("Linking failed")
}
Once hard-coded in this way, we can change our geometry setup to not refer to a program. For example, instead of
let loc = gl.getAttribLocation(program, 'texCoord')
gl.vertexAttribPointer(loc, data[0].length, gl.FLOAT, false, 0, 0)
gl.enableVertexAttribArray(loc)
we’d have
let loc = 1 // same as the location specified in the vertex shader
gl.vertexAttribPointer(loc, data[0].length, gl.FLOAT, false, 0, 0)
gl.enableVertexAttribArray(loc)
That was the only use of program in our previous geometry setup, so once we do this the VAO is program-independent.
Should we always use fixed indices?
The layout(location = 0) notation is accepted for all in, out, and uniform variables in both vertex and fragment shaders and removes dependence on specific shader program instances and on matching identifier names. Because of this, some developers feel that locations should always be used because they scale nicely to many shaders.
However, explicit layout locations tend to be less programmer-friendly. Attributes (geometry buffers, vertex in) have one set of locations, varyings (vertex out, fragment in) have another, and uniforms have a third, so location 0 is ambiguous and potentially confusing. Because of that we can’t simply think in terms of locations alone, meaning we need a name-to-location mapping in our head that we can easily get wrong.
As a consequence of neither being the clear winner, opinions about the correctness of fixed indices vs dynamic name-based indices vary widely. Some libraries that wrap WebGL and other GLSL-related APIs add their own third option, for example by requiring something like Hungarian notation in your code and automatically transforming that into fixed indices under the hood.
We’ll draw the scene twice each frame. Once we’ll render using the shadow map’s shaders and render to a texture. Then we’ll render using the display’s shaders and render to the canvas. To support this, it’s best to have the drawing function split in two; conceptually something like
drawStuff() {
draw 1st object
draw 2nd object
and so on
}
drawScene() {
load shadow program
set up and clear shadow buffer as destination
drawStuff()
load display program
set up and clear canvas as destination
drawStuff()
}
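The pseudocode above might be realized roughly as follows. This is only a sketch: the names shadowProgram, displayProgram, shadowFB, and smSize are assumed to come from the setup described elsewhere on this page, and drawStuff is a placeholder for the real per-object draws.

```javascript
// Placeholder: draw each object using `program`'s uniforms but shared VAOs.
function drawStuff(gl, program) {
    // draw 1st object, draw 2nd object, and so on
}

function drawScene(gl, state) {
    // pass 1: depth-only render from the light into the shadow map
    gl.useProgram(state.shadowProgram)
    gl.bindFramebuffer(gl.FRAMEBUFFER, state.shadowFB)
    gl.viewport(0, 0, state.smSize, state.smSize)
    gl.clear(gl.DEPTH_BUFFER_BIT) // shadow buffer has only a depth channel
    drawStuff(gl, state.shadowProgram)

    // pass 2: full lighting render from the camera into the canvas
    gl.useProgram(state.displayProgram)
    gl.bindFramebuffer(gl.FRAMEBUFFER, null) // null means "the canvas"
    gl.viewport(0, 0, gl.canvas.width, gl.canvas.height)
    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT)
    drawStuff(gl, state.displayProgram)
}
```

Note that the viewport must be reset for each pass because the shadow map and the canvas usually have different sizes.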
If you have a scene where different objects need different shaders, this organization will become more complicated.
WebGL renders to something called a framebuffer, which is a raster with two roles. One role is as the destination where colors and so on are written during rendering. The other role is as something to display in some fashion. By default, WebGL creates one RGBA framebuffer for the canvas object; its display in some fashion is display in the canvas and is handled by the browser.
We’ll want a second framebuffer for the shadow map; its display in some fashion will be use as a texture during the display rendering. To do that we’ll need to
// 1. make a texture
const shadowMap = gl.createTexture();
const smSize = 512; // power of 2 fastest on some GPUs
gl.bindTexture(gl.TEXTURE_2D, shadowMap);
gl.texImage2D(
    gl.TEXTURE_2D,         // same we bound to in line above
    0,                     // mip level
    gl.DEPTH_COMPONENT32F, // 2. set up high-fi channel
    smSize, smSize,        // width and height
    0,                     // border
    gl.DEPTH_COMPONENT,    // format
    gl.FLOAT,              // 2. set up high-fi channel
    null);                 // no data: we'll render into it instead
// add the usual `gl.texParameteri` calls here;
// typically NEAREST and CLAMP_TO_EDGE are used for shadow maps
// 3. make a framebuffer
const shadowFB = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, shadowFB);
// 4. connect the two
gl.framebufferTexture2D(
    gl.FRAMEBUFFER,      // same we bound to in line above
    gl.DEPTH_ATTACHMENT, // should get the depth buffer (only)
    gl.TEXTURE_2D,       // destination is bound here
    shadowMap,           // destination is stored here
    0);                  // mip level
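After connecting the two, it can be worth verifying that the framebuffer is complete before trying to render into it; a small sketch:

```javascript
// Check the currently-bound framebuffer; returns true if it is usable.
function checkShadowFramebuffer(gl) {
    const status = gl.checkFramebufferStatus(gl.FRAMEBUFFER)
    if (status !== gl.FRAMEBUFFER_COMPLETE) {
        console.error('shadow framebuffer incomplete, status', status)
        return false
    }
    return true
}
```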
For the shadow map pass, we’ll load a view and projection matrix for the light. For the scene pass, we’ll load a view and projection matrix for the camera. We’ll also presumably have various model matrices for different scene objects.
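One way to organize this is a helper called once per pass; the uniform names 'view' and 'proj' below are assumptions for illustration, not names this page prescribes.

```javascript
// Hypothetical helper: upload one pass's view and projection matrices
// (the light's for the shadow pass, the camera's for the scene pass).
function setPassMatrices(gl, program, view, proj) {
    gl.uniformMatrix4fv(gl.getUniformLocation(program, 'view'), false, view)
    gl.uniformMatrix4fv(gl.getUniformLocation(program, 'proj'), false, proj)
}
```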
In the scene pass’s fragment shader, we’ll have a fragment coordinate and want to find which texel of the shadow map it corresponds to. There are multiple ways to do that, but the easiest to describe is this: transform the fragment’s world-space position by the light’s view and projection matrices, divide by the resulting w to get coordinates in [−1, 1], then multiply x and y by smSize/2 and add smSize/2 to get a shadow map texel; the transformed z is the depth to compare against what that texel stores.
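The final scale-and-offset step can be sketched in plain JavaScript (in the real renderer this arithmetic lives in the fragment shader):

```javascript
// Map normalized device coordinates in [-1, 1] to a shadow-map texel index,
// where smSize is the shadow map's width and height.
function ndcToTexel(ndcX, ndcY, smSize) {
    return [
        Math.floor(ndcX * smSize / 2 + smSize / 2),
        Math.floor(ndcY * smSize / 2 + smSize / 2),
    ]
}
```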
Parts of the fragment shader work above can be moved to the vertex shader, improving runtime by doing the work only once per vertex instead of once per fragment. How much can be so moved?
Naively implemented, shadow maps will cause roughly half of all not-in-shadow fragments to appear to be in shadow. This is because the texel in the shadow map was generated in a slightly different place on the model than the fragment was, meaning it was generated at a slightly different depth from the light. The name for this phenomenon is shadow acne.
Several solutions to this are possible.
Shadow bias.
A bias is a decision process that favors one outcome over the other. A shadow bias favors not in shadow by checking z_{\text{shadow map}} + \epsilon < z_{\text{fragment}}, where \epsilon is the bias.
This is easy to implement but hard to tune: too small a bias gives self-shadowing while too large a bias gives bleed-through lighting when the shadow caster is close to the shadow receiver.
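The comparison itself is one line; a sketch with an illustrative default epsilon (real values must be tuned per scene):

```javascript
// Shadow-bias test: favor "not in shadow" by epsilon.
function inShadow(shadowMapZ, fragmentZ, epsilon = 0.001) {
    return shadowMapZ + epsilon < fragmentZ
}
```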
Culling.
If all your objects are 2-sided and have consistent triangle winding directions then you can cull front faces during shadow projection and back faces during scene rendering. Because shadows are being cast by the back of the object but received by the front, they will only self-shadow for very thin objects.
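In WebGL that amounts to a per-pass cull-face setting; a sketch:

```javascript
// Cull front faces while drawing the shadow map and back faces while drawing
// the scene, so shadow depths come from the backs of objects.
function setCulling(gl, pass) {
    gl.enable(gl.CULL_FACE)
    gl.cullFace(pass === 'shadow' ? gl.FRONT : gl.BACK)
}
```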
Multiple samples.
Instead of using the GLSL built-in texture function to look up texels, we could manually compute several indices into the texture: textureSize(shadowMap, 0) retrieves the texture’s dimensions and texelFetch(shadowMap, someVec2i, 0) retrieves a specific texel. We could, for example, say that a fragment is in shadow only if all four of its surrounding shadow map texels are closer to the light than it is.
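The four-texel rule can be sketched in plain JavaScript; in the real fragment shader each sample call would be a texelFetch on the shadow map, and all names here are illustrative.

```javascript
// In shadow only if every one of the four surrounding texels is closer to the
// light than the fragment. `sample(ix, iy)` stands in for texelFetch.
function inShadow4(sample, x, y, fragDepth, epsilon) {
    const x0 = Math.floor(x), y0 = Math.floor(y)
    const neighbors = [
        sample(x0, y0), sample(x0 + 1, y0),
        sample(x0, y0 + 1), sample(x0 + 1, y0 + 1),
    ]
    return neighbors.every(z => z + epsilon < fragDepth)
}
```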
Taking multiple samples is also one way to create soft shadows, though the best techniques for that work in different ways.