This page does not represent the most current semester of this course; it is present merely as an archive.
In graphics, we refer to many different Bidirectional X Distribution Functions, of which the most general is the Bidirectional Scattering Distribution Function or BSDF.
A BSDF provides a probability distribution function of the following form: given that a photon with wavelength \lambda struck the object at position \mathbf{p}, arriving from direction \vec i, what is the probability it will emerge from the object at position \mathbf{q} in direction \vec o?
Because the physics of light is reversible, all BSDFs are bidirectional, meaning we can swap (\mathbf{p},\vec i) with (\mathbf{q},\vec o) and get the same probability.
The integral of the BSDF over all (\mathbf{q},\vec o) is called the albedo of the material. If the albedo is 1, every photon that enters also exits. If the albedo is 0, every photon that enters is absorbed. Because of the conservation of energy, albedos cannot be larger than 1. Because of entropy, all real albedos are less than 1.
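As a concrete check of the albedo definition, here is a sketch that numerically integrates a Lambertian (ideal diffuse) reflector, ignoring the positional part of the BSDF so the integral is just over outgoing directions. The helper names are our own, not from any particular renderer; for a Lambertian BRDF with reflectance \rho, the albedo works out to exactly \rho.

```python
import math
import random

def uniform_hemisphere(rng):
    # Uniformly sample a direction on the unit hemisphere around +z;
    # for uniform area sampling, z = cos(theta) is itself uniform in [0, 1].
    z = rng.random()
    phi = 2.0 * math.pi * rng.random()
    r = math.sqrt(max(0.0, 1.0 - z * z))
    return (r * math.cos(phi), r * math.sin(phi), z)

def lambertian_albedo_estimate(reflectance, samples=200_000, seed=0):
    # Albedo = integral over the hemisphere of brdf * cos(theta) d(omega).
    # A Lambertian BRDF is the constant reflectance/pi, and the Monte Carlo
    # estimate is mean(brdf * cos_theta / pdf) with pdf = 1/(2*pi).
    rng = random.Random(seed)
    brdf = reflectance / math.pi
    total = 0.0
    for _ in range(samples):
        x, y, z = uniform_hemisphere(rng)
        total += brdf * z / (1.0 / (2.0 * math.pi))
    return total / samples
```

Running `lambertian_albedo_estimate(0.8)` should return a value very close to 0.8: every bit of reflectance below 1 corresponds to photons that are absorbed rather than re-emitted.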
An approachable overview of the common parts of BSDFs can be found in Light and Matter: The theory of Physically-Based Rendering and Shading by Wes McDermott.
If we had a lot of compute time, we could use BSDFs directly to track every photon’s movement through a scene, using those that are absorbed by a light sensor in a camera to create an image. There are too many photons to really do this1, but it is worth understanding because all other renderings are approximations of photon movement.
To simulate photon transit, we’d create a ray for every photon generated by a light source, find its first intersection with an object, and randomly sample the BSDF of that intersection point to figure out if the photon is absorbed and, if not, where it goes next.
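In toy form, the absorb-or-continue part of that simulation (ignoring all geometry and assuming a single fixed albedo for every surface the photon hits) looks like the following; the expected number of bounces before absorption is a/(1-a) for albedo a:

```python
import random

def photon_bounces(albedo, rng):
    # Follow one photon: at each surface interaction, sample the BSDF's
    # absorb-or-scatter decision. With albedo a, the photon scatters
    # (survives) with probability a and is absorbed with probability 1-a.
    bounces = 0
    while rng.random() < albedo:
        bounces += 1
    return bounces

# Average bounce count over many photons; analytically a/(1-a), so 1.0 here.
rng = random.Random(0)
mean = sum(photon_bounces(0.5, rng) for _ in range(100_000)) / 100_000
```

A real simulator would also pick an outgoing direction at each bounce and trace the new ray through the scene; this sketch keeps only the random absorption decision.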
Instead of tracking individual photons, we’re going to assume that there are so many photons we can treat their number as a continuous light intensity. We’re usually also going to assume that individual wavelengths don’t matter2 and that the light intensity has a color, typically stored as RGB.
With those assumptions, we can now compute many photons at once and use bidirectionality to focus our computation on parts of the scene that are most likely to contribute to the final image. In particular, we will now

1. Shoot rays backwards from the camera into the scene
2. When we hit an object, sum up all the incident light at that point, considering for each possible light source direction how much light arrives and how likely it is to exit towards our ray
That sum of all lights has the general form \int_S L P d\Omega where

- S is all possible directions (i.e. (\mathbf{p}, \vec i) from the BSDF)
- d\Omega is an infinitesimal region of directions
- L is the light coming from \Omega
- P is the probability that light coming from \Omega will exit towards our ray (i.e. towards (\mathbf{q}, \vec o) from the BSDF)
Given this integral, we are now left with the challenge of solving it.
Likely in your calculus class you learned two broad ways to solve integrals.
You likely spent most of your time learning analytical formulas resulting in closed-form equations, such as \int x^{-1} dx = \ln(x) + C. Almost every function in graphics is far too complicated to solve using these kinds of formulas.
You likely were also exposed to a few numerical approaches like Riemann sums and the trapezoidal rule and maybe even generalized techniques like the Newton-Cotes formulas and Gaussian quadrature. These work by sampling the function to be integrated at a few points (usually evenly spaced) and connecting the points to create some kind of piecewise-polynomial approximation of the function, which then has a simple closed-form solution. Nice ideas, but difficult to apply to directions (samples on the unit sphere are hard to connect) and often susceptible to visually-jarring aliasing artifacts, so also not what we use in graphics.
Instead, we use various Monte Carlo methods.
Given a function f(x) we can approximate \int f(x) dx by evaluating f(x) at randomly-chosen xs and averaging the results. In particular, if we select a set X of x stochastically from probability density function P(x) then \frac{1}{|X|}\sum_{x \in X} f(x) \approx \int_{-\infty}^{\infty} f(x) P(x) dx
If P is chosen such that it is non-zero between a and b and zero outside that interval then \frac{1}{|X|}\sum_{x \in X} f(x) \approx \int_{a}^{b} f(x) P(x) dx Notably, if P is a uniform distribution between a and b then P(x) is the constant value \frac{1}{b-a} in that interval and \frac{1}{|X|}\sum_{x \in X} f(x) \approx \frac{1}{b-a}\int_{a}^{b} f(x) dx
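A minimal sketch of the uniform case in Python (function names are our own), estimating \int_0^\pi \sin(x)\, dx, whose exact value is 2:

```python
import math
import random

def mc_integrate_uniform(f, a, b, samples=100_000, seed=0):
    # Uniform Monte Carlo: P(x) = 1/(b-a) on [a, b], so the estimate of
    # integral_a^b f(x) dx is (b-a) * mean(f(x)) over x ~ Uniform(a, b).
    rng = random.Random(seed)
    total = sum(f(rng.uniform(a, b)) for _ in range(samples))
    return (b - a) * total / samples

estimate = mc_integrate_uniform(math.sin, 0.0, math.pi)  # exact value is 2
```

Unlike quadrature rules, nothing here depends on the samples being orderable or connectable, which is why the same idea extends directly to sampling directions on a sphere.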
Naive Monte Carlo methods pick P to be a uniform distribution in order to evaluate \int f(x) dx. But we can do better than that.
In an integral, most of the result comes from where the function has a large value. Because of this, we can converge on the answer more quickly if our Monte Carlo method uses a distribution that preferentially samples from the high-value areas.
Consider the function f(x) = \begin{cases}1 &\text{if}\;|x| < 0.5\\0&\text{otherwise}\end{cases}. When evaluating \displaystyle \int_{-10}^{10} f(x) dx, the value of the integral comes entirely from x between -0.5 and 0.5; numbers outside that range add 0. Thus, we’ll converge more quickly if we use a P that preferentially samples near zero.
To realize this preferential sampling, we need a prior, our estimate of the likely shape of the function we are integrating. We use the prior as the probability distribution for samples, sampling more often where we think the function has a higher value. To compensate for this sampling bias, at each sample we evaluate f(x) / P(x) instead of f(x), where P(x) is the prior we used to generate x.
This approach of Monte Carlo integration with preferential sampling where we expect the function to be large is called importance sampling and is a major topic in optimizing raytracing.
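Here is a sketch of importance sampling applied to the indicator function from the example above. The prior is our choice for illustration: a standard normal distribution, which concentrates samples near zero where we expect f to be large. The exact integral is 1.

```python
import math
import random

def f(x):
    # The example function from above: 1 when |x| < 0.5, else 0.
    return 1.0 if abs(x) < 0.5 else 0.0

def normal_pdf(x):
    # Density of the standard normal prior used to generate samples.
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def importance_estimate(samples=100_000, seed=0):
    # Sample x from the prior; compensate for the sampling bias by
    # evaluating f(x) / P(x) instead of f(x).
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        x = rng.gauss(0.0, 1.0)
        total += f(x) / normal_pdf(x)
    return total / samples
```

With the same number of samples, a uniform distribution on [-10, 10] would waste about 95% of its samples on regions where f is zero, so this estimator converges noticeably faster.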
The simplest version of raytracing runs as follows:

1. Shoot a ray from the camera and see what it hits
2. Attenuate the intensity of the ray according to the albedo of the material
3. If the intensity is still high enough to contribute significantly to the scene,
   a. shoot n secondary rays, sampled from the BSDF of the material, and recursively repeat this process for each
   b. shoot a ray towards each light source to check for shadows
4. Otherwise, stop bouncing this ray
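Under heavy simplifying assumptions, this recursive loop can be sketched as a toy path tracer. The "scene" here is deliberately fake: every camera ray hits a diffuse surface, and each secondary ray either escapes to a constant-emission environment or hits the surface again with fixed probability. Real renderers replace this with actual geometry and BSDF sampling.

```python
import random

ALBEDO = 0.5        # fraction of light the toy surface reflects
P_ESCAPE = 0.5      # chance a secondary ray reaches the environment light
ENV_EMISSION = 1.0  # constant radiance of the environment
MAX_BOUNCES = 20    # the "high enough" cut-off, here a bounce count

def trace(rng, bounces_left=MAX_BOUNCES):
    if bounces_left == 0:
        return 0.0  # step 4: give up on rays that bounce too long
    # Steps 1-2: hit the surface and attenuate by its albedo.
    # Step 3.a with n = 1: sample one secondary ray from the "BSDF".
    if rng.random() < P_ESCAPE:
        incoming = ENV_EMISSION                  # secondary ray reached a light
    else:
        incoming = trace(rng, bounces_left - 1)  # recurse on a new surface hit
    return ALBEDO * incoming

rng = random.Random(0)
n_primary = 200_000
image_value = sum(trace(rng) for _ in range(n_primary)) / n_primary
# Analytic value for this toy scene: a*p / (1 - a*(1-p)) = 1/3.
```

Averaging over many primary rays with n=1 secondary ray each is exactly the "make up for few secondary rays by shooting more primary rays" strategy discussed below.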
There are two primary tuning parameters here. The first is what "high enough" means in step 3. Some use a particular intensity cut-off, some a particular number of bounces from the initial ray, some a combination. The second is what n to pick in step 3.a. Some use n=1, making up for few secondary rays by shooting more primary rays, prioritizing simplicity and antialiasing; others use n varying depending on the material, sampling more if the material has a wide BSDF.
The math of 3.a. and 3.b. deserves additional comment. Both are ways of evaluating \int_S L P d\Omega; 3.a. uses P as the sampling distribution and evaluates L by re-running this entire process for each of the n rays generated; 3.b. uses an approximation of L as the sampling distribution and evaluates the true L by looking for shadows (i.e. did the ray pointed at light X hit light X or something else?), multiplying the results by P from the BSDF.
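Step 3.b. can be sketched in a few lines. Everything here is illustrative stand-in data rather than a real renderer API: a shading point with a surface normal, two point lights given as (direction-to-light, intensity) pairs, a fake occlusion test in place of a real shadow ray, and a Lambertian surface so that P from the BSDF is \cos\theta / \pi.

```python
import math

normal = (0.0, 0.0, 1.0)                 # surface normal at the shading point
lights = [((0.0, 0.0, 1.0), 2.0),        # light straight overhead
          ((0.8, 0.0, 0.6), 1.0)]        # light off to the side (unit vector)

def occluded(direction):
    # Stand-in for a shadow ray: pretend the sideways light is blocked.
    return direction[0] > 0.5

def lambertian_p(direction):
    # P from the BSDF for a Lambertian surface: cos(theta) / pi.
    cos_theta = sum(n * d for n, d in zip(normal, direction))
    return max(0.0, cos_theta) / math.pi

def direct_light():
    # Step 3.b: aim a ray at each light; if the shadow ray reaches the
    # light, multiply the light's intensity by P from the BSDF.
    total = 0.0
    for direction, intensity in lights:
        if not occluded(direction):
            total += intensity * lambertian_p(direction)
    return total
```

Here only the overhead light survives the shadow test, so the direct contribution is its intensity times P evaluated at \cos\theta = 1.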
Step 3.a. in the basic raytracing process includes importance sampling based on the BSDF, but ignores sampling based on the light intensity. When raytracing literature refers to importance sampling, it generally means improving that to include a prior of light intensity. Then rays are generated based on P_{\text{BSDF}} P_{\text{Light prior}} and the added influence of sampling L is L / P_{\text{Light prior}}.
Light priors do not have a single obvious best form. We want some estimate of how much light is likely to come from each direction that we can easily update as we learn more about lighting during the raytracing process and easily sample from during random ray generation. While an important and interesting topic, solutions to this problem are out of scope for this course.