Many graphics problems involve differential equations. They have the general form \dfrac{d\vec x}{dt} = f(\vec x): the rate of change of \vec x is some function of \vec x. For most graphics simulations, the following all hold:
We describe the differential equation locally. For example, the motion of each bit of cloth depends on its state and the states of the nearby bits of cloth.
But the local bits chain together to form non-local results.
And the resulting equation has no closed-form solution: the solution families you learn about in differential equations class don’t work.
So we solve them by
Making a linear approximation of the equations at the current time.
Because of Newton’s Third Law, the resulting matrix is usually symmetric.
Because motion is smooth and the linearization of smooth systems is convex, the resulting matrix is positive-definite; that is, all its eigenvalues are positive.
Because the equations were local, the resulting matrix is sparse; that is, most of its entries are zero.
Solving the linear system.
The Conjugate Gradient method is particularly good at solving sparse symmetric positive-definite problems.
Using the solution to bump up to a slightly later time.
There are two approximations being used here: first, we treat complicated math as if it were linear, and, second, we use a bunch of small steps in time instead of a continuous model of time. These two are related: as each time step length approaches zero, the linear solution approaches the correct solution.
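To see how the two approximations fit together, consider one backward-Euler step of size h applied to \dfrac{d\vec x}{dt} = f(\vec x). This is only a sketch of the general pattern, not the specific integrator any particular assignment uses:

\vec x_{t+h} = \vec x_t + h\, f(\vec x_{t+h}) \approx \vec x_t + h\, f(\vec x_t) + h\, J\, (\vec x_{t+h} - \vec x_t), \qquad J = \left.\dfrac{\partial f}{\partial \vec x}\right|_{\vec x_t}

Rearranging gives

(I - h J)\,(\vec x_{t+h} - \vec x_t) = h\, f(\vec x_t),

a linear system of the form A \vec x = \vec b in which the unknown is the change in state over the step, A = I - hJ, and \vec b = h\, f(\vec x_t). With the properties listed above, that A is sparse, symmetric, and positive-definite, and shrinking h shrinks the error introduced by the linearization.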
This section presents the conjugate gradient method. It is generally the fastest class of solvers for the kinds of differential equations we encounter in graphics. It iteratively improves an estimated solution of the equation A \vec x = \vec b, driving the residual (A \vec x - \vec b) toward zero. Its general process runs as follows:
Given a linear system A \vec x = \vec b, where A is sparse, symmetric, and positive-definite:
Store A well. The operation we’ll do with A is A \vec v for various \vec v, so we need a storage scheme that makes multiplication by a vector fast. LIL is the easiest such storage technique to code: a matrix is a list of rows, and each row is a list of (column index, value) pairs for the non-zero entries in that row. (A sketch of this storage appears after the algorithm below.)
Optionally, precondition the system. Instead of solving A \vec x = \vec b, solve E^{-1}A(E^{-1})^T \vec y = E^{-1}\vec b, where \vec y = E^T \vec x.
Preconditioning is a topic we’ll not really discuss in this class. A good E can make the algorithm converge in fewer steps, but finding a good E adds up-front work, and the better the E the more work it takes to find; picking E is thus a balancing act, one more involved than we can adequately cover in this class.
Use the following algorithm which, given an initial estimate \vec x, iteratively improves it:
\vec r = \vec b - A \vec x // the residual vector; stores the error of the current estimate
\vec p = \vec r // the direction we’ll move \vec x in to improve it
\rho = \vec r \cdot \vec r // the squared magnitude of \vec r
repeat until \rho is sufficiently small:
\vec a_p = A \vec p
\alpha = \dfrac{\rho}{\vec p \cdot \vec a_p} // how far to move in the \vec p direction
\vec x \mathrel{+}= \alpha \vec p // move
\vec r \mathrel{-}= \alpha \vec a_p // update the error
\rho' = \vec r \cdot \vec r
\beta = \dfrac{\rho'}{\rho} // used to ensure each direction is conjugate with all previous directions
\rho := \rho' // update the error squared magnitude
\vec p \mathrel{*}= \beta // update the direction for the next step
\vec p \mathrel{+}= \vec r // update the direction for the next step
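To make the LIL storage from step 1 and the loop above concrete, here is a minimal sketch in Python. The function names, the tolerance handling, and the use of plain Python lists are illustrative choices rather than a required interface; the loop itself follows the pseudocode above step for step.

def matvec(A_lil, v):
    # A_lil is a list of rows; each row is a list of (column index, value)
    # pairs for the non-zero entries in that row.
    result = [0.0] * len(A_lil)
    for i, row in enumerate(A_lil):
        for j, value in row:
            result[i] += value * v[j]
    return result

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def conjugate_gradient(A_lil, b, x, tolerance=1e-8, max_iterations=1000):
    # r is the residual, p the search direction, rho the squared magnitude of r.
    r = [bi - axi for bi, axi in zip(b, matvec(A_lil, x))]
    p = list(r)
    rho = dot(r, r)
    for _ in range(max_iterations):
        if rho < tolerance:          # "sufficiently small" is problem-dependent
            break
        ap = matvec(A_lil, p)
        alpha = rho / dot(p, ap)     # how far to move in the p direction
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, ap)]
        rho_new = dot(r, r)
        beta = rho_new / rho         # keeps each new direction conjugate to the previous ones
        rho = rho_new
        p = [ri + beta * pi for ri, pi in zip(r, p)]  # p := r + beta p
    return x

As a tiny check: storing the 2 \times 2 matrix with rows (4, 1) and (1, 3) as [[(0, 4.0), (1, 1.0)], [(0, 1.0), (1, 3.0)]] and solving against \vec b = (1, 2) from a zero initial estimate reaches (1/11, 7/11) \approx (0.0909, 0.6364) within two iterations, as expected for a 2-dimensional symmetric positive-definite system.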