Least Squares Data Fitting


Learning Objectives

Linear Regression with a Set of Data

Consider a set of \(m\) data points (where \(m>2\)), \(\{(t_1,y_1),(t_2,y_2),\dots,(t_m,y_m)\}\). Suppose we want to find a straight line that best fits these data points. Mathematically, we are finding \(x_0\) and \(x_1\) such that

\[ y_i = x_1\,t_i + x_0, \quad i = 1, \dots, m. \]

In matrix form, the resulting linear system is

\[ \begin{bmatrix} 1 & t_1 \\ 1& t_2 \\ \vdots & \vdots\\ 1& t_m \end{bmatrix} \begin{bmatrix} x_0\\ x_1 \end{bmatrix} = \begin{bmatrix} y_1\\ y_2\\ \vdots\\ y_m \end{bmatrix} \]

However, we have more equations than unknowns, so there is usually no exact solution to the above problem.
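For concreteness, here is a minimal sketch (with made-up data points) of how this overdetermined system can be assembled with NumPy:

import numpy as np

# hypothetical data points (t_i, y_i)
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])

# design matrix with columns [1, t_i]: one row per data point
A = np.column_stack((np.ones_like(t), t))
b = y
print(A.shape)   # (5, 2): 5 equations, 2 unknowns, so the system is overdetermined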

More generally, suppose we have a linear system

\[{\bf A x} = {\bf b}\]

where \({\bf A}\) is an \(m\times n\) matrix. When \(m>n\), we call this system overdetermined; the equality usually cannot be satisfied exactly, since \({\bf b}\) may not lie in the column space of \({\bf A}\).

Therefore, an overdetermined system is better written as

\[{\bf A x} \cong {\bf b}\]

Linear Least-squares Problem

For an overdetermined system \({\bf A x}\cong {\bf b}\), we are typically looking for a solution \({\bf x}\) that minimizes the squared Euclidean norm of the residual vector \({\bf r} = {\bf b} - {\bf A} {\bf x}\),

\[\min_{ {\bf x} } \|{\bf r}\|_2^2 = \min_{ {\bf x} } \|{\bf b} - {\bf A} {\bf x}\|_2^2.\]

This problem \({\bf A} {\bf x} \cong {\bf b}\) is called a linear least-squares problem, and the solution \({\bf x}\) is called the least-squares solution. The linear least-squares problem \({\bf A} {\bf x} \cong {\bf b}\) always has a solution (though it may not be unique). Here we will first focus on linear least-squares problems.
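As a quick illustration, NumPy's np.linalg.lstsq computes such a least-squares solution directly; a minimal sketch on a small overdetermined system:

import numpy as np

# small overdetermined system (3 equations, 2 unknowns)
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.2, 1.9, 1.0])

# np.linalg.lstsq returns an x minimizing ||b - A x||_2^2
x, res_sq, rank, sing_vals = np.linalg.lstsq(A, b, rcond=None)
print(x)       # least-squares solution [x_0, x_1]
print(res_sq)  # squared 2-norm of the residual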

Data Fitting vs Interpolation

It is important to understand that interpolation and least-squares data fitting, while somewhat similar, are fundamentally different in their goals. In both problems we have a set of data points \((t_i, y_i)\), \(i=1,\ldots,m\), and we are attempting to determine the coefficients for a linear combination of basis functions.

With interpolation, we are looking for the linear combination of basis functions such that the resulting function passes through each of the data points exactly. So, for \(m\) unique data points, we need \(m\) linearly independent basis functions (and the resulting linear system will be square and full rank, so it will have an exact solution).

In contrast, with least-squares data fitting we have a model and are trying to find the parameters of that model that best fit the data points. For example, with linear least squares we may have 300 noisy data points that we want to model as a quadratic function. Therefore, we are trying to represent our data as

\[y = x_0 + x_1 t + x_2 t^2\]

where \(x_0, x_1,\) and \(x_2\) are the unknowns we want to determine (the coefficients of our basis functions). Because there are significantly more data points than parameters, we do not expect the function to pass through the data points exactly. Indeed, with noisy data points we would not want it to: we are looking to model the general trend, not to capture the noise.
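A sketch of this scenario with synthetic noisy data (the numbers below are made up for illustration): the quadratic model leads to a 300-by-3 overdetermined system whose least-squares solution recovers the overall trend rather than the noise.

import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 300)
y = 2.0 + 3.0 * t - 5.0 * t**2 + 0.1 * rng.standard_normal(300)  # noisy quadratic data

# design matrix with columns [1, t, t^2]: 300 equations, 3 unknowns
A = np.column_stack((np.ones_like(t), t, t**2))
x, _, _, _ = np.linalg.lstsq(A, y, rcond=None)
print(x)  # fitted coefficients [x_0, x_1, x_2], close to (2, 3, -5)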

Normal Equations

Consider the least squares problem, \({\bf A} {\bf x} \cong {\bf b}\), where \({\bf A}\) is an \(m \times n\) real matrix (with \(m > n\)). As stated above, the least squares solution minimizes the squared 2-norm of the residual. Hence, we want to find the \(\mathbf{x}\) that minimizes the function:

\[\phi(\mathbf{x}) = \|\mathbf{r}\|_2^2 = (\mathbf{b} - {\bf A} \mathbf{x})^T (\mathbf{b} - {\bf A} \mathbf{x}) = \mathbf{b}^T \mathbf{b} - 2\mathbf{x}^T {\bf A} ^T \mathbf{b} + \mathbf{x}^T {\bf A} ^T {\bf A} \mathbf{x}.\]

To solve this unconstrained minimization problem, we need to satisfy the first order necessary condition to get a stationary point:

\[\nabla \phi(\mathbf{x}) = 0 \Rightarrow -2 {\bf A} ^T \mathbf{b} + 2 {\bf A} ^T {\bf A} \mathbf{x} = 0.\]

The resulting square (\(n \times n\)) linear system

\[{\bf A} ^T {\bf A} \mathbf{x} = {\bf A} ^T \mathbf{b}\]

is called the system of normal equations. If the matrix \({\bf A}\) is full rank, the least-squares solution is unique and given by:

\[{\bf x} = ({\bf A} ^T {\bf A})^{-1} {\bf A} ^T \mathbf{b}\]
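A minimal sketch of computing this solution with NumPy (solving the \(n \times n\) normal equations with a linear solve rather than forming the inverse explicitly):

import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.2, 1.9, 1.0])

# solve the normal equations (A^T A) x = A^T b
x = np.linalg.solve(A.T @ A, A.T @ b)
print(x)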

We can look at the second-order sufficient condition of the minimization problem by evaluating the Hessian of \(\phi\):

\[{\bf H} = 2 {\bf A} ^T {\bf A}\]

Since the Hessian is symmetric and positive-definite (when \({\bf A}\) has full column rank), we confirm that the least-squares solution \({\bf x}\) is indeed a minimizer.

Although the least squares problem can be solved via the normal equations for full-rank matrices, this approach worsens the conditioning of the problem. Specifically,

\[\text{cond}({\bf A}^T {\bf A}) = (\text{cond}({\bf A}))^2.\]

Because of this, finding the least squares solution using the normal equations is often not a good choice, even though it is simple to implement.
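The conditioning relation above is easy to check numerically; a small sketch (with an arbitrary nearly-collinear matrix) using np.linalg.cond, which by default reports the 2-norm condition number:

import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.001],
              [1.0, 2.0]])

print(np.linalg.cond(A) ** 2)    # cond(A)^2
print(np.linalg.cond(A.T @ A))   # cond(A^T A), agrees up to rounding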

Another way to view the linear least-squares problem is to look for the vector \({\bf y} = {\bf A} {\bf x}\) in the column space of \({\bf A}\) that is closest to \({\bf b}\). The vector \({\bf y}\) is closest to \({\bf b}\) exactly when the residual \({\bf r} = {\bf b} - {\bf y} = {\bf b} - {\bf A} {\bf x}\) is orthogonal to all columns of \({\bf A}\), i.e. \({\bf A}^T {\bf r} = {\bf 0}\), which again leads to the normal equations.

Computational complexity:

Since the system of normal equations has a square, symmetric matrix (which is also positive-definite when \({\bf A}\) has full rank), the least-squares solution can be computed using efficient methods such as Cholesky factorization. Note that the computational complexity of the factorization is \(\mathcal{O}(n^3)\). However, the construction of the matrix \({\bf A}^T {\bf A}\) has complexity \(\mathcal{O}(mn^2)\). In typical data fitting problems, \(m \gg n\), and hence the overall complexity of the normal equations method is \(\mathcal{O}(mn^2)\).
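A sketch of the Cholesky route using SciPy (assuming \({\bf A}\) has full column rank, so that \({\bf A}^T {\bf A}\) is symmetric positive-definite):

import numpy as np
import scipy.linalg as sla

A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.2, 1.9, 1.0])

# forming A^T A costs O(m n^2); the Cholesky factorization costs O(n^3)
c, lower = sla.cho_factor(A.T @ A)       # A^T A = L L^T
x = sla.cho_solve((c, lower), A.T @ b)   # two triangular solves
print(x)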

Solving Least-Squares Problems Using SVD

Another way to solve the least-squares problem \({\bf A} {\bf x} \cong {\bf b}\) (where we are looking for \({\bf x}\) that minimizes \(\|{\bf b} - {\bf A} {\bf x}\|_2^2\)) is to use the singular value decomposition (SVD) of \({\bf A}\),

\[{\bf A} = {\bf U \Sigma V}^T\]

where the squared norm of the residual becomes:

\[ \begin{align} \|{\bf b} - {\bf A} {\bf x}\|_2^2 &= \|{\bf b} - {\bf U \Sigma V}^T {\bf x}\|_2^2 & (1)\\ &= \|{\bf U}^T ({\bf b} - {\bf U \Sigma V}^T {\bf x})\|_2^2 & (2)\\ &= \|{\bf U}^T {\bf b} - {\bf \Sigma V}^T {\bf x}\|_2^2 \end{align} \]

We can go from (1) to (2) because multiplying a vector by an orthogonal matrix does not change the 2-norm of the vector. Now let

\[ {\bf y} = {\bf V}^T {\bf x} \]
\[ {\bf z} = {\bf U}^T {\bf b}, \]

then we are looking for \({\bf y}\) that minimizes

\[ \|{\bf z} - \Sigma {\bf y}\|_2^2. \]

Note that

\[ \Sigma {\bf y} = \begin{bmatrix} \sigma_1 y_1\\ \sigma_2 y_2\\ \vdots \\ \sigma_n y_n \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \qquad \sigma_i = \Sigma_{i,i}, \]

where the trailing \(m-n\) entries are zero because the last \(m-n\) rows of \({\bf \Sigma}\) are zero,

so we choose

\[ y_i = \begin{cases} \frac{z_i}{\sigma_i} & \sigma_i \neq 0\\ 0 & \sigma_i = 0 \end{cases} \]

which will minimize \(\|{\bf z} - {\bf \Sigma} {\bf y}\|_2^2\). Finally, we compute

\[{\bf x} = {\bf V} {\bf y}\]

to find \({\bf x}\). The least-squares solution can therefore be written as

\[{\bf x} = \sum_{\sigma_i \neq 0} \frac{ {\bf u}_i^T {\bf b} }{\sigma_i} {\bf v}_i\]

where \({\bf u}_i\) represents the \(i\)th column of \({\bf U}\) and \({\bf v}_i\) represents the \(i\)th column of \({\bf V}\).

In closed-form, we can express the least-squares solution as:

\[{\bf x} = {\bf V\Sigma}^{+}{\bf U}^T{\bf b}\]

where \({\bf \Sigma}^{+}\) is the pseudoinverse of \({\bf \Sigma}\), computed by taking the reciprocal of the non-zero diagonal entries, leaving the zeros in place, and transposing the resulting matrix. For example:

\[\Sigma = \begin{bmatrix} \sigma_1 & & &\\ & \ddots & & \\& & \sigma_r &\\ & & & 0\\ 0 & ... & ... & 0 \\ \vdots & \ddots & \ddots & \vdots \\ 0 & ... & ... & 0 \end{bmatrix} \quad \implies \quad \Sigma^{+} = \begin{bmatrix} \frac{1}{\sigma_1} & & & & 0 & \dots & 0 \\ & \ddots & & & & \ddots &\\ & & \frac{1}{\sigma_r} & & 0 & \dots & 0 \\ & & & 0 & 0 & \dots & 0 \end{bmatrix}.\]

Or in reduced form:

\[{\bf x} = {\bf V\Sigma}_R^{+}{\bf U}_R^T{\bf b}\]

Note: given an already computed reduced SVD, solving the least-squares problem this way has time complexity \(\mathcal{O}(mn)\).
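A sketch of this closed-form expression using NumPy's reduced SVD (full_matrices=False returns \({\bf U}_R\) and the \(n\) singular values); the cutoff 1e-12 for treating a singular value as nonzero is an arbitrary choice for illustration:

import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.2, 1.9, 1.0])

U_R, s, Vt = np.linalg.svd(A, full_matrices=False)   # reduced SVD: A = U_R diag(s) V^T
s_plus = np.divide(1.0, s, out=np.zeros_like(s), where=s > 1e-12)  # invert nonzero singular values
x = Vt.T @ (s_plus * (U_R.T @ b))                    # x = V Sigma_R^+ U_R^T b
print(x)
print(np.linalg.pinv(A) @ b)                         # same result via NumPy's pseudoinverse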

Determining Residual in Least-Squares Problem Using SVD

We’ve shown above how the SVD can be used to find the least-squares solution (the solution that minimizes the squared 2-norm of the residual) to the least-squares problem \({\bf A} {\bf x} \cong {\bf b}\). We can also use the SVD to determine an exact expression for the squared residual norm at the least-squares solution.

Assume that in the SVD of \({\bf A}\), \({\bf A} = {\bf U \Sigma V}^T\), the diagonal entries of \({\bf \Sigma}\) are in descending order (\(\sigma_1 \ge \sigma_2 \ge \dots\)), and that the first \(r\) diagonal entries of \({\bf \Sigma}\) are nonzero while all others are zero; then

\[\begin{align} \|{\bf b} - {\bf A} {\bf x}\|_2^2 &= \|{\bf z} - \Sigma {\bf y}\|_2^2\\ &= \sum_{i=1}^m \left(z_i - (\Sigma {\bf y})_i\right)^2\\ &= \sum_{i=1}^r \left(z_i - \sigma_i \frac{z_i}{\sigma_i}\right)^2 + \sum_{i=r+1}^m (z_i - 0)^2\\ &= \sum_{i=r+1}^m z_i^2 \end{align}\]

Recall that

\[ {\bf z} = {\bf U}^T {\bf b}, \]

we get

\[ \|{\bf b} - {\bf A} {\bf x}\|_2^2 = \sum_{i=r+1}^m ({\bf u}_i^T {\bf b})^2 \]

where \({\bf u}_i\) represents the \(i\)th column of \({\bf U}\).
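A small numerical check of this expression (comparing the SVD formula for the residual against \(\|{\bf b} - {\bf A}{\bf x}\|_2^2\) evaluated at a least-squares solution):

import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.2, 1.9, 1.0])

U, s, Vt = np.linalg.svd(A)              # full SVD: U is m-by-m
r = int(np.sum(s > 1e-12))               # number of nonzero singular values
x, _, _, _ = np.linalg.lstsq(A, b, rcond=None)

print(np.linalg.norm(b - A @ x) ** 2)    # squared residual norm
print(np.sum((U[:, r:].T @ b) ** 2))     # sum over i > r of (u_i^T b)^2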


Example of a Least-squares solution using SVD

Assume we have \(3\) data points, \(\{(t_i,y_i)\}=\{(1,1.2),\,(2,1.9),\,(3,1)\}\), and we want to find the line that best fits these data points. The code for using the SVD to solve this least-squares problem is:

import numpy as np
import numpy.linalg as la

# design matrix with columns [t_i, 1] and right-hand side of observed y_i values
A = np.array([[1, 1], [2, 1], [3, 1]])
b = np.array([1.2, 1.9, 1])

U, s, Vt = la.svd(A)          # note: la.svd returns V transposed
V = Vt.T
y = np.zeros(A.shape[1])
z = np.dot(U.T, b)

# invert only the singular values that are safely away from zero
k = 0
threshold = 0.01
while k < A.shape[1] and s[k] > threshold:
    y[k] = z[k] / s[k]
    k += 1

x = np.dot(V, y)              # map back: x = V y
print("The function of the best line is: y = " + str(x[0]) + "x + " + str(x[1]))

Non-linear Least-squares Problem vs. Linear Least-squares Problem

The above linear least-squares problem is associated with an overdetermined linear system \(A {\bf x} \cong {\bf b}.\) This problem is called “linear” because the fitting function we are looking for is linear in the components of \({\bf x}\). For example, if we are looking for a polynomial fitting function

\[ f(t,{\bf x}) = x_1 + x_2t + x_3t^2 + \dotsb + x_nt^{n-1} \]

to fit the data points \((t_i,y_i),\ i = 1, \dots, m\) (with \(m > n\)), the problem can be solved using the linear least-squares method, because \(f(t,{\bf x})\) is linear in the components of \({\bf x}\) (even though \(f(t,{\bf x})\) is nonlinear in \(t\)).
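For instance, a sketch of setting up such a polynomial fit as a linear least-squares problem (a quadratic here, with made-up data; np.vander with increasing=True builds the columns \(1, t, \dots, t^{n-1}\)):

import numpy as np

t = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])    # hypothetical sample points (m = 6)
y = np.array([1.0, 1.3, 1.9, 2.4, 3.2, 3.9])    # hypothetical measurements
n = 3                                           # quadratic model: x_1 + x_2 t + x_3 t^2

A = np.vander(t, n, increasing=True)            # columns: 1, t, t^2
x, _, _, _ = np.linalg.lstsq(A, y, rcond=None)
print(x)                                        # fitted coefficients [x_1, x_2, x_3]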

If the fitting function \(f(t,{\bf x})\) for data points \((t_i,y_i),\ i = 1, \dots, m\) is nonlinear in the components of \({\bf x}\), then the problem is a non-linear least-squares problem. For example, fitting a sum of exponentials

\[ f(t,{\bf x}) = x_1 e^{x_2 t} + x_3 e^{x_4 t} + \dotsb + x_{n-1} e^{x_n t} \]

is a non-linear least-squares problem.
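Such problems are usually solved with iterative methods; as a sketch, SciPy's curve_fit can fit a single-term exponential model to synthetic data, starting from an initial guess:

import numpy as np
from scipy.optimize import curve_fit

def model(t, x1, x2):
    # f(t, x) = x1 * exp(x2 * t): nonlinear in the parameter x2
    return x1 * np.exp(x2 * t)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 50)
y = 1.5 * np.exp(-0.8 * t) + 0.01 * rng.standard_normal(50)   # synthetic noisy data

x_fit, _ = curve_fit(model, t, y, p0=[1.0, -1.0])   # iterative fit from an initial guess
print(x_fit)                                        # approximately [1.5, -0.8]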

Review Questions

ChangeLog