A polynomial in a variable \(x\) can always be written (or rewritten) in the form
\[a_n x^n + a_{n-1} x^{n-1} + \dotsb + a_1 x + a_0\]
where the \(a_{i}\) (\(0 \le i \le n\)) are constants.
Using summation notation, we can express the polynomial concisely as
\[\sum_{i=0}^{n} a_i x^i\]
If \(a_n \neq 0\), the polynomial is called an \(n\)-th degree polynomial.
A monomial in a variable \(x\) is a power of \(x\) where the exponent is a nonnegative integer (i.e., \(x^n\) where \(n\) is a nonnegative integer). You might see another definition of monomial, which allows a nonzero constant as a coefficient (i.e., \(a x^n\) where \(a\) is nonzero and \(n\) is a nonnegative integer). Then an \(n\)-th degree polynomial
\[\sum_{i=0}^{n} a_i x^i\]
can be seen as a linear combination of the monomials \(\{x^i \mid 0 \le i \le n\}\).
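In code, a polynomial in this coefficient form is cheap to evaluate. The following is a minimal Python sketch (the function name `polyval` and the coefficient ordering are our own choices, not from the text) that evaluates \(\sum_{i=0}^{n} a_i x^i\) with Horner's rule:

```python
# A minimal sketch, not from the text: evaluate the polynomial
# a_0 + a_1 x + ... + a_n x^n from its coefficient list using Horner's rule.

def polyval(coeffs, x):
    """Evaluate sum_{i=0}^{n} coeffs[i] * x**i, where coeffs[i] is a_i."""
    result = 0.0
    for a in reversed(coeffs):  # process a_n, a_{n-1}, ..., a_0
        result = result * x + a
    return result

# Example: p(x) = 1 + 2x + 3x^2, so p(2) = 1 + 4 + 12 = 17
print(polyval([1.0, 2.0, 3.0], 2.0))  # 17.0
```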
A Taylor series is a representation of a function as an infinite sum of terms that are calculated from the values of the function’s derivatives at a single point. The Taylor series expansion about \(x=x_0\) of a function \(f(x)\) that is infinitely differentiable at \(x_0\) is the power series
\[f(x_0)+\frac{ f'(x_0) }{1!}(x-x_0)+\frac{ f''(x_0) }{2!}(x-x_0)^2+\frac{ f'''(x_0) }{3!}(x-x_0)^3+\dotsb\]
Using summation notation, we can express the Taylor series concisely as
\[\sum_{k=0}^{\infty} \frac{f^{(k)}(x_0)}{k!}(x-x_0)^k\]
(Recall that \(0! = 1\))
In practice, however, we often cannot compute the (infinite) Taylor series of the function, or the function is not infinitely differentiable at some points. Therefore, we often have to truncate the Taylor series (use a finite number of terms) to approximate the function.
If we use the first \(n+1\) terms of the Taylor series, we get
\[T_n(x) = \sum_{k=0}^{n} \frac{f^{(k)}(x_0)}{k!}(x-x_0)^k\]
which is called the Taylor polynomial of degree \(n\).
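As an illustration (not from the reference text), here is a small Python sketch that evaluates \(T_n(x)\) given a list of precomputed derivative values \(f^{(k)}(x_0)\); the helper name `taylor_polynomial` is our own:

```python
import math

# A minimal sketch (not from the text): evaluate T_n(x) given a list of
# precomputed derivative values [f(x0), f'(x0), ..., f^(n)(x0)].

def taylor_polynomial(derivs_at_x0, x0, x):
    """Evaluate T_n(x) = sum_{k=0}^{n} f^(k)(x0) / k! * (x - x0)**k."""
    return sum(d / math.factorial(k) * (x - x0) ** k
               for k, d in enumerate(derivs_at_x0))

# Example: every derivative of e^x at x0 = 0 equals 1, so the degree-4
# Taylor polynomial at x = 0.5 should be close to e^0.5.
print(taylor_polynomial([1.0] * 5, 0.0, 0.5))  # ~1.6484
print(math.exp(0.5))                           # ~1.6487
```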
Suppose that \(f(x)\) is an \((n+1)\)-times differentiable function of \(x\), and \(T_n(x)\) is the Taylor polynomial of degree \(n\) for \(f(x)\) centered at \(x_0\). Then, as \(h = |x-x_0| \to 0\), the truncation error is bounded by \[ \left|f(x)-T_n(x)\right|\le C \cdot h^{n+1} = O(h^{n+1}) \]
We will see the exact expression of \(C\) in the next section: Taylor Remainder Theorem.
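Before moving on, here is a quick numerical illustration (our own, not from the reference text) of this bound: the error of a degree-2 Taylor polynomial of \(e^x\) about \(x_0 = 0\) should behave like \(O(h^3)\), so the ratio \(\text{error}/h^3\) should stay roughly constant as \(h\) shrinks.

```python
import math

# Our own illustration: the error of the degree-2 Taylor polynomial of e^x
# about x0 = 0 should behave like O(h^3), so halving h should shrink the
# error by roughly a factor of 8 and error / h^3 should stay roughly constant.

def T2(h):
    return 1.0 + h + h ** 2 / 2.0  # degree-2 Taylor polynomial of e^h about 0

for h in [0.1, 0.05, 0.025]:
    err = abs(math.exp(h) - T2(h))
    print(f"h = {h:6.3f}   error = {err:.3e}   error / h^3 = {err / h ** 3:.4f}")
```

The printed ratios settle near \(1/3! \approx 0.17\), which previews the constant \(C\) discussed below.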
Suppose that \(f(x)\) is an \((n+1)\)-times differentiable function of \(x\). Let \(R_n(x)\) denote the difference between \(f(x)\) and the Taylor polynomial of degree \(n\) for \(f(x)\) centered at \(x_0\). Then
\[R_n(x) = f(x) - T_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!} (x-x_0)^{n+1}\]
for some \(\xi\) between \(x\) and \(x_0\). Thus, the constant \(C\) mentioned above is
\[C = \frac{\left|f^{(n+1)}(\xi)\right|}{(n+1)!}\]
Suppose we want to expand \(f(x) = \cos x\) about the point \(x_0 = 0\). Following the formula
\[f(x) = f(x_0)+\frac{ f'(x_0) }{1!}(x-x_0)+\frac{ f''(x_0) }{2!}(x-x_0)^2+\frac{ f'''(x_0) }{3!}(x-x_0)^3+\dotsb\]we need to compute the derivatives of \(f(x) = \cos x\) at \(x = x_0\).
\[\begin{align} f(x_0) &= \cos(0) = 1\\ f'(x_0) &= -\sin(0) = 0\\ f''(x_0) &= -\cos(0) = -1\\ f'''(x_0) &= \sin(0) = 0\\ f^{(4)}(x_0) &= \cos(0) = 1\\ &\vdots \end{align}\]Then
\[\begin{align} \cos x &= f(0)+\frac{f'(0)}{1!}x+\frac{f''(0)}{2!}x^2+\frac{f'''(0)}{3!}x^3+\dotsb\\ &= 1 + 0 - \frac{1}{2}x^2 + 0 +\dotsb\\ &= \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k)!}x^{2k} \end{align}\]Suppose we want to approximate \(f(x) = \sin x\) at \(x = 2\) using a degree-4 Taylor polynomial about (centered at) the point \(x_0 = 0\). Following the formula
\[f(x) \approx f(x_0)+\frac{ f'(x_0) }{1!}(x-x_0)+\frac{ f''(x_0) }{2!}(x-x_0)^2+\frac{ f'''(x_0) }{3!}(x-x_0)^3+\frac{ f^{(4)}(x_0) }{4!}(x-x_0)^4\]we need to compute the first \(4\) derivatives of \(f(x) = \sin x\) at \(x = x_0\).
\[\begin{align} f(x_0) &= \sin(0) = 0\\ f'(x_0) &= \cos(0) = 1\\ f''(x_0) &= -\sin(0) = 0\\ f'''(x_0) &= -\cos(0) = -1\\ f^{(4)}(x_0) &= \sin(0) = 0 \end{align}\]
Then
\[\begin{align} \sin x &\approx f(0)+\frac{f'(0)}{1!}x+\frac{f''(0)}{2!}x^2+\frac{f'''(0)}{3!}x^3+\frac{f^{(4)}(0)}{4!}x^4\\ &= 0 + x + 0 - \frac{1}{3!}x^3 + 0\\ &= x - \frac{x^3}{6} \end{align}\]
Using this truncated Taylor series centered at \(x_0 = 0\), we can approximate \(f(x) = \sin(x)\) at \(x=2\). To do so, we simply plug \(x = 2\) into the above formula for the degree-4 Taylor polynomial, giving
\[\sin(2) \approx 2 - \frac{2^3}{6} = \frac{2}{3} \approx 0.667\]
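We can check this hand computation numerically; the snippet below (our own illustration) compares the degree-4 Taylor polynomial \(x - x^3/6\) against `math.sin` at \(x = 2\):

```python
import math

# Our own check of the hand computation above.
x = 2.0
T4 = x - x ** 3 / 6.0          # degree-4 Taylor polynomial of sin about x0 = 0
print(T4)                      # 0.666...
print(math.sin(x))             # 0.909...
print(abs(math.sin(x) - T4))   # actual error, about 0.243
```

The approximation is rather crude here because \(x = 2\) is far from the expansion point \(x_0 = 0\).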
Suppose we want to approximate \(f(x) = \sin x\) using a degree-4 Taylor polynomial expanded about the point \(x_0 = 0\). We want to compute the error bound for this approximation. Following the Taylor Remainder Theorem,
\[R_4(x) = \frac{f^{(5)}(\xi)}{5!} (x-x_0)^{5}\]
for some \(\xi\) between \(x_0\) and \(x\).
If we want to find an upper bound for the absolute error, we are looking for an upper bound for \(\vert f^{(5)}(\xi)\vert\).
Since \(f^{(5)}(x) = \cos x\), we have \(|f^{(5)}(\xi)|\le 1\). Then \[ |R_4(x)| = \left|\frac{f^{(5)}(\xi)}{5!} (x-x_0)^{5}\right| = \frac{|f^{(5)}(\xi)|}{5!} |x|^{5} \le \frac{1}{120} |x|^{5} \]
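At \(x = 2\), this bound gives \(|R_4(2)| \le 2^5/120 \approx 0.267\). A quick check (again, just an illustration) confirms that the actual error, about \(0.243\), stays below the bound:

```python
import math

# Our own check that the actual error at x = 2 stays below the bound |x|^5 / 120.
x = 2.0
T4 = x - x ** 3 / 6.0
actual_error = abs(math.sin(x) - T4)  # ~0.243
error_bound = abs(x) ** 5 / 120.0     # ~0.267
print(actual_error, error_bound, actual_error <= error_bound)
```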
For a differentiable function \(f:\mathbb{R} \rightarrow \mathbb{R}\), the derivative is defined as
\[f'(x) = \lim_{h \to 0} \frac{f(x+h)-f(x)}{h}\]
Let’s consider the finite difference approximation to the first derivative,
\[f'(x) \approx \frac{f(x+h)-f(x)}{h}\]
where \(h\) is often called a “perturbation”, i.e., a “small” change to the variable \(x\) (small when compared to the magnitude of \(x\)). By Taylor’s theorem, we can write
\[f(x+h) = f(x) + f'(x)\,h + \frac{f''(\xi)}{2} h^2\]
for some \(\xi \in [x,x+h]\). Rearranging the above, we get
\[f'(x) = \frac{f(x+h)-f(x)}{h} - \frac{f''(\xi)}{2}\,h\]
Therefore, the truncation error of the finite difference approximation is bounded by \(M\,h/2\), where \(M\) is a bound on \(\vert f''(\xi) \vert\) for \(\xi\) near \(x\).
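A short sketch (our own example, using \(f(x) = e^x\) at \(x = 1\)) shows the forward-difference error shrinking roughly linearly with \(h\), consistent with the \(M\,h/2\) bound:

```python
import math

# Our own example: forward-difference error for f(x) = e^x at x = 1.
# The error should shrink roughly linearly with h, with error / h near
# |f''(x)| / 2 = e / 2, about 1.36.

def forward_diff(f, x, h):
    """Forward finite difference approximation of f'(x)."""
    return (f(x + h) - f(x)) / h

x = 1.0
exact = math.exp(x)  # f'(x) = e^x
for h in [1e-1, 1e-2, 1e-3, 1e-4]:
    err = abs(forward_diff(math.exp, x, h) - exact)
    print(f"h = {h:.0e}   error = {err:.3e}   error / h = {err / h:.3f}")
```

The ratio \(\text{error}/h\) stays near \(e/2 \approx 1.36\), which is exactly \(|f''(x)|/2\) for \(f(x) = e^x\) at \(x = 1\).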
Reference text: “Scientific Computing: An Introductory Survey” by Michael Heath