BIOE 205

Lecture 06

Reading material: Sections 3.1 - 3.3 of CSSB.

Recap & Intro

Last time, we discussed correlation, the concept of orthogonality, inner products, some caveats that arise when using correlation, and the use of averaging to filter out noise.

In this lecture and the next we will try to tie together a few concepts introduced so far to understand the Fourier series and expansions.

  1. Some background
    1. Vector spaces
    2. Basis
    3. Why do we care?
  2. Fourier Transform(s)
    1. Discrete Fourier Transform (DFT)
    2. Fourier Series
    3. Fourier Transform
  3. Wrap up

⚠️ Note
CSSB provides an alternative introduction to Fourier analysis from the one in this lecture; this lecture note does not replace it. In particular, students should read Sections 3.3.1 - 3.3.6 (inclusive) of CSSB. The textbook also provides code implementations of many of the algorithms and procedures, so these notes will instead try to supplement intuition.

Some background

We tend to think of a vector as a tuple or collection of real numbers. Intuitively, this makes sense to us because in two and three dimensions these correspond to the familiar coordinates from 2D and 3D geometry. Vector addition, multiplication, etc. are thus intuitively familiar to us and we have no qualms with writing: let $x \in \mathbb{R}^3$ refer to a vector in 3D. In fact, we are even comfortable with: let $y \in \mathbb{R}^n$. Now let us take a step back and think of $\mathbb{R}^n$.

What is it?

Vector spaces

We naturally think of $\mathbb{R}^n$ as the space in which $x$ or $y$ or our vectors live in or belong to - a vector space. How is this formalized? Well, recall from BIOE210[1] that we have field axioms that specify the rules followed by elements that we can do algebra with. Similarly, there are axioms or rules that a space $V$ must satisfy to be a vector space. Note that there is no specification in the field axioms on what the elements $a$, $b$ and $c$ are: only that they follow some rules.

In the same vein, there are no restrictions on what we can consider as vectors in a vector space as long as they follow these rules. Don't be intimidated by the math on that page: you already know a few vector spaces! For example, $\mathbb{R}$, $\mathbb{R}^2$ and $\mathbb{R}^n$ are all vector spaces which follow those rules; only that the rules are written up in an abstract way.

The key point here is that $\mathbb{R}^n$ is not the only thing that follows the rules of a vector space. There are more abstract objects that we can show are vector spaces! This includes … you guessed it: certain classes of signals and functions.

Basis

Now follows a slightly wordy definition:

Definition: (Basis) Given a vector space $V$, a set of vectors $B$ serves as a basis for $V$ if every element of $V$ may be written in a unique way as a finite linear combination of elements of $B$.
There is of course a more formal and precise definition one can give but that will get into more math than we need here. Instead we will try to understand what a basis is with some examples.

Exercise: Give a basis for $\mathbb{R}^2$.

Answer: We can take
e_1 := \begin{bmatrix} 1 \\ 0 \end{bmatrix} \quad \textrm{and} \quad e_2 := \begin{bmatrix} 0 \\ 1 \end{bmatrix}
as a basis for $\mathbb{R}^2$. Then any vector $v = [v_1, v_2]^T \in \mathbb{R}^2$ is by default a linear combination
v = v_1 \begin{bmatrix} 1 \\ 0 \end{bmatrix} + v_2 \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \sum \limits _{k=1}^{2} v_k e_k
This choice of basis is not unique, as we will see later.

Take note of how we have written a summation to express $v$ in terms of the basis. This way of writing vectors will show up a lot.

Whenever we have talked of vectors, we have been implicitly using a basis – in fact, the standard basis $\{e_k\}$, whose elements are defined as vectors that are zero in all entries except the $k$-th one. Thus even a vector like $v = [3, 4, 5]^T$ is actually $v = 3e_1 + 4e_2 + 5e_3$ where

e_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \quad e_2 = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \quad e_3 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}
Therefore, $[3, 4, 5]^T$ is shorthand specifying what weights we must use to sum the $e_k$ such that they add to $v$!

We are always using some basis (even if implicitly) when we express vectors in their coordinates.

As mentioned above, the choice of the basis vector set $B$ is not unique. It just so happens that the set $\{e_k\}_{k=1}^n$ as the basis for $\mathbb{R}^n$ is extremely convenient. The construction below shows how the same point in $\mathbb{R}^2$ can be represented in two different bases.

The black axes represent the standard basis $e_1$ and $e_2$. The blue pair of rays represent the directions given by a new set of basis vectors $B$ (in blue) that you can choose by entering the vectors $a$ and $b$ into the input boxes. Since we are in 2D, or equivalently $\mathbb{R}^2$, we need a pair of basis vectors $B=\{a, b\}$, and each vector has two entries, $a=[a_x, a_y]^T$ and $b=[b_x, b_y]^T$.

Then moving the blue point around shows how it is represented in coordinates that utilize the usual basis (in black) and the newly chosen basis (in blue). Thus the blue coordinates & the black coordinates represent the same vector in $\mathbb{R}^2$ but with different sets as the basis.

Answer: We can write:
v = v_1 b_1 + \left(v_2 - v_1 \right)b_2
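To make this concrete, here is a minimal sketch in Python with NumPy (used here purely for illustration; CSSB's code is in MATLAB) that finds the coordinates of a point in a new basis by solving a small linear system. The particular choice $b_1 = [1, 1]^T$ and $b_2 = [0, 1]^T$ is an assumed example; it happens to reproduce the expression above.

```python
import numpy as np

v = np.array([3.0, 4.0])          # coordinates of a point in the standard basis {e1, e2}

# An assumed alternative basis for R^2; any two linearly independent vectors work
b1 = np.array([1.0, 1.0])
b2 = np.array([0.0, 1.0])
B = np.column_stack([b1, b2])     # columns of B are the new basis vectors

# Solve B @ c = v for the coordinates c of the same point in the new basis
c = np.linalg.solve(B, v)
print(c)                          # [3. 1.]  i.e.  v = v1*b1 + (v2 - v1)*b2 = 3*b1 + 1*b2

# Sanity check: both coordinate vectors describe the same point
assert np.allclose(c[0] * b1 + c[1] * b2, v)
```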

This non-uniqueness of the basis set is not a peculiar feature of just $\mathbb{R}^2$ or $\mathbb{R}^n$.

Exercise: What are the coordinates of $f(x) = (1+x)^3$ in the monomial basis $M_3$?

Answer: The coordinates of $f(x)$ are found by expanding $f(x) = x^3 + 3x + 3x^2 + 1$. The set $M_3$ by definition is:
M_3 = \left\{1, x, x^2, x^3 \right\}
Since $f(x) = \textcolor{blue}{1} \cdot 1 + \textcolor{blue}{3} \cdot x + \textcolor{blue}{3} \cdot x^2 + \textcolor{blue}{1} \cdot x^3$ we can say the coordinates are $v = [1, 3, 3, 1]$ in the basis given by $M_3$. If we use $m_k$ to represent the elements $x^k$ of $M_3$ then we can say:
f(x) = (1 + x)^3 \quad \Leftrightarrow \quad f(x) = \sum \limits _{k=0}^3 v_k m_k
where again we have written $f$ as a summation of the basis elements.
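As a quick numerical cross-check of this expansion, one can let NumPy expand $(1+x)^3$ in the monomial basis. This is a sketch for illustration only; the function below is NumPy's, not part of the lecture material.

```python
import numpy as np

# Coefficients of (1 + x)^3 in the monomial basis, lowest degree first ([1, 1] represents 1 + x)
coeffs = np.polynomial.polynomial.polypow([1, 1], 3)
print(coeffs)                                          # [1. 3. 3. 1.]  ->  v = [1, 3, 3, 1]
```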

As shown above, $P_n(x)$ (the space of polynomials of degree at most $n$) has the familiar basis $\{1, x, x^2, x^3, \dots, x^n\}$ that we are used to seeing. But we can also find another basis set. For example, consider the polynomials in $x$ given by

L_n(x):= \dfrac{1}{2^n n!} \dfrac{d^n}{dx^n} \left( x^2 - 1 \right)^n \qquad n = 0, 1, 2, \dots

One can show that the set $\{L_k(x)\}_{k=0}^n$ is also a basis for $P_n(x)$; these $L_k$ are known as the Legendre polynomials.

Exercise: Express $f(x) = (1+x)^3$ in the basis $\{L_k(x)\}$.

Answer: Since $f(x) \in P_3(x)$ we write out the relevant basis vectors:
\begin{aligned} L_0 &= 1, \qquad L_1 = x, \qquad L_2 = \dfrac{1}{2} \left(3x^2 -1 \right), \\ L_3 &= \dfrac{1}{2} \left(5 x^3 - 3x \right) \end{aligned}
One can verify that $f(x)$ expressed as a vector in this new basis is $v = [2, 18/5, 2, 2/5]$. That is
f(x) = \textcolor{blue}{2} \cdot L_0(x) + \textcolor{blue}{18/5} \cdot L_1(x) + \textcolor{blue}{2} \cdot L_2(x) + \textcolor{blue}{2/5} \cdot L_3(x) \quad \Leftrightarrow \quad f(x) = \sum \limits _{k=0}^3 v_k L_k(x)

Be sure to work this out: verify (a) the application of the defining equation to write out the basis vectors, as well as (b) the fact that $(1+x)^3$ is indeed $[2, 18/5, 2, 2/5]$ in this basis.
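If you want a numerical cross-check after working it out by hand, NumPy's Legendre utilities convert between the monomial and Legendre bases. This is a verification sketch only, assuming the `numpy.polynomial.legendre` module.

```python
import numpy as np

mono = [1, 3, 3, 1]                             # (1 + x)^3 = 1 + 3x + 3x^2 + x^3, lowest degree first
leg = np.polynomial.legendre.poly2leg(mono)     # coordinates in the Legendre basis {L_0, ..., L_3}
print(leg)                                      # [2.  3.6 2.  0.4]  i.e. [2, 18/5, 2, 2/5]

# Round trip back to the monomial basis as a sanity check
print(np.polynomial.legendre.leg2poly(leg))     # [1. 3. 3. 1.]
```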

Why do we care?

Apart from providing a preview of what will be a major & crucial topic in any course on linear algebra[2], the main takeaway here is that representing a problem or situation in a different basis can often make things simpler or provide more insight into a problem. For example, polar & spherical coordinates often simplify problems in physics, as you may recall.

Indeed, many important techniques you will learn, including Principal Components Analysis, Singular Value Decomposition, eigendecompositions etc. are all essentially a change of basis to some particularly useful basis vectors. Moreover, change of basis calculations form an extremely important part of various analytical & problem solving techniques including solving differential equations, implementing matrix multiplications, etc.

More pertinently for us, recall that we can think of signals and functions as being elements of certain vector spaces. It turns out that in the continuous domain, the analogue of the change of basis we discussed above is the integral transform … but more on that later.

Fourier Transform(s)

CSSB talks about four different "transforms" in the forward and reverse directions[3]. They are listed as:

| Transform name | Applicability | Acronym | Implementation/Usage |
| --- | --- | --- | --- |
| Discrete Fourier Transform | applicable to periodic, discrete signals | DFT | all real-world applications |
| Fourier Series | applicable to periodic, continuous signals | FS | analytical computations |
| Fourier Transform | applicable to aperiodic, continuous signals | FT | analytical computations |
| Discrete Time Fourier Transform | theoretically applicable to aperiodic, discrete signals | DTFT | not implementable in practice |

Please take care to use the right name because each of the techniques has a slightly different area of applicability, as listed in the table. In the words of one bestselling author:

You might be thinking that the names given to these four types of Fourier transforms are confusing and poorly organized. You're right; the names have evolved rather haphazardly over 200 years. There is nothing you can do but memorize them and move on.
~ S. W. Smith

We will start with the DFT above, and discuss each in turn, though we may not follow the unwieldy notation used in CSSB. We start with the DFT because it is directly related to the discussions we have had above.

Discrete Fourier Transform (DFT)

Given one period of a discrete periodic signal $f[n]$, as a vector of length $N$, we define its DFT as the sequence of $N$ complex numbers given by:

F[k] := \dfrac{1}{\sqrt{N}} \sum \limits _{n=0} ^{N-1} f[n] e^{-i n \omega_k} \quad \textrm{where} \quad \omega_k := \dfrac{2 \pi k}{N} \quad \textrm{and} \quad k=0, 1, \dots, N-1 \tag{2}

Note the similarity of the above summation with those we have been using to express vectors in terms of different bases. This is no coincidence, because the inverse DFT is:

f[n] := \dfrac{1}{\sqrt{N}} \sum \limits _{k=0}^{N-1} F[k] e^{i n \omega_k} \quad \textrm{for} \quad n=0, 1, 2, \dots, N-1 \tag{3}

We finally hit pay dirt after mining through all the math so far.

Key point: One can view the (forward) DFT as computing the coefficients $F[k]$ we need to express the original discrete signal $f[n]$ in terms of a new basis of complex exponentials.

One naturally wonders why we would bother to write a signal in terms of complex exponentials. Here is where we tie up another thread we introduced a couple of lectures back via Euler's formula: complex exponentials are sinusoids!

Thus in the above:

e^{in\omega_k} = \cos(n\omega_k) + i \sin(n\omega_k)

One should think of this as evaluating a sine and cosine of frequency $\omega_k$ at time $n$.

Note that the summation in (3) is over $k$: we are therefore expressing the original signal as a summation of sinusoids of different frequencies! This is useful because sinusoids are, in a sense, the "nicest" periodic functions[4].

⚠️ Note
  1. Recall from Lecture 03 that a general sinusoid can be written in terms of pure sines & cosines without phases. Therefore, rather than having to find the coefficient for the sine part and the coefficient for the cosine part separately (as we will in the next lecture), the complex formulation above does it in one shot.

  2. We have used square brackets $[\,\cdot\,]$ in the above in the usual sense for discrete signals, but also pay close attention to the use of $F$ vs. $f$ (one refers to the signal and the other to its transform).

  3. The normalization factor we have used above is $\sqrt{N}$, which lends credence to our interpretation as a change of basis. CSSB and most other texts/implementations present an unnormalized DFT and an iDFT normalized by $N$; it changes things very little as long as one is consistent. A short sketch after this note compares our convention with NumPy's unnormalized FFT.
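Here is a minimal sketch (Python/NumPy, for illustration; CSSB's code is in MATLAB) of the DFT exactly as defined in (2), compared against `np.fft.fft`, which computes the same sum but applies no normalization on the forward transform. The test signal is an arbitrary assumed vector.

```python
import numpy as np

def dft(f):
    """DFT as in eq. (2): F[k] = (1/sqrt(N)) * sum_n f[n] * exp(-i*n*omega_k)."""
    N = len(f)
    n = np.arange(N)
    k = n.reshape(-1, 1)
    W = np.exp(-1j * 2 * np.pi * k * n / N)   # e^{-i n omega_k} with omega_k = 2*pi*k/N
    return (W @ f) / np.sqrt(N)

f = np.array([1.0, 2.0, 0.0, -1.0])           # one period of some discrete signal
F = dft(f)

# np.fft.fft leaves the forward sum unscaled, hence the extra 1/sqrt(N) here
assert np.allclose(F, np.fft.fft(f) / np.sqrt(len(f)))
```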

Fourier's great insight was that, theoretically, any periodic function can be expressed as a summation of sines & cosines plus a constant. In practice though, there are a few caveats. For example, the function to be expressed needs to be "nice" in certain mathematical terms. Sometimes the summation needed might be an infinite one, which we cannot do on a computer. The demonstration below illustrates this with a square wave. The first collection of plots (of different colors) shows the sinusoids of five different frequencies being added together to create the plot immediately below it (green). The green plot can be changed by using the checkboxes to add/remove the sine wave of a particular frequency. The last plot (brown) shows what happens when you increase the number of frequencies being summed together (using the slider). At its maximum value, we see that we get a very good approximation of a square wave.
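The same idea can be reproduced non-interactively with a few lines of code. The sketch below (Python/NumPy, for illustration) sums odd harmonics of an assumed fundamental period to approximate a unit square wave; increasing `n_harmonics` plays the role of the slider.

```python
import numpy as np

T = 1.0                                   # assumed period, for illustration
w0 = 2 * np.pi / T                        # fundamental frequency
t = np.linspace(0, 2 * T, 1000)           # two periods of time samples

def square_wave_partial_sum(t, n_harmonics):
    """Partial Fourier sum of a +/-1 square wave: (4/pi) * sum over odd m of sin(m*w0*t)/m."""
    s = np.zeros_like(t)
    for m in range(1, 2 * n_harmonics, 2):        # odd harmonics 1, 3, 5, ...
        s += (4 / np.pi) * np.sin(m * w0 * t) / m
    return s

coarse = square_wave_partial_sum(t, n_harmonics=5)    # visibly wiggly approximation
fine = square_wave_partial_sum(t, n_harmonics=50)     # much closer to a square wave
```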

In the above definition we took a complete period of the signal $f[n]$ when we computed its transform. This is an implicit assumption whenever we perform the DFT. In practice, it is rarely the case that we will have exactly one complete period of the signal we are performing the DFT on, which leads to some artifacts … These will be the subject of subsequent lectures.

Note that while we discuss other flavors of the transform next, the only one we can actually implement and perform on real-world signals with a computer is the DFT. Nevertheless, the other transforms are important to know for purposes of analysis and problem solving, and we look at these next.

Exercise: Show that the forward and inverse DFTs, (2) and (3), are indeed inverses of each other.

Answer: This is an exercise that every engineer must do at least once. Hence it is a homework problem, but here are some hints:

To show they are inverses one should go back and forth between the equations – put (2) in (3) and get the identity, & vice versa. However, the ideas are essentially the same, so it suffices to just do one. To start, note that in (2) the $n$ is a dummy variable used only for summation and is distinct from the $n$ in (3), where it refers to the $n$-th coordinate of $f$. So start by rewriting the forward transform as:

F[k] = \dfrac{1}{\sqrt{N}} \sum \limits _{m=0} ^{N-1} f[m] e ^{-im \omega_k}
Plug this into (3) to get $\frac{1}{N}$ times a double sum. The term that will need to be addressed is the $e^{i\omega_k(n-m)}$ term. Show that the sum $\sum_k e^{i \omega_k (n-m)}$ reduces to $N$ if $m=n$ and zero otherwise. For this step, using the definition of $\omega_k$ and recalling the formula for the sum of a geometric series:
\sum \limits _{j=1} ^p ar^{j-1} = \dfrac{a(1-r^p)}{1-r}
may be helpful. Hint: $e^{2\pi i k /N} = (e^{2 \pi i /N})^k$.

Finally, show that the leftover sum $\sum_m f[m] \left( \dots \right)$ is therefore simply $f[n]$, since the only nonzero term arises when $m=n$.
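A purely numerical sanity check of the same fact (not a substitute for the algebra above) is to apply (2) and then (3) to a random vector and confirm the original signal comes back. A sketch assuming NumPy, with an arbitrary test signal:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(8)                  # an arbitrary "signal" of length N = 8

N = len(f)
n = np.arange(N)
k = n.reshape(-1, 1)
W = np.exp(-1j * 2 * np.pi * k * n / N)     # matrix of e^{-i n omega_k}

F = (W @ f) / np.sqrt(N)                    # forward DFT, eq. (2)
f_back = (W.conj().T @ F) / np.sqrt(N)      # inverse DFT, eq. (3)

assert np.allclose(f_back, f)               # recovers f[n] up to floating-point error
```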

Fourier Series

In the above discussion, we had a discrete periodic signal to begin with. What happens if instead we have a continuous periodic signal? We no longer have any indices to sum over, so we should adjust our equations a bit. While in terms of implementation everything in a computer relies on the DFT, conceptually we tend to think in terms of continuous signals, and so this is a practical concern.

❗ Caution
Mathematically speaking, the interpretation that we just made, i.e. viewing Fourier transforms and expansions as a change of basis, might seem fraught with difficulties beyond the DFT; but for our purposes in this course – and for almost all engineering purposes – this interpretation serves just fine.

To go from discrete to continuous, recall that we summed over a complete period of the signal. For continuous signals the summation becomes an integration, and just like before we integrate over a full period $T$. Next, the summation variable $n$ turns into the integration variable $t$. Moreover, as we replace $f[n] \mapsto f(t)$ we also do:

\sum \limits _{n=0} ^{N-1} \dots \quad \mapsto \quad \int \limits _0 ^T \dots \qquad \textrm{and} \qquad e^{-i n \omega_k } \quad \mapsto \quad e^{-i \omega_k t}

Thus we get that the Fourier series corresponding to a periodic signal $f(t)$ over one period $[0, T]$ is given by:

F[k] = \dfrac{1}{T} \int \limits _{0} ^T f(t) e^{-i k \omega_0 t} dt \quad \textrm{where} \quad \omega_0 := \dfrac{2\pi}{T} \quad \textrm{and} \quad k=0, \pm 1, \pm 2, \dots \tag{4}

where we have normalized by the period $T$. The quantity $\omega_0$ should be familiar to us from our discussion of periodic signals; in the context of Fourier analysis it is called the fundamental frequency. Moreover, in the literature it is common to call (4) and (2) the forward or analysis equations, and the inverse transform, as in (3), the synthesis equation, because it synthesizes our signal out of sinusoids. Thus, for a continuous periodic signal, the synthesis equation is given by:

f(t) = \sum \limits _{k=-\infty} ^{\infty} F[k] e^{i k \omega_0 t} \quad \textrm{for} \quad t \in [0, T] \tag{5}

Note that the synthesis equation is still a summation, and the analysis equation generates an infinite sequence of Fourier coefficients $F[k]$. This explains why we call it the Fourier series. Ideally, we should have called the DFT the Discrete Fourier Series but ... oh well 🤷.
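In practice, Fourier series coefficients of a continuous periodic signal are often approximated by sampling one period and applying the DFT, which is just a Riemann sum of the integral in (4). A sketch assuming NumPy, with an assumed period and a simple test signal whose coefficients we know analytically:

```python
import numpy as np

T = 2.0                                    # assumed period, for illustration
w0 = 2 * np.pi / T
N = 64                                     # number of samples over one period
t = np.arange(N) * T / N

f = 1 + 2 * np.cos(w0 * t)                 # = 1 + e^{i w0 t} + e^{-i w0 t}, so F[0] = F[1] = F[-1] = 1

# Riemann sum of (4): (1/T) * sum_n f(t_n) * e^{-i k w0 t_n} * (T/N) = (1/N) * unnormalized DFT
F = np.fft.fft(f) / N

print(np.round(F[:3].real, 6))             # [1. 1. 0.]  ->  F[0], F[1], F[2]
print(np.round(F[-1].real, 6))             # 1.0         ->  F[-1], stored at index N-1
```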

Obviously, only a fraction of the signals we see in real life are periodic. In our discussion so far, we have assumed the signals to be periodic regardless of whether they are discrete or continuous. Next, we will try to relax this assumption.

Fourier Transform

As stated previously in Lecture 02, one convenient way to think of aperiodic signals is to treat them as signals whose period $T \to \infty$. To extend the equations above to signals that are not periodic, this is exactly what we do. We let $T$ go to $\infty$ and let the machinery of limits take over. We don't work this out here, but one can easily see that as $T \to \infty$ we have $\omega_0 \to 0$. Yet, in the integration, we have $k \omega_0$ with $k= 0, \pm 1, \pm 2, \dots$ all the way to $\infty$. Thus the term $k \omega_0$ does not go to zero; rather, it becomes a continuous variable $\omega$, and we perform the necessary integration over the whole real line. Then we have for the forward equation:

F (\omega) = \dfrac{1}{\sqrt{2 \pi}} \int \limits _{-\infty}^{\infty} f(t) e^{-i\omega t} dt \tag{6}

In tandem, since $k \omega_0$ became a continuous variable $\omega$, it is now necessary to replace the sum in (5) with an integral. Therefore we get the synthesis equation:

f(t) = \dfrac{1}{\sqrt{2 \pi}} \int \limits _{-\infty} ^{\infty} F(\omega) e^{i \omega t } d\omega \tag{7}

Intuitively, one can think of it like this: if a periodic signal (like the square wave) requires an infinite summation of sines and cosines of discrete frequencies to represent it, then an aperiodic signal requires even more – a summation over a continuum of frequencies; in fact, an integral.
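To make (6) tangible, here is a sketch (Python/NumPy, for illustration) that approximates the Fourier transform of $f(t) = e^{-|t|}$ by a finite Riemann sum over a wide time window. Under the symmetric $1/\sqrt{2\pi}$ convention used above, the analytical answer is $F(\omega) = \sqrt{2/\pi}\,/(1+\omega^2)$; the window length and step size below are assumed values chosen only so the truncated sum is accurate.

```python
import numpy as np

t = np.linspace(-50.0, 50.0, 200001)       # a wide, finely sampled window stands in for (-inf, inf)
dt = t[1] - t[0]
f = np.exp(-np.abs(t))                     # f(t) = e^{-|t|}

def fourier_transform(w):
    """Riemann-sum approximation of eq. (6) at a single frequency w."""
    return np.sum(f * np.exp(-1j * w * t)) * dt / np.sqrt(2 * np.pi)

for w in [0.0, 1.0, 2.0]:
    numeric = fourier_transform(w).real
    analytic = np.sqrt(2 / np.pi) / (1 + w**2)
    print(w, numeric, analytic)            # the two values agree to several decimal places
```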

⚠️ Note
You might be wondering about the sudden appearance of a normalization term in (6) and (7). The short answer is that sines/cosines are naturally $2\pi$-periodic and the function we are transforming does not have a natural period. It is an annoying fact of life that different sources use different normalizations, or sometimes none at all (e.g. MATLAB). Here, we have once again chosen the symmetric convention, amongst other things. The point is to pick one style and stick to it consistently.

Wrap up

At this point, it might be instructive to take a minute and recall that we remarked in Lecture 01 that signals admit equivalent time and frequency domain representations, illustrated by a figure showing the two representations side by side. It is precisely via the Fourier transform that we get the frequency domain representation. Each coordinate in the frequency-domain representation gives us three things: a frequency, a phase and a magnitude corresponding to a single sinusoid in the (possibly infinite) summation required to represent the signal.
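As a concluding sketch (Python/NumPy, for illustration), the snippet below samples a single assumed sinusoid and reads its frequency, magnitude and phase back off the DFT coefficients, which is exactly the frequency-domain description referred to above.

```python
import numpy as np

fs = 100.0                                         # assumed sampling rate (Hz)
t = np.arange(0, 1, 1 / fs)                        # one second of samples
x = 2.5 * np.cos(2 * np.pi * 5 * t + np.pi / 4)    # 5 Hz sinusoid, amplitude 2.5, phase pi/4

X = np.fft.fft(x)
freqs = np.fft.fftfreq(len(x), d=1 / fs)

k = np.argmax(np.abs(X[: len(x) // 2]))            # index of the dominant (positive) frequency
print(freqs[k])                                    # ~5.0   (frequency, Hz)
print(2 * np.abs(X[k]) / len(x))                   # ~2.5   (magnitude/amplitude)
print(np.angle(X[k]))                              # ~0.785 (phase, i.e. pi/4 rad)
```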

We will skip the DTFT because of its limited practical application and in the next lecture we will arrive at equations equivalent to the ones shown in this lecture but without using imaginary numbers and complex mathematics.

In this lecture note, the intention was to provide a bird's-eye view of the theory of Fourier transforms and how it fits into the greater scheme of things. We have neglected to discuss implementations or calculations; both, hopefully, will be addressed by homework, and the latter in particular we will rectify with some worked-out examples.


[1] No assumption is made that you have taken that class; if you are concurrently registered you will see the field axioms this semester. If not, ignore the statement.
[2] Which BIOE205 isn't, so we must wrap up the discussion.
[3] As you will see on this page, some of the "transforms", especially the discrete ones, don't actually involve integrals, but it is common to refer to them all as transforms anyway.
[4] For example, they are infinitely differentiable, admit Taylor series expansions, are orthogonal, etc.

CC BY-SA 4.0 Ivan Abraham. Last modified: February 05, 2023.