BIOE 205

Lecture 13

Reading material: Chapter 7 of CSSB

Last lecture we introduced the Laplace transform as a generalization of the Fourier transform that is applicable to signals whose Fourier transform does not exist. In this lecture we will spend a great deal of time learning the Laplace transforms of basic signals and how different mathematical operations translate to the Laplace domain. We will build a repertoire of common Laplace and time-domain pairs and then see how we can use it to solve both ordinary differential equations and initial value problems.

  1. Basic Laplace transforms
    1. Mathematical operations & Laplace transforms
    2. Solving convolutions
  2. General method of partial fractions
    1. Partial fractions in software
  3. Solving ODE's in the presence of nonzero initial conditions

Basic Laplace transforms

In the last lecture we introduced the Laplace transform as a necessary generalization of the Fourier transform integral that is able to accommodate a wider class of signals. We showed this by computing the Laplace transform of a hyperbolic sine, for which the Fourier transform does not exist. However, there is a far simpler and more ubiquitous signal whose Fourier transform doesn't exist but whose Laplace transform does. This is the so-called "on" signal or step signal (recall Lecture 04) defined as

u(t) = \begin{cases} 0, & t < t_0 \\ c, & t \geq t_0 \end{cases}

A simple exercise shows that the Laplace transform of the unit step input (c = 1, t_0 = 0) is given as \mathcal{L}\left(u(t)\right) = 1/s.

Solution: Left as an exercise.
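If you want to sanity-check the result numerically before working the integral by hand, a few lines of Python suffice. This is only a sketch using the standard library; `laplace_num` is a throwaway helper defined here, not a library function.

```python
import math

def laplace_num(x, s, T=40.0, n=200_000):
    """Midpoint-rule approximation of the one-sided Laplace
    transform integral int_0^T x(t) exp(-s*t) dt."""
    h = T / n
    return h * sum(x((k + 0.5) * h) * math.exp(-s * (k + 0.5) * h)
                   for k in range(n))

u = lambda t: 1.0  # unit step (c = 1, t0 = 0), restricted to t >= 0
for s in (0.5, 1.0, 3.0):
    assert abs(laplace_num(u, s) - 1.0 / s) < 1e-4
```

Truncating the integral at T = 40 is safe here because e^{-sT} is negligible at the sampled values of s; for s very close to 0 the integral converges slowly and a larger T would be needed.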

However, if the only advantage of defining the Laplace transform were to make some tough integrals converge, its utility would be offset by the loss in interpretability: the Fourier transform can be interpreted as a change of basis for time-domain signals that exposes the underlying frequency contributions in their constitution, but no such easy-to-intuit explanation exists for the Laplace transform. Nevertheless, we have gained far more utility with this generalization, as we will shortly see.

Mathematical operations & Laplace transforms

Consider the derivative of a function x(t) given as

x'(t) = \dfrac{d}{dt} x(t)

and consider its Laplace transform (applying integration by parts):

\mathcal{L}\left(x'(t)\right) = \int \limits _{0} ^{\infty} \dfrac{dx(t)}{dt} e^{-st} dt = \left. x(t) e^{-st}\right|_{0}^{\infty} + s \int \limits _{0} ^{\infty} x(t) e^{-st} dt
The integral on the right is nothing but the Laplace transform of x(t) (which we will denote by X(s)). Applying the limits to the boundary term then gives

\mathcal{L}\left[\dfrac{dx(t)}{dt}\right] = s X(s) - x(0^{-})

where x(0^{-}) denotes the value of the signal at time t=0. The superscript in t=0^{-} indicates that the negative-time history of the signal up until t=0 has been lumped together into the value at t=0. When the initial condition for the signal is zero (as is often the case) this extra term disappears.

⚠️ Note
Note: Indeed, it is precisely through this extra term that we will be able to incorporate nonzero initial conditions in our set up and solve initial value problems or IVPs.
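The derivative rule itself can be spot-checked the same way as before. Below is a small numerical sketch (standard-library Python only; `laplace_num` is an ad-hoc helper, and x(t) = e^{-3t} is an arbitrary test signal chosen here):

```python
import math

def laplace_num(x, s, T=40.0, n=200_000):
    # midpoint-rule approximation of int_0^T x(t) exp(-s*t) dt
    h = T / n
    return h * sum(x((k + 0.5) * h) * math.exp(-s * (k + 0.5) * h)
                   for k in range(n))

x  = lambda t: math.exp(-3 * t)       # test signal with x(0^-) = 1
dx = lambda t: -3 * math.exp(-3 * t)  # its derivative

s = 2.0
lhs = laplace_num(dx, s)            # L{dx/dt}
rhs = s * laplace_num(x, s) - x(0)  # s X(s) - x(0^-)
assert abs(lhs - rhs) < 1e-4        # both equal -3/5 here
```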

Thus, as must be true of any generalization, results established for the mathematical object being generalized are upheld. The precise result preserved here is that the Laplace-domain representation of the impulse function is still 1. Indeed, recall that the derivative of the step function is the impulse function; multiplying \mathcal{L}(u(t)) = 1/s by s gives s \times 1/s = 1.

Answer: Full answer is skipped. Indeed, repeated differentiation is simply repeated multiplication by s, but we leave it as an exercise to figure out what happens to the initial conditions in this case.
⚠️ Note
Note: Don't skip the exercise if you want to understand how to solve IVPs of higher orders, say for example a second order system.

Now if differentiation in the time domain is represented in the Laplace domain as multiplication by s, then one can guess that since the natural counterpart of differentiation is integration, multiplication must be replaced by ... you guessed it: division! Isn't mathematics beautiful & orderly? In fact, that particular bit of math works out to:

\mathcal{L} \left[ \int \limits _{0} ^{t} x \left( \tau \right) d\tau \right] = \dfrac{1}{s} X(s) + \dfrac{1}{s} \int \limits_{-\infty} ^{0} x(t) dt

Therefore, in the presence of all-zero initial conditions we can equate integration in the time domain with division by s in the Laplace domain, often called multiplication by \dfrac{1}{s} for obvious reasons.

Solution: Recall that integrating the step function gives rise to the ramp function. Thus, the Laplace transform of the ramp function is the Laplace transform of the step function divided by s, i.e. \dfrac{1}{s^2}.
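As a numerical cross-check of this solution (a sketch with an ad-hoc midpoint-rule integrator, not library code):

```python
import math

def laplace_num(x, s, T=60.0, n=300_000):
    # midpoint-rule approximation of int_0^T x(t) exp(-s*t) dt
    h = T / n
    return h * sum(x((k + 0.5) * h) * math.exp(-s * (k + 0.5) * h)
                   for k in range(n))

ramp = lambda t: t  # integral of the unit step
for s in (0.8, 1.0, 2.5):
    assert abs(laplace_num(ramp, s) - 1.0 / s**2) < 1e-3
```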

Solving convolutions

Another major result that we had when we learnt about Fourier analysis was the fact that convolutions in the time domain became multiplications in the frequency domain. Once again, as should be true of any generalization, this result remains unchanged in the Laplace domain. In other words

\mathcal{L}\left[f(t) \star g(t)\right] = F(s) G(s)

Example

Let us try to solve the following convolution using the Laplace domain.

The functions in the above plot are as follows:

h(t) = u(t)-u(t-1) \qquad \textrm{and} \qquad f(t) = e^{-2t} \quad \textrm{for} \quad t \ge 0

Then we have that

\mathcal{L} \left( h(t) \right) = \frac{1}{s} - e^{-s}\frac{1}{s}
and
\mathcal{L} \left( f(t) \right) = \frac{1}{s+2}

Verify that the above claims are true! In particular, we made use of the following result:
\mathcal{L}\left[x(t-T)\right] = e^{-sT}\mathcal{L}\left[x\left(t\right)\right]
which you should be able to prove.
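The shift property is also easy to spot-check numerically before proving it (a sketch; the midpoint-rule helper and the test signal e^{-2t} are arbitrary choices made here):

```python
import math

def laplace_num(x, s, T=40.0, n=200_000):
    # midpoint-rule approximation of int_0^T x(t) exp(-s*t) dt
    h = T / n
    return h * sum(x((k + 0.5) * h) * math.exp(-s * (k + 0.5) * h)
                   for k in range(n))

x = lambda t: math.exp(-2 * t)
# x(t - 1), treating x as causal (zero for negative arguments)
x_shifted = lambda t: math.exp(-2 * (t - 1.0)) if t >= 1.0 else 0.0

s = 1.5
assert abs(laplace_num(x_shifted, s)
           - math.exp(-s) * laplace_num(x, s)) < 1e-3
```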

Therefore

f(t) \star h(t) \implies F(s) H(s) = \left( \frac{1}{s} - e^{-s}\frac{1}{s} \right) \frac{1}{s+2} = \frac{1}{s(s+2)} - e^{-s}\frac{1}{s(s+2)}

Now to find the result y(t) = f(t) \star h(t) we need to take the inverse Laplace transform of F(s)H(s), which we can do using Laplace transform tables if we can find the above functions in them. Often tables of Laplace transforms only list basic generic forms, and the onus is on us to convert our functions into the forms given in the tables.

One particularly useful technique here is breaking apart complex rational fractions like the one above into simpler constituents. This is called the technique of partial fractions, a powerful and very useful one to have in our repertoire. Here we will show the technique in action for now but relegate the general treatment to the next section.

Consider:

\dfrac{1}{s(s+2)} = \dfrac{A}{s} + \dfrac{B}{s+2}

where we have made the assumption that we can rewrite the rational form on the left using two constants A and B on the right. A little manipulation of the quantity on the right and then equating the numerators gives:

\dfrac{1}{s(s+2)} = \dfrac{A(s+2)+Bs}{s(s+2)} \quad \implies \quad A(s+2) + Bs = 1

The equation on the extreme right should hold for all values of s. In particular, plugging in s=0 and s=-2 we can easily get:

2A = 1 \quad \implies \quad A = 1/2

and

-2B = 1 \quad \implies \quad B = -1/2

Therefore,

\dfrac{1}{s(s+2)} = \dfrac{1}{2s} - \dfrac{1}{2(s+2)}
These fractions can be commonly found in tables: for example here.

Taking the inverse Laplace transform gives:

y(t) = \frac{1}{2}\left(1- e^{-2t}\right) - u(t-1)\,\frac{1}{2}\left(1- e^{-2(t-1)}\right)

which we plot below along with the original functions.

Exercise: Verify that the above claim is true using a Laplace transform table.
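One can also confirm the closed form for y(t) by convolving f and h directly on a time grid and comparing (a brute-force numerical sketch; the grid spacing and tolerance are arbitrary choices made here):

```python
import math

dt = 0.001
N  = 4000  # covers t in [0, 4)
f = [math.exp(-2 * k * dt) for k in range(N)]         # f(t) = e^{-2t}
h = [1.0 if k * dt < 1.0 else 0.0 for k in range(N)]  # h(t) = u(t) - u(t-1)

def conv_at(n):
    # Riemann-sum approximation of (f * h)(n*dt)
    return dt * sum(f[k] * h[n - k] for k in range(n + 1))

def y_exact(t):
    # the inverse transform obtained above via partial fractions
    y = 0.5 * (1 - math.exp(-2 * t))
    if t >= 1:
        y -= 0.5 * (1 - math.exp(-2 * (t - 1)))
    return y

for t in (0.5, 1.0, 2.0, 3.0):
    n = int(round(t / dt))
    assert abs(conv_at(n) - y_exact(t)) < 1e-2
```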

Now we turn our attention to the general procedure for finding partial fraction expansions (PFE).

General method of partial fractions

⛔ Warning!
Note: Skip the exercises and verification steps in the following section at your own risk: PFEs tend to be popular on exams, and if you have been sourcing rather than solving homework, or have become accustomed to using MATLAB all the time, you need to work out every step here.

The basic objective in the method of partial fractions is to decompose a proper rational fraction (a rational fraction is proper only if the degree of the numerator is lower than the degree of the denominator) into simpler constituents. Here is the general strategy:

  1. Start with a proper rational fraction. If improper, then perform polynomial long division first and focus on the part that is proper.

  2. Factor the denominator into linear and irreducible higher-order terms. A term or factor is called irreducible if it cannot be factored further using rational numbers: e.g. s^2+36s+25 is irreducible because factoring it requires writing:

    s^2 + 36s + 25 = \left(s + 18 - \sqrt{299}\right)\left(s + 18 + \sqrt{299}\right)

    which involves an irrational: \sqrt{299}.

  3. Next, write out a sum of partial fractions for each factor (and every exponent of each factor) using the forms/rules for PFEs (see table below).

  4. Multiply the whole equation by the bottom/denominator and solve for the coefficients.

The form of the partial fraction written in Step 3 depends on the form of the factor in the denominator as elucidated in the following table:

| Type | Partial Fraction Decomposition |
| --- | --- |
| Non-repeated linear factor | \dfrac{A_1}{ax+b} |
| Repeated linear factor | \dfrac{A_1}{ax+b} + \dfrac{A_2}{(ax+b)^2} + \dots + \dfrac{A_n}{(ax+b)^n} |
| Non-repeated quadratic factor | \dfrac{B_1x+C_1}{ax^2+bx+c} |
| Repeated quadratic factor | \dfrac{B_1x+C_1}{ax^2+bx+c} + \dfrac{B_2x+C_2}{(ax^2+bx+c)^2} + \dots + \dfrac{B_nx+C_n}{(ax^2+bx+c)^n} |

As you can see, when we have a repeated factor we have to write a partial fraction for it as many times as it repeats (with different powers as well). We now do an example to illustrate the above steps.

Partial fractions example:

Find the partial fraction decomposition of:

\dfrac{3s+1}{\left(2s-1\right)\left(s+2\right)^2}

Solution: Fortunately the denominator is already factored for us and consists of a twice-repeated linear term and a non-repeated linear term. So we have:

\dfrac{3s+1}{\left(2s-1\right)\left(s+2\right)^2} = \dfrac{A_1}{2s-1} + \dfrac{A_2}{s+2} + \dfrac{A_3}{(s+2)^2}
Multiply throughout by (2s-1)(s+2)^2 to get rid of the denominator. We get

3s+1 = A_1(s+2)^2 + A_2(s+2)(2s-1) + A_3(2s-1)

Now we can solve for the uppercase coefficients by plugging in different values of s (because the equation should hold for any value of s).

For example, plugging in s=-2 kills the A_1 and A_2 terms:

3(-2) + 1 = A_3\left(2(-2) - 1\right) \quad \implies \quad -5 = -5A_3

so A_3 = 1. Plugging in s=1/2 similarly gives A_1 = 2/5, and comparing the coefficients of s^2 on both sides (there is no s^2 term on the left) gives A_1 + 2A_2 = 0, i.e. A_2 = -1/5.

Thus,
\dfrac{3s+1}{\left(2s-1\right)\left(s+2\right)^2} = \dfrac{2/5}{2s-1} - \dfrac{1/5}{s+2} + \dfrac{1}{(s+2)^2}
Exercise: Again ... verify that the equality above holds!
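Verification by hand is the point of the exercise, but a quick numerical spot-check at a few values of s (chosen away from the poles) can catch algebra slips. A minimal sketch:

```python
def lhs(s):
    return (3 * s + 1) / ((2 * s - 1) * (s + 2) ** 2)

def rhs(s):
    return (2 / 5) / (2 * s - 1) - (1 / 5) / (s + 2) + 1 / (s + 2) ** 2

# sample points chosen away from the poles at s = 1/2 and s = -2
for s in (0.0, 1.0, -1.0, 3.7):
    assert abs(lhs(s) - rhs(s)) < 1e-12
```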

Now it may not always be the case that the method of partial fractions will be solvable by trying different values of ss as in (2). Sometimes we may need to set up a system of linear equations to solve for the coefficients or resort to using software. Below follows an example where this happens:

Example redux

Find the partial fraction expansion of:

\dfrac{s^2+15}{(s+3)^2(s^2-3)}

Solution: Again the denominator is already factored for us and consists of a twice-repeated linear term and a single quadratic term (why?). Therefore, as per the table above, we get:

\dfrac{s^2+15}{(s+3)^2(s^2-3)} = \dfrac{A_1}{s+3} + \dfrac{A_2}{(s+3)^2} + \dfrac{B_1s+C_1}{s^2-3}
Now multiply both sides by (s+3)^2(s^2-3) to get rid of the denominator:

s^2+15 = (s+3)(s^2-3)A_1 + (s^2-3)A_2 + (s+3)^2(B_1s+C_1)
Now our task is to solve for the uppercase coefficients using different values of s. For example, using s=-3 we get:

6A_2 = 24 \qquad \implies \qquad A_2 = 4

But that is pretty much the only headway we can make toward the uppercase coefficients directly. What we need to do next is substitute this value of A_2 into the above equation and collect like powers of s to compare coefficients. This gives us a linear system of three equations in three unknowns. One can verify that these are:

\begin{aligned} -9A_1 + 9C_1 &= 27 \\ 3A_1 + 6B_1 + C_1 &= -3 \\ -4A_1 + 8B_1 + 6C_1 &= 0 \end{aligned}
Solution: One gets 2A_1 = C_1 = 6 and B_1 = -A_1. We leave it as an exercise to write down and verify the partial fraction expansion in this case.
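Solving the 3×3 system by hand is straightforward but fiddly. As a check, here is the same system solved exactly with the `fractions` module and a small Gaussian elimination (a throwaway sketch, not a general-purpose solver):

```python
from fractions import Fraction as F

# augmented matrix for the system in A1, B1, C1 above
A = [[F(-9), F(0), F(9), F(27)],
     [F(3),  F(6), F(1), F(-3)],
     [F(-4), F(8), F(6), F(0)]]

n = 3
# forward elimination
for i in range(n):
    p = next(r for r in range(i, n) if A[r][i] != 0)  # pivot row
    A[i], A[p] = A[p], A[i]
    for r in range(i + 1, n):
        m = A[r][i] / A[i][i]
        A[r] = [a - m * b for a, b in zip(A[r], A[i])]
# back substitution
sol = [F(0)] * n
for i in reversed(range(n)):
    sol[i] = (A[i][n] - sum(A[i][j] * sol[j]
                            for j in range(i + 1, n))) / A[i][i]

A1, B1, C1 = sol
assert (A1, B1, C1) == (F(3), F(-3), F(6))  # i.e. 2*A1 = C1 = 6, B1 = -A1
```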

Partial fractions in software

Fortunately, we can use MATLAB (or some other software package) to solve partial fraction problems[1]. In this section we will show examples of using software to solve the above problems. As usual with software toolkits, using the right tool for the job makes life easier.

The most straightforward method is to use a CAS; for example, Mathematica implements this as Apart. But our go-to software in this course is either MATLAB or Python, so we look at MATLAB next. In MATLAB the command we will use is called residue, and the code snippet below shows how we solve the first problem above.

clear 
numerator = [3, 1];
denominator1 = fold(@conv, {[2,-1],[1,2],[1, 2]}); 
[num, roots, const] = residue(numerator, denominator1)

% If the fold function does not work then that single line above 
% is equivalent to the following:

den_factors = {[2,-1],[1,2],[1, 2]};
denominator2 = [1];
for i=1:length(den_factors)
    denominator2 = conv(denominator2, den_factors{i});
end

which gives

num =

   -0.2000
    1.0000
    0.2000

roots =

   -2.0000
   -2.0000
    0.5000

const =

     []

The output suggests that the partial fraction expansion is:

\dfrac{3s+1}{\left(2s-1\right)\left(s+2\right)^2} = \dfrac{-0.2}{s+2} + \dfrac{1}{(s+2)^2} + \dfrac{0.2}{s-1/2}
Exercise: Verify the partial fractions in (3) and (5) are the same.

A similar exercise in coding shows that the second example from above can be formulated as follows:

clear 
numerator = [1, 0, 15];
denominator = fold(@conv, {[1,0,-3],[1,3],[1, 3]}); 
[num, roots, const] = residue(numerator, denominator)

with output:

num =

    3.0000
    4.0000
    0.2321
   -3.2321

roots =

   -3.0000
   -3.0000
    1.7321
   -1.7321

const =

     []

But here we run into trouble because MATLAB factored the s^2-3 term further as (s-\sqrt{3})(s+\sqrt{3}) (see footnote 1).

Answer: Left as an exercise (and please inform me if you do fix it).

One (particularly unsatisfactory) way around this limitation is to use the partfrac command from MATLAB's Symbolic Toolbox (which might involve paying for yet another add-on to MATLAB):

syms s 
partfrac((s^2+15)/((s+3)^2*(s^2-3)))

The above gives:

ans = 3/(s+3) - (3*s-6)/(s^2-3) + 4/(s+3)^2

Things are a bit simpler in Python (for those of you interested), but this requires the SymPy package:

from sympy import symbols, apart

s = symbols('s')
numerator = s**2+15
denominator = ((s+3)**2)*(s**2-3)
frac = numerator / denominator
print(apart(frac))

which results in:

-3*(s - 2)/(s**2 - 3) + 3/(s + 3) + 4/(s + 3)**2

Solving ODE's in the presence of nonzero initial conditions

As remarked previously, the one-sided Laplace transform (when properly taken) ends up having an x(0^-) term (see Eq. (1) for example). In the following we will make use of this to see how we can solve initial value problems. Consider a mass-spring-damper system from our in-class lectures:

We have derived that the dynamical equations for such a system are:

m \ddot{x} + b\dot{x} + kx = f(t)

where x is the displacement of the mass from its equilibrium position, k is the spring constant, b is the damper coefficient and m is the mass in kilograms of the attachment. Here f(t) is the forcing function or input to the system. Then if we take the Laplace transform of (6) above we get:

\left[ms^2 + bs + k \right] X(s) - \left[ms+b\right] x(0^-) - mx'(0^-) = F(s)

In (7) above we have used the following fact (stated without proof but alluded to above):

Theorem: (Higher derivatives in Laplace domain)
Suppose that f, f', f'', \dots, f^{(n-1)}, f^{(n)} are all continuous functions whose Laplace transforms exist.

Then:

\mathcal{L}\left(f^{(n)}\right) = s^nF(s) - s^{n-1}f(0) - s^{n-2}f'(0) - \dots - sf^{(n-2)}(0) - f^{(n-1)}(0)
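The n = 2 case of this theorem (the one used for the mass-spring-damper equation above) can be checked numerically just like the first-derivative rule (a sketch; f(t) = e^{-3t} is an arbitrary smooth test function chosen here):

```python
import math

def laplace_num(x, s, T=40.0, n=200_000):
    # midpoint-rule approximation of int_0^T x(t) exp(-s*t) dt
    h = T / n
    return h * sum(x((k + 0.5) * h) * math.exp(-s * (k + 0.5) * h)
                   for k in range(n))

f   = lambda t: math.exp(-3 * t)       # f(0) = 1
df  = lambda t: -3 * math.exp(-3 * t)  # f'(0) = -3
d2f = lambda t:  9 * math.exp(-3 * t)

s = 2.0
lhs = laplace_num(d2f, s)                          # L{f''}
rhs = s**2 * laplace_num(f, s) - s * f(0) - df(0)  # s^2 F(s) - s f(0) - f'(0)
assert abs(lhs - rhs) < 1e-4                       # both equal 9/5 here
```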

Now consider the two cases:

Zero initial conditions

If the mass was at rest in its equilibrium position then we have x(0^-) = 0 = x'(0^-). Suppose for simplicity m=1 kg, b=8 Ns/m and k=25 N/m. Thus (7) above reduces to:

\dfrac{X(s)}{F(s)} = \dfrac{1}{s^2+8s+25}
which is simply a second order transfer function.

Solution: The steps are left as an exercise. The final answer is
x(t) = e^{-4t} \sin(3t)

which looks like:

Does the plot make sense?

A 1 kg mass was attached to a spring that requires 25 N to compress it 1 meter and a damper that produces 8 N of force if you try to move it at 1 m/s. Then this system was hit with a force equal to three times the standard impulse. While it moved, the relatively strong damping and spring meant it didn't move much and returned quickly to equilibrium.
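One more sanity check: for t > 0 the impulse response must satisfy the homogeneous equation \ddot{x} + 8\dot{x} + 25x = 0, since the impulse only sets up the motion at t=0. A finite-difference check of the stated answer (the step size h is an arbitrary choice made here):

```python
import math

x = lambda t: math.exp(-4 * t) * math.sin(3 * t)  # claimed solution

h = 1e-5
for t in (0.1, 0.5, 1.0):
    xp  = (x(t + h) - x(t - h)) / (2 * h)            # x'(t), central difference
    xpp = (x(t + h) - 2 * x(t) + x(t - h)) / h ** 2  # x''(t)
    assert abs(xpp + 8 * xp + 25 * x(t)) < 1e-3      # ODE residual ~ 0

assert x(0) == 0.0  # starts at the equilibrium position
```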

Non-zero initial conditions

Suppose instead the mass was displaced a units from its equilibrium position and released at time t=0 with velocity v m/s. This means x(0^-) = a and x'(0^-) = v. Then (7) above becomes:

X(s) = \dfrac{F(s) + a(ms + b) + mv}{ms^2+bs+k}

which, while not in the form of a transfer function, we can still take back into the time domain using the inverse Laplace transform to get x(t), which is the solution of a second order ODE with initial conditions!

Solution: Left as an exercise. The rational fraction whose Inverse Laplace Transform you must find is:
X(s) = \dfrac{4(s+2)}{(2s+1)^2}
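Once you have worked the exercise, you can validate your closed-form answer numerically by transforming it forward again. Below, `x_candidate` is our worked answer (an assumption to check by hand: rewrite X(s) over the repeated factor as 1/(s+1/2) + (3/2)/(s+1/2)^2 and invert term by term), compared against X(s) at a few sample points:

```python
import math

def laplace_num(x, s, T=60.0, n=300_000):
    # midpoint-rule approximation of int_0^T x(t) exp(-s*t) dt
    h = T / n
    return h * sum(x((k + 0.5) * h) * math.exp(-s * (k + 0.5) * h)
                   for k in range(n))

# candidate inverse transform (verify the partial fractions yourself!)
x_candidate = lambda t: math.exp(-0.5 * t) * (1 + 1.5 * t)
X = lambda s: 4 * (s + 2) / (2 * s + 1) ** 2

for s in (1.0, 2.0, 4.0):
    assert abs(laplace_num(x_candidate, s) - X(s)) < 1e-3
```

Note that x_candidate(0) = 1, matching the initial displacement read off the plot below.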

If you plot the answer it should look like this:

Does the plot make sense?

The plot starts at x=1 so the initial condition was respected. The mass had an initial displacement of +1 units and was also released with an initial velocity of +1 m/s (hence in the same direction as the displacement). So ... it actually moves away from its equilibrium position (of x=0) initially, until the spring starts pulling it back. The relatively heavy damping then smoothly (but also slowly) brings it to rest at x=0 in about 20 seconds.

Clearly from the plots above, the value of the damping coefficient in second order systems has a huge effect on the solution profiles. We will investigate this in depth in the next lecture.


[1] ... with a few caveats. More specifically, the answer MATLAB gives you may not always be the answer you want.
CC BY-SA 4.0 Ivan Abraham. Last modified: May 05, 2023. Website built with Franklin.jl and the Julia programming language.