08.28.2016

Dear Student,

Week 2 -- Content Summary for Lectures 4, 5, and 6

Lecture 4 begins with a quick summary of linear operators from n-dimensional space to m-dimensional space, as represented by m by n matrices, and the associated subspaces such as the range space and null space of the matrix. The second part of Lecture 4 and all of Lecture 5 begin our study of eigenvalues of linear operators with the case of matrix operators. Later in the course, we will study eigenvalues in the contexts of ordinary and partial differential equations.

You will get plenty of practice with the methods developed in Lectures 4 and 5 by doing the Just Do It! problems as you study the lectures and by solving as many of the homework problems as you can on your own after studying each lecture. In these lectures, you should concentrate on the geometric meaning and calculation of eigenvalues and eigenvectors for matrix operators and on the diagonalization of matrices through a change of variables. Lecture 5 includes an application of matrix diagonalization to the solution of a system of differential equations, and we will do much more with matrix diagonalization in upcoming lectures.

Lecture 6 reviews the basic facts about first order and second order linear differential equations that you studied in your first undergraduate course in differential equations. This review should bring you up to speed if you are a little rusty right now.

A note about an important change in mathematical approach: In Lectures 1, 2, and 3 and the first part of Lecture 4, the basic mathematical tool was the reduction of a given matrix to echelon form. We applied it first to the solution of a system Ax = b of m linear equations in n unknowns (by reducing the augmented matrix [A:b] to echelon form), and we also used it to decide whether a set of vectors is linearly independent and to find spanning sets and bases for various subspaces of n-dimensional space. It turns out, however, that reduction to echelon form is of little value for finding eigenvalues and eigenvectors, because a given matrix A and its echelon form typically have different eigenvalues. For diagonal matrices, the eigenvalues are simply the diagonal entries. For a non-diagonal square matrix A, we therefore try to find an invertible matrix P such that the change of variables x = Py reduces A to a diagonal matrix D, whose eigenvalues are obvious. This procedure is explained in detail in the examples in the lectures, and a small worked example appears in the postscript below.

I hope that these comments are helpful to you as you work through Lectures 4, 5, and 6. Please contact me by e-mail or phone if you have any questions.

Tony Peressini
Cell Ph. 217-840-2871
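
P.S. If you like to experiment on a computer, here is a minimal sketch of the diagonalization procedure in Python with NumPy. Python is only my choice for illustration here, not a course requirement, and the matrix A below is just a sample chosen so you can check every step by hand.

    import numpy as np

    # A sample 2 x 2 matrix whose eigenvalues are easy to find by hand:
    # det(A - t*I) = (2 - t)^2 - 1 = (t - 1)(t - 3),
    # so the eigenvalues are 1 and 3.
    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    # np.linalg.eig returns the eigenvalues of A together with a matrix P
    # whose columns are the corresponding eigenvectors.
    eigenvalues, P = np.linalg.eig(A)
    print(eigenvalues)        # 1 and 3, in some order

    # The change of variables x = Py reduces A to the diagonal matrix
    # D = P^(-1) A P, whose diagonal entries are the eigenvalues of A.
    D = np.linalg.inv(P) @ A @ P
    print(np.round(D, 10))    # diagonal, with the eigenvalues on the diagonal

    # By contrast, the reduced echelon form of A is the 2 x 2 identity
    # matrix, whose eigenvalues are 1 and 1 -- not the eigenvalues of A.
    # This is why echelon form reduction does not help with eigenvalues.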