10.09.2016

Dear Student,

According to our class schedule, you should complete Lectures 18, 19, and Part 1 of Lecture 20 and the associated homework by Sunday, October 16. These lectures comprise Unit 6 on numerical methods for matrices. These methods contrast with the matrix methods treated earlier in this course, in which row reduction to echelon form is the basic tool. For very large matrices, echelon-form reduction is numerically "expensive" from the standpoint of the time required for computer implementation. The matrix methods discussed in Unit 6 are especially useful for finding approximate solutions of systems Ax = b, or approximate eigenvalues and eigenvectors of A, in cases in which A is a very large square matrix with most of its nonzero entries on or very near the main diagonal. Such matrices arise in the stress-strain analysis of structures as well as in the numerical solution of ordinary and partial differential equations. The content summary below should help you to understand this material and to see why numerical methods such as these are needed.

Week 8 -- Content Summary for Lectures 18, 19, and 20 (Part 1)

Lecture 18 begins the unit on numerical methods for matrices and deals with the Jacobi Method and the Gauss-Seidel Method, two special iterative methods for solving systems Ax = b of n linear equations in n unknowns. These two methods are especially useful for square systems of large order n in which most of the large entries lie near the diagonal of A. As we pointed out above, such systems occur in many applied contexts, including the numerical solution of ordinary and partial differential equations, the analysis of structures such as trusses, elasticity problems, and so on. Although the Gaussian Elimination Procedure that we studied early in this course applies to such systems, the Jacobi and Gauss-Seidel Methods produce reasonably accurate solutions much more rapidly. Most of this lesson is devoted to introducing these two methods and their matrix versions (a short illustrative sketch of the Jacobi iteration appears below). Convergence of these methods is also discussed in some detail. I think that you will find this lecture relatively easy to follow because there are many good illustrative examples and Just Do It's in the lecture and in the homework problems to guide you.

For any iterative method for solving a system Ax = b, it is important to estimate how close the k-th iterate x(k) is to the "true" solution x* of the system. Closeness is measured by vector and matrix norms, denoted by ||x(k) - x*|| and ||A||. There are many different vector and matrix norms, and some are easier to compute than others. Generally, you prove convergence by finding an inequality of the form ||x(k) - x*|| < C(k) ||A|| ||x(k)||, where C(k) approaches 0 as k increases without bound. In Lecture 19, you will learn to compute several specific vector and matrix norms.

Lecture 19 also introduces some iterative methods for finding eigenvalues and eigenvectors of very large square matrices. The methods that we learned earlier in the course for finding eigenvalues and eigenvectors are not efficient for square matrices of large order. The iterative methods discussed in Lecture 19 and the first part of Lecture 20 allow us to locate the eigenvalues and eigenvectors with a high degree of accuracy using a small number of iterations. For example, the Gershgorin Circle Theorem and Collatz's Theorem can be used to estimate the approximate location of the eigenvalues.
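To make the Jacobi iteration and its norm-based stopping test concrete, here is a minimal sketch in Python (we use Mathematica in the course, but the idea is the same in any language). The function name, the tolerance, and the small test system are my own illustrative choices; the test matrix is strictly diagonally dominant, which is one standard condition guaranteeing that the iteration converges.

import numpy as np

def jacobi(A, b, x0=None, tol=1e-8, max_iter=500):
    # Jacobi iteration for A x = b. Assumes A is square with nonzero
    # diagonal entries; strict diagonal dominance guarantees convergence.
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.zeros(len(b)) if x0 is None else np.asarray(x0, dtype=float)
    d = np.diag(A)              # diagonal entries of A
    R = A - np.diagflat(d)      # off-diagonal part of A
    for k in range(max_iter):
        x_new = (b - R @ x) / d          # one Jacobi sweep
        # stop when successive iterates agree in the infinity norm
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

# A small strictly diagonally dominant test system
A = [[10.0, -1.0,  2.0],
     [-1.0, 11.0, -1.0],
     [ 2.0, -1.0, 10.0]]
b = [6.0, 25.0, -11.0]
x, iterations = jacobi(A, b)
print(x, "after", iterations, "iterations")

The Gauss-Seidel Method differs only in that each sweep uses the updated components of x as soon as they are computed, which usually speeds up convergence.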
It is important to have a good estimate of the location of the eigenvalues to speed up the convergence of the two main iterative methods, the Power Method and the QR Factorization Method, and their variations. (A short sketch of the Power Method appears in the postscript below.)

The Singular Value Decomposition (SVD) of a matrix is very important in mathematics and its applications. It is discussed in the second part of Lecture 20 and in Lecture 21, but this material is not currently included in the course syllabus because it really requires active use of Mathematica or a similar symbolic algebra system, even for problems of small order. Although we use output from Mathematica code throughout the course, we do not require students in this course to create and modify the code needed for the SVD material or other advanced Mathematica programming problems. However, if you use Mathematica programming in your work, you might find it interesting to read these two lectures on your own.

Be sure to contact me if you have any questions about this week's work!

Have a good week,

Tony Peressini
Cell PH: 217 840 2871
anthonyperessini@gmail.com
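P.S. If you would like to experiment on your own, here is a minimal sketch of the Power Method in Python. The function name, the tolerance, and the small symmetric test matrix are my own illustrative choices, and the sketch assumes A has a single eigenvalue of largest magnitude (otherwise the iteration need not converge).

import numpy as np

def power_method(A, tol=1e-10, max_iter=1000):
    # Power Method sketch: repeatedly multiply by A and rescale; the
    # iterate lines up with the eigenvector of the dominant eigenvalue.
    A = np.asarray(A, dtype=float)
    x = np.ones(A.shape[0])
    x /= np.linalg.norm(x)
    lam_old = 0.0
    for _ in range(max_iter):
        y = A @ x
        x = y / np.linalg.norm(y)   # rescale so the iterate stays bounded
        lam = x @ (A @ x)           # Rayleigh quotient estimate of the
                                    # dominant eigenvalue (A is symmetric here)
        if abs(lam - lam_old) < tol:
            break
        lam_old = lam
    return lam, x

A = [[2.0, 1.0],
     [1.0, 3.0]]
lam, v = power_method(A)
print("dominant eigenvalue is approximately", lam)  # exact value: (5 + sqrt(5))/2

This is where a good estimate of an eigenvalue's location helps: variants such as the shifted or inverse Power Method use such an estimate to speed up convergence considerably.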