MATH 4093.001/5093.001 - Applied Numerical Methods - Spring 2011
MWF 12:30-1:20 p.m., 416 PHSC
Instructor:
Nikola Petrov, 802 PHSC, (405)325-4316, npetrov AT math.ou.edu.
Office Hours:
Monday 2:30-3:30 p.m., Wednesday 3:30-4:30 p.m., or by appointment.
Prerequisites:
Multivariable Calculus at the level of MATH 2443, Ordinary
Differential Equations at the level of MATH 3113 or MATH 3413,
Linear Algebra at the level of MATH 3333 or 4373,
or permission of the instructor.
Previous knowledge of Numerical Analysis is not assumed.
Course catalog description:
Numerical treatment of ordinary differential equations,
numerical linear algebra and applications,
basic numerical methods for partial differential equations.
No student can earn credit for both 4093 and 5093. (Alt. Sp.)
Check out the OU Math Blog!
It is REALLY interesting!
Check out the Problem of the Month!
Course content
(here is a link to a more detailed description):
- Numerical linear algebra.
- Numerical solutions of initial-value problems for ordinary differential equations.
- Numerical approximation of functions.
- Numerical solutions of nonlinear systems of equations.
- Numerical solutions of boundary-value problems for ordinary differential equations.
- Numerical solutions of partial differential equations.
Text:
Brian Bradie,
A Friendly Introduction to Numerical Analysis.
Pearson/Prentice Hall, 2006, ISBN 0-13-012054-0.
Homework:
- Homework 1, due Friday, January 28.
- Homework 2, new due date: Wednesday, February 9.
  Codes needed for Homework 2:
- Homework 3, due Friday, February 18.
  Codes needed for Homework 3:
- Homework 4, due Friday, February 25.
- Homework 5, due Friday, March 4.
  Codes needed for Homework 5:
- Homework 6, due Friday, April 1.
  Codes needed for Homework 6:
- Homework 7, due Friday, April 8.
  Codes needed for Homework 7:
- Homework 8, due Friday, April 15.
  Codes needed for Homework 8:
- Project for 5093.
  Codes needed for the project:
- Homework 9, due Friday, May 6.
  Codes needed for Homework 9:
Content of the lectures:
- Lecture 1 (Wed, Jan 19):
Introduction.
- Lecture 2 (Fri, Jan 21):
Algorithms:
algorithms, examples (pages 10-12 of Sec. 1.1).
Convergence:
rate of convergence of a convergent sequence
{x_n},
rate of convergence of a function,
order of convergence α and asymptotic error
constant λ of a convergent sequence
(pages 20-23 of Sec. 1.2).
- Lecture 3 (Mon, Jan 24):
Convergence (cont.):
proof that the sequence
x_{n+1} = (1/2)[x_n + (a/x_n)]
is quadratically convergent,
determining the order of convergence empirically;
Taylor's Theorem, remainder term
(pages 24-26 of Sec. 1.2).
Floating point number systems:
base-β number systems -
binary (base-2), octal (base-8), decimal (base-10),
hexadecimal (base-16, with digits 0,1,2,...,9,A,B,C,D,E,F)
number systems;
1985 IEEE standard for storing a floating-point number
in the computer - sign, characteristic, mantissa,
smallest and largest numbers representable in this way;
underflow and overflow; chopping and rounding arithmetic,
loss of accuracy due to adding numbers that differ by orders
of magnitude, loss of accuracy due to subtracting
very close numbers
(pages 32-33, 37 of Sec. 1.3).
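The empirical determination of the order of convergence from Lecture 3 can be sketched in a few lines. The course codes are in MATLAB; the sketch below is an equivalent Python illustration (the helper name heron is mine), computing α ≈ log(e_{n+1}/e_n)/log(e_n/e_{n-1}), which should approach 2 for a quadratically convergent sequence.

```python
# Empirical order of convergence for x_{n+1} = (1/2)[x_n + a/x_n],
# which converges quadratically to sqrt(a).
import math

def heron(a, x0, n):
    """Return the first n+1 iterates of the sequence starting from x0."""
    xs = [x0]
    for _ in range(n):
        xs.append(0.5 * (xs[-1] + a / xs[-1]))
    return xs

a = 2.0
xs = heron(a, 1.0, 6)
p = math.sqrt(a)
errors = [abs(x - p) for x in xs]
# alpha ~ log(e_{n+1}/e_n) / log(e_n/e_{n-1}) should approach 2
for n in range(1, 4):
    alpha = math.log(errors[n + 1] / errors[n]) / math.log(errors[n] / errors[n - 1])
    print(f"n = {n}: alpha ~ {alpha:.3f}")
```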
- Lecture 4 (Wed, Jan 26):
Floating point number systems (cont.):
overflow and underflow, loop control in MATLAB (for, while),
conditional execution (if).
Gaussian elimination:
linear systems of equations,
coefficient matrix and augmented matrix of a system,
Gaussian elimination,
elementary row operations,
an example of Gaussian elimination
and back substitution
(pages 149-152 of Sec. 3.1).
- Lecture 5 (Fri, Jan 28):
Gaussian elimination (cont.):
counting the number of operations in the Gaussian
elimination and in the back substitution,
comparison with the number of operations
in the Gauss-Jordan elimination
(pages 155-157 of Sec. 3.1).
Pivoting strategies:
a motivating example, partial pivoting - algorithm
(pages 160-162 of Sec. 3.2).
- Lecture 6 (Mon, Jan 31):
Pivoting strategies:
partial pivoting - an example,
using a row vector r to keep track
of the interchanges of the rows of the augmented matrix
(A|b) (i.e., of interchanging the order
of the equations in the linear system);
scaled partial pivoting - a motivating example
(pages 162-165 of Sec. 3.2).
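Gaussian elimination with partial pivoting and back substitution (Lectures 4-6) can be sketched compactly. The course implementations are in MATLAB; this is a minimal Python illustration (the function name solve is mine), which for simplicity physically swaps rows rather than using the index vector r.

```python
# Gaussian elimination with partial pivoting, then back substitution.
def solve(A, b):
    """Solve Ax = b; A is a list of row-lists, b a list. Inputs are copied."""
    n = len(b)
    # form the augmented matrix (A|b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n - 1):
        # partial pivoting: bring the largest |entry| in column k to row k
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]          # elimination multiplier
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    # back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

print(solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))  # ~ [0.8, 1.4]
```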
- Lecture 7 (Wed, Feb 2):
Cancelled due to weather.
- Lecture 8 (Fri, Feb 4):
Cancelled due to weather.
- Lecture 9 (Mon, Feb 7):
Pivoting strategies (cont.):
scaled partial pivoting (pages 165-167 of Sec. 3.2).
Vector and matrix norms:
definition of a vector space (linear space),
proof of the uniqueness of the zero element.
- Lecture 10 (Wed, Feb 9):
Cancelled due to weather.
- Lecture 11 (Fri, Feb 11):
Vector and matrix norms (cont.):
linear space, a (vector) norm on R^n,
l^p-norms, Cauchy-Schwarz inequality,
proof that the l^2-norm is indeed a norm,
convergence of sequences of vectors in R^n
equipped with a particular norm, equivalent norms,
proof that
the l^1- and l^∞-norms
are equivalent,
matrix norms, operator norm, the operator norm is indeed a norm
(only the statement of the theorem)
(pages 171-175 of Sec. 3.3).
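The l^1-, l^2-, and l^∞-norms, and the equivalence bound ||x||_∞ ≤ ||x||_1 ≤ n·||x||_∞ from the lecture, are easy to check numerically. A small Python sketch (the course uses MATLAB; the function names are mine):

```python
# l^p vector norms on R^n and the l^1 / l^inf equivalence constants.
def norm_1(x):
    return sum(abs(t) for t in x)

def norm_2(x):
    return sum(t * t for t in x) ** 0.5

def norm_inf(x):
    return max(abs(t) for t in x)

x = [3.0, -4.0, 1.0]
print(norm_1(x), norm_2(x), norm_inf(x))
# equivalence of the l^1- and l^inf-norms: ||x||_inf <= ||x||_1 <= n ||x||_inf
assert norm_inf(x) <= norm_1(x) <= len(x) * norm_inf(x)
```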
- Lecture 12 (Mon, Feb 14):
Vector and matrix norms (cont.):
inner product, inner product spaces,
examples of inner products on R^n
(coming from a symmetric positive-definite matrix),
an inner product defines a norm,
function spaces (i.e., spaces of functions)
as linear spaces, norms on function spaces,
spaces of polynomials;
operator norms associated with different vector norms,
an expression for the "row norm" ||A||_∞
(pages 176-177 of Sec. 3.3).
- Lecture 13 (Wed, Feb 16):
Vector and matrix norms (cont.):
eigenvectors, eigenvalues, spectrum, spectral radius ρ(A),
characteristic polynomial of a matrix A,
an expression for ||A||_2,
the spectral radius ρ(A) is the greatest lower bound
on all natural norms of a matrix
(pages 177-179 of Sec. 3.3).
Error estimates and condition number:
error e and residual vector r of an approximate
solution of a linear system Ax=b
(pages 181-182 of Sec. 3.4).
- Lecture 14 (Fri, Feb 18):
Error estimates and condition number:
error e and residual vector r of an approximate
solution of a linear system Ax=b,
an example of a small residual but large error;
a theorem giving lower and upper bounds
on the error ||e|| and the relative error
||e||/||x|| in terms of the residual
||r|| and the relative residual
||r||/||b||;
condition number κ(A) of a square matrix,
an example of computing κ_∞(A)
(pages 182-184 of Sec. 3.4).
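The computation of κ_∞(A) = ||A||_∞ ||A^{-1}||_∞ and the "small residual, large error" phenomenon can be illustrated on a 2x2 example. A Python sketch (the course uses MATLAB; the matrix and helper names are mine):

```python
# kappa_inf(A) = ||A||_inf * ||A^{-1}||_inf for a nearly singular 2x2 matrix,
# plus the classic small-residual / large-error effect.
def row_norm(A):
    """The 'row norm' ||A||_inf = max_i sum_j |a_ij|."""
    return max(sum(abs(a) for a in row) for row in A)

def inv2(A):
    """Inverse of a 2x2 matrix via the explicit cofactor formula."""
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1.0, 1.0], [1.0, 1.0001]]            # nearly singular, ill-conditioned
kappa = row_norm(A) * row_norm(inv2(A))
print(f"kappa_inf(A) = {kappa:.1f}")       # about 4e4

# The exact solution of Ax = b with b = [2, 2.0001] is x = [1, 1];
# the far-away vector xa = [2, 0] nevertheless has a tiny residual r = b - A*xa.
b = [2.0, 2.0001]
xa = [2.0, 0.0]
r = [b[i] - sum(A[i][j] * xa[j] for j in range(2)) for i in range(2)]
print("residual:", r)
```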
- Lecture 15 (Mon, Feb 21):
Error estimates and condition number (cont.):
perturbations to A and b,
an upper bound on the relative error
||δx||/||x||
in terms of the condition number κ(A)
and the relative perturbations
||δA||/||A|| and
||δb||/||b||;
briefly on rounding errors introduced by Gaussian elimination;
an example of ill-conditioned matrices - Hilbert matrices
(pages 184-186 of Sec. 3.4).
LU decomposition:
definition of upper and lower triangular matrices,
the product of two upper (lower) triangular matrices
is an upper (lower) triangular matrix;
goal: solving m systems Ax=b
of n equations with the same coefficient matrix
A and different right-hand sides b
without performing O(mn^3) operations:
first find an LU decomposition A=LU,
and then for each of the m systems perform
the following two steps: first find z such that
Lz=b and then find x such that
Ux=z;
every square matrix A can be written as A=LU
(perhaps after interchanging the rows of A)
(pages 191-192, 197-199 of Sec. 3.5).
- Lecture 16 (Wed, Feb 23):
LU decomposition (cont.):
proof that the LU decomposition is
"unique up to scaling by a diagonal matrix",
obtaining an LU decomposition "theoretically",
practical issues in obtaining an LU decomposition
by Gaussian elimination, permutation matrices
(pages 192-197 of Sec. 3.5).
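Obtaining an LU decomposition from Gaussian elimination (Doolittle style, unit lower-triangular L) can be sketched directly; the multipliers become the entries of L. A Python illustration assuming no row interchanges are needed (the course codes are in MATLAB; the function name lu is mine):

```python
# Doolittle-style LU decomposition: L has unit diagonal, U is the result of
# Gaussian elimination. Assumes no pivoting is required.
def lu(A):
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]        # elimination multiplier
            for j in range(k, n):
                U[i][j] -= L[i][k] * U[k][j]
    return L, U

A = [[4.0, 3.0], [6.0, 3.0]]
L, U = lu(A)
print("L =", L)
print("U =", U)
```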
- Lecture 17 (Fri, Feb 25):
LU decomposition (cont.):
operation count (read page 199 of Sec. 3.5).
Direct factorization:
idea of direct factorization, Crout and Doolittle decompositions,
derivation of the Crout decomposition
(pages 205-209 of Sec. 3.6).
Special matrices:
symmetric positive definite matrices:
definition, theoretical results about symmetric matrices,
theorem about practical issues concerning
symmetric positive definite matrices,
Cholesky decomposition;
tridiagonal matrices:
definition, Crout factorization for tridiagonal matrices,
total cost for solving a linear system with a tridiagonal
coefficient matrix is 8n-7
(pages 212-219 of Sec. 3.7).
Iterative techniques for linear systems:
basic concepts and methods:
reformulating an algebraic equation f(x)=0
as a fixed-point problem g(p)=p,
solving the fixed-point problem by iterating:
choose an initial approximation p_0
to the exact solution p*
and construct a sequence {p_n}
by
p_n = g(p_{n-1});
then the limit of p_n is p*
(if the limit exists and g is continuous at
p*).
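Fixed-point iteration in one dimension can be sketched on the classic example g(p) = cos(p), whose unique fixed point p* ≈ 0.739085 satisfies cos(p*) = p*. A Python illustration (the course uses MATLAB; the function name fixed_point and the tolerances are mine):

```python
# Fixed-point iteration p_n = g(p_{n-1}), stopped when successive
# iterates agree to within tol.
import math

def fixed_point(g, p0, tol=1e-10, maxit=200):
    p = p0
    for _ in range(maxit):
        p_new = g(p)
        if abs(p_new - p) < tol:
            return p_new
        p = p_new
    raise RuntimeError("no convergence")

p_star = fixed_point(math.cos, 1.0)
print(p_star)
```

Convergence here is linear, since g′(p*) = -sin(p*) ≠ 0, with asymptotic error constant λ = |sin(p*)| ≈ 0.67.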
- Lecture 18 (Mon, Feb 28):
Iterative techniques for linear systems:
basic concepts and methods (cont.):
solving a one-dimensional fixed-point problem iteratively:
p_n = g(p_{n-1}),
existence and uniqueness of a fixed point
(intuitive geometric proof),
order of convergence α and asymptotic error constant λ
of an iterative procedure: if
g′(p*) ≠ 0, then
α = 1 and λ = |g′(p*)|,
and if
g′(p*) = 0, then α > 1;
graphical representation of iteration in the 1-dimensional case;
reformulating the linear system Ax=b
as a fixed point problem,
x = Tx + c,
which is then solved iteratively,
x^(k+1) = T x^(k) + c,
theorem about convergence of fixed-point iteration
(pages 223-224 of Sec. 3.8,
and a selection of results from Sec. 2.3).
- Lecture 19 (Wed, Mar 2):
Iterative techniques for linear systems:
basic concepts and methods (cont.):
error e^(k)
of an iterative procedure, bounds on the error
||e^(k+1)|| ≤ ||T|| ||e^(k)||,
linear (α=1) convergence with
asymptotic error constant λ = ρ(T),
splitting methods, Jacobi splitting
(pages 224-227 of Sec. 3.8).
- Lecture 20 (Fri, Mar 4):
Iterative techniques for linear systems:
basic concepts and methods (cont.):
Jacobi splitting (simultaneous relaxation):
idea, implementation (vectorizable, requires memory
space for the new data at every iteration step);
Gauss-Seidel splitting (successive relaxation):
idea, implementation (non-vectorizable,
the new data at every iteration step can overwrite the old data);
successive overrelaxation (SOR): idea, remarks about the choice
of the relaxation parameter ω
(pages 226-234 of Sec. 3.8).
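The Jacobi splitting from this lecture can be sketched in a few lines: each sweep builds the whole new iterate from the old one, which is what makes it vectorizable (unlike Gauss-Seidel, which overwrites entries as it goes). A Python sketch for a strictly diagonally dominant matrix, for which the iteration is guaranteed to converge (the course codes are in MATLAB; the function name jacobi and the test matrix are mine):

```python
# Jacobi iteration for Ax = b:
#   x_i^(k+1) = (b_i - sum_{j != i} a_ij x_j^(k)) / a_ii
def jacobi(A, b, x0, sweeps):
    n = len(b)
    x = x0[:]
    for _ in range(sweeps):
        # the whole new iterate is built from the old one (vectorizable)
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]      # strictly diagonally dominant
b = [6.0, 7.0]                    # exact solution is x = [1, 2]
x = jacobi(A, b, [0.0, 0.0], 50)
print(x)
```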
- Lecture 21 (Mon, Mar 7).
Rootfinding:
statement of the problem in d=1 and d≥2 dimensions,
Intermediate Value Theorem guaranteeing the existence of a root,
multiplicity of a root;
one-dimensional rootfinding methods:
bisection (always converges, linearly convergent, root bracketed,
works only in one dimension),
Newton's method (does not always converge, quadratically convergent,
root not bracketed, generalizes to many dimensions),
quasi-Newton methods, secant method (idea)
(pages 56-57 of Sec. 2.0, 58-61 of Sec. 2.1, 95-96 of Sec. 2.4,
107 of Sec. 2.5).
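The two one-dimensional rootfinding methods contrasted in this lecture can be sketched side by side on f(x) = x^2 - 2: bisection always converges (linearly) once the root is bracketed, while Newton's method converges quadratically when it converges at all. A Python illustration (the course uses MATLAB; function names and tolerances are mine):

```python
# Bisection vs Newton's method on f(x) = x^2 - 2, root sqrt(2).
def bisection(f, a, b, tol=1e-10):
    """Assumes f(a) and f(b) have opposite signs (root bracketed)."""
    while b - a > tol:
        m = (a + b) / 2
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return (a + b) / 2

def newton(f, df, x0, tol=1e-12, maxit=50):
    x = x0
    for _ in range(maxit):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

f = lambda x: x * x - 2.0
root_b = bisection(f, 1.0, 2.0)
root_n = newton(f, lambda x: 2.0 * x, 1.0)
print(root_b, root_n)   # both ~ 1.41421356
```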
- Lecture 22 (Wed, Mar 9):
Exam 1
(bring a calculator and a letter-size sheet of paper with info
written on both sides of it).
- Lecture 23 (Fri, Mar 11):
Rootfinding (cont.):
derivation of the recursive procedure of the secant method in one dimension;
Taylor expansions of vector-valued functions of several variables
(up to linear terms), Jacobian,
derivation of the Newton method in n dimensions
(Sec. 3.10).
- Lecture 24 (Mon, Mar 21):
Initial value problems (IVPs) for ordinary differential equations
(ODEs) - Key numerical concepts and elements of continuous theory:
IVP=ODE+IC (IC=initial condition(s)),
key issues - existence, uniqueness, stability;
an example of an IVP without solution,
an example of an IVP with infinitely many solutions,
Lipschitz functions, finding the Lipschitz constant
from the partial derivative,
converting an ODE of order n to a system
of n first-order ODEs
(pages 540-544 of Sec. 7.1).
- Lecture 25 (Wed, Mar 23):
IVPs for ODEs
- Key numerical concepts and elements of continuous theory (cont.):
theorem of existence and uniqueness
of solutions of IVPs (only a sketch);
numerical treatment of IVPs - mesh, time stepping,
y_i = y(t_i)
versus w_i,
classification of the methods - one-step versus multistep methods,
explicit versus implicit methods, examples;
fundamental numerical concepts: local truncation error,
consistency of a method, global discretization error,
convergent methods, stability
(pages 537-541, 544 of Sec. 7.1).
Euler's method:
derivation by using Taylor's theorem
(pages 546-547 of Sec. 7.2).
- Lecture 26 (Fri, Mar 25):
Euler's method (cont.):
derivation by converting the IVP for an ODE
to an integral equation and approximating the integrand,
Euler's method for systems of equations,
the local truncation error is O(h),
global discretization error (only the statement
of the Theorem on page 551), numerical example in MATLAB
(pages 548-554 of Sec. 7.2).
- Lecture 27 (Mon, Mar 28):
Euler's method (cont.):
numerical investigation of the rate of convergence
of Euler's method as h tends to 0.
Higher-order one-step methods - Taylor methods:
derivation
(pages 560-561 of Sec. 7.3).
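The numerical investigation of the rate of convergence of Euler's method from this lecture can be sketched on the test problem y′ = y, y(0) = 1, whose exact solution at t = 1 is e: since the global error is O(h), halving h should roughly halve the error. The course codes are in MATLAB; here is an equivalent Python sketch (function name euler is mine):

```python
# Euler's method w_{i+1} = w_i + h f(t_i, w_i), applied to y' = y, y(0) = 1.
import math

def euler(f, t0, y0, h, nsteps):
    t, w = t0, y0
    for _ in range(nsteps):
        w = w + h * f(t, w)
        t = t + h
    return w

f = lambda t, y: y
for n in (10, 20, 40):
    err = abs(euler(f, 0.0, 1.0, 1.0 / n, n) - math.e)
    print(f"h = 1/{n}: error = {err:.5f}")
```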
- Lecture 28 (Wed, Mar 30):
Higher-order one-step methods - Taylor methods (cont.):
examples (pages 562-565 of Sec. 7.3).
Runge-Kutta (RK) methods:
preparation - Taylor's Theorem for functions of two
(and n) variables;
idea of RK methods, derivation of a RK2 method
and discussion of its geometric meaning
(pages 570-571 of Sec. 7.4).
- Lecture 29 (Fri, Apr 1):
Runge-Kutta (RK) methods (cont.):
derivation of a family of RK2 methods
- in particular, the modified Euler method
(derived in the previous lecture),
Heun method and its geometric meaning,
optimal RK2 method;
classical RK4 method
(pages 570-575 of Sec. 7.4).
- Lecture 30 (Mon, Apr 4):
Runge-Kutta (RK) methods (cont.):
general explicit RK methods - RK tableau, RK matrix,
RK weights, RK nodes
(pages 576-577 of Sec. 7.4).
Lagrange form of the interpolating polynomial:
derivation of the formula,
error of Lagrange interpolation
(pages 341-344, 347-348 of Sec. 5.1).
- Lecture 31 (Wed, Apr 6):
Multistep methods:
Adams-Bashforth (AB) methods, derivation of the AB2 method;
Mean Value Theorem (MVT) for derivatives and for integrals,
Weighted MVT for integrals,
proof that the local truncation error of the AB2 method
is O(h^2)
(pages 583-585 of Sec. 7.5).
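The two-step Adams-Bashforth method needs a starter value for w_1; a common choice (used in the sketch below, which is my assumption, not prescribed by the lecture) is a single Euler step. A Python illustration on y′ = y, y(0) = 1 (the course codes are in MATLAB; the function name ab2 is mine):

```python
# Two-step Adams-Bashforth:
#   w_{i+1} = w_i + (h/2) * (3 f(t_i, w_i) - f(t_{i-1}, w_{i-1})),
# started with one Euler step.
import math

def ab2(f, t0, y0, h, nsteps):
    w_prev, w = y0, y0 + h * f(t0, y0)   # starter value from Euler's method
    t_prev, t = t0, t0 + h
    for _ in range(nsteps - 1):
        w_next = w + (h / 2) * (3 * f(t, w) - f(t_prev, w_prev))
        t_prev, w_prev = t, w
        t, w = t + h, w_next
    return w

f = lambda t, y: y
err = abs(ab2(f, 0.0, 1.0, 0.05, 20) - math.e)
print(err)   # small: AB2 is a second-order method
```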
- Lecture 32 (Fri, Apr 8):
Multistep methods (cont.):
idea of Adams-Moulton (AM) methods,
derivation of the 2-step AM method,
the 2-step AM method is of order 3 (without proof);
predictor-corrector schemes
(pages 587-592 of Sec. 7.5).
- Lecture 33 (Mon, Apr 11):
Convergence and stability analysis:
definition of consistent, convergent, and stable
methods for solving IVPs for ODEs;
for 1-step methods consistency is equivalent to convergence;
consistency conditions for m-step methods,
check that the two-step AM method derived in Lecture 32
is consistent
(pages 598-600, 602-603 of Sec. 7.6).
- Lecture 34 (Wed, Apr 13):
Convergence and stability analysis (cont.):
stability of m-step methods:
characteristic equation, root condition,
strong and weak stability;
for a linear multistep method
convergence is equivalent to (consistency + stability),
numerical examples
(pages 603-604, 606 of Sec. 7.6).
- Lecture 35 (Fri, Apr 15):
Error control and variable step size algorithms:
changing the stepsize h of 1-step methods
with local truncation error O(h^p),
the Runge-Kutta-Fehlberg 4th(5th)-order method (RKF45)
(pages 608-613 of Sec. 7.7).
- Lecture 36 (Mon, Apr 18):
Error control and variable step size algorithms (cont.):
changing the stepsize h of 1-step methods - the Milne device
(pages 615-619 of Sec. 7.7).
Systems of equations and higher-order equations:
solving systems of equations versus solving a single equation,
converting a higher-order equation into a system of
first-order equations; ideas about dealing with equations
with very different time scales
(pages 623-626, 628-629 of Sec. 7.8).
- Lecture 37 (Wed, Apr 20):
Derivation of the heat equation:
Fourier's law of heat conduction;
derivation of the heat equation from
the conservation of heat energy.
- Lecture 38 (Fri, Apr 22):
Derivation of the heat equation (cont.):
Dirichlet, Neumann, and Robin boundary conditions (BCs),
initial condition (IC).
Generalities on partial differential equations of importance in
physics:
parabolic equations (heat/diffusion equation),
elliptic equations (Laplace equation, interpretation
as an equation describing stationary temperature distribution),
hyperbolic equations (wave equation).
- Lecture 39 (Mon, Apr 25):
Exam 2
(bring a calculator and two letter-size sheets of paper with info
written on both sides of it).
- Lecture 40 (Wed, Apr 27):
Numerical differentiation:
idea - the derivative of the function at a point
is approximately equal to the derivative of a Lagrange
interpolating polynomial approximating the function in a neighborhood
of this point; derivation of the "forward-difference formula" (with
error O(h)) (pages 438-440 of Sec. 6.2).
- Lecture 41 (Fri, Apr 29):
Numerical differentiation (cont.):
and the "three-point formula" (with error O(h2))
for the first derivative, and a formula for the second derivative
with error O(h2
(pages 441-443 of Sec. 6.2).
Finite difference (FD) method, part 1 - the linear problem with
Dirichlet BCs:
set-up, derivation of the FD approximation, matrix formulation,
solvability of the discrete equations, diagonally dominant matrices,
maximum absolute error and root mean square error, empirical illustration that the
error of the method is O(h^2) (pages 660-667 of Sec. 8.1).
- Lecture 42 (Mon, May 2):
Finite difference (FD) method, part 2 - the linear problem with
non-Dirichlet BCs:
ways to deal with the non-Dirichlet BCs:
(1) use an alternative O(h^2) formula for the derivative
at the endpoint (good accuracy, but destroys the tridiagonal structure
of the linear system),
(2) use an O(h) formula for the derivative at the endpoint
(preserves the tridiagonal structure, but the accuracy is lower),
(3) add "fictitious nodes" to the computational grid
(preserves the tridiagonal structure and maintains the
O(h^2) accuracy)
(pages 673-676 of Sec. 8.2).
The shooting method, part 1 - linear BVPs:
general idea of the shooting method for nonlinear BVPs;
tricks for linear BVPs: representing the general solution
of the nonhomogeneous ODE as a sum of the general solution
of the homogeneous ODE and a particular solution of the
nonhomogeneous ODE (pages 713-714 of Sec. 8.5, page 699 of Sec. 8.4).
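The FD method for a linear BVP with Dirichlet BCs, together with the tridiagonal Crout-style solve from Sec. 3.7, can be sketched end to end on a test problem with known exact solution. A Python sketch (the course codes are in MATLAB; the test problem u″ = -π² sin(πx), u(0) = u(1) = 0 with exact solution u = sin(πx), and the function name thomas, are my choices):

```python
# Finite differences for u'' = f(x), u(0) = u(1) = 0, with the centered
# O(h^2) approximation (w_{i-1} - 2 w_i + w_{i+1})/h^2 = f(x_i),
# solved by the tridiagonal (Thomas) algorithm.
import math

def thomas(a, b, c, d):
    """Solve a tridiagonal system; a = sub-, b = main, c = super-diagonal
    (a[0] is unused). Forward elimination, then back substitution."""
    n = len(d)
    bp, cp, dp = b[:], c[:], d[:]
    for i in range(1, n):
        m = a[i] / bp[i - 1]
        bp[i] -= m * cp[i - 1]
        dp[i] -= m * dp[i - 1]
    x = [0.0] * n
    x[-1] = dp[-1] / bp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (dp[i] - cp[i] * x[i + 1]) / bp[i]
    return x

n = 50                                     # number of subintervals
h = 1.0 / n
xs = [i * h for i in range(1, n)]          # interior grid points
a = [1.0] * (n - 1)
b = [-2.0] * (n - 1)
c = [1.0] * (n - 1)
d = [h * h * (-math.pi ** 2) * math.sin(math.pi * x) for x in xs]
w = thomas(a, b, c, d)
err = max(abs(w[i] - math.sin(math.pi * xs[i])) for i in range(n - 1))
print(f"max error = {err:.2e}")            # small, consistent with O(h^2)
```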
- Lecture 43 (Wed, May 4):
The shooting method, part 1 - linear BVPs (cont.):
computing the solution as a linear combination
of the solutions of two IVPs,
determining the coefficients in this linear combination
in order to solve the original BVP
(pages 699-701 of Sec. 8.4).
The Poisson equation on a rectangular domain, part 1
- Dirichlet BCs:
computational grid, boundary and interior grid points,
discretizing the PDE, stencil (computational template),
5-point star approximation for the Laplacian;
organizing the equations, lexicographic ordering,
general structure of the coefficient matrix of the linear system
Aw=b
(pages 730-736 of Sec. 9.1).
- Lecture 44 (Fri, May 6):
The Poisson equation on a rectangular domain, part 1
- Dirichlet BCs (cont.):
the properties of the matrix A (symmetric positive
definite) guarantee the existence and uniqueness of the solution
of the system Aw=b
(pages 736-737 of Sec. 9.1).
The Poisson equation on a rectangular domain, part 2
- non-Dirichlet BCs:
introducing fictitious nodes to deal with the
non-Dirichlet BCs, deriving the linear system
for the w_i's
(pages 742-745 of Sect. 9.2).
The heat equation with Dirichlet BCs:
setting up the problem on finite spatial interval,
boundary and initial conditions,
spatial discretization - semidiscrete approximation
v_i(t) ≈ u(x_i,t),
initial conditions for the functions
v_i(t),
solving the system for v_i(t)
by using methods for solving IVPs;
alternative approach - spatio-temporal discretization
w_i^(n) ≈ u(x_i,t_n)
(pages 805-808 of Sec. 10.1).
Grading:
Your grade will be determined by your performance
on the following coursework:
Coursework       Weight
Homework         35%
Quizzes          10%
Midterm exam 1   15%
Midterm exam 2   15%
Final exam       25%
Taking the class as MATH 5093:
Students taking the class for graduate credit
(i.e., as MATH 5093)
will have to work on additional problems in the homework
and in the exams, and will have to complete a programming project
in addition to the coursework assigned to the other students
in the class.
Homework: Homework assignments will be set
regularly throughout the semester;
the assignments will be posted on this web-site.
Each homework will consist of several problems,
of which some pseudo-randomly chosen problems will be graded.
Your lowest homework grade will be dropped.
All hand-in assignments will carry a specific due date
and must be submitted at the start of class on the due date.
All homework should be written on 8.5"×11" paper
with your name clearly written, and should be stapled.
The order of the text in your write-up should be exactly the
same as the order of the problems in the homework assignment
(in particular, the computer outputs should not be attached at the
end). No late homework will be accepted!
Exams:
There will be two in-class midterm exams and a comprehensive in-class final.
All tests must be taken at the scheduled times,
except in extraordinary circumstances.
Please do not arrange travel plans that will prevent you
from taking any of the exams at the scheduled time.
Tentative date for the midterm exams:
February 21 (Monday) and April 4 (Monday).
The final is scheduled for May 13 (Friday), 1:30-3:30 p.m.
Attendance:
You are required to attend class on those days when an
examination is being given; attendance during other class periods is also
strongly encouraged.
You are fully responsible for the
material covered in each class, whether or not you attend.
Make-ups for missed exams will be given only if
there is a compelling reason for the absence,
which I know about beforehand
and can document independently of your testimony
(for example, via a note or phone call from a
doctor or a parent).
Technology:
The students are expected to write and execute programs
in MATLAB - a high-level programming language and interactive
environment.
Previous knowledge of MATLAB is not assumed,
but the students are expected to learn quickly the basics
of MATLAB in order to complete the homework assignments and projects.
For all computations required on in-class exams
a simple (non-graphing) calculator will be sufficient.
Occasionally we may also use Mathematica.
MATLAB and Mathematica are available on the computers in the
University's computer labs. MATLAB is abundantly documented
- see the links at "MATLAB tutorials" below.
Academic calendar for
Spring 2011.
Course schedule for
Spring 2011.
Policy on W/I Grades:
Through February 25 (Friday), you can withdraw
from the course with an automatic "W".
In addition, from February 28 (Monday) to May 6 (Friday),
you may withdraw and receive a "W" or "F"
according to your standing in the class.
Dropping after April 4 (Monday) requires a petition to the Dean.
(Such petitions are not often granted.
Furthermore, even if the petition
is granted, I will give you a grade
of "Withdrawn Failing" if you are
indeed failing at the time of your petition.)
Please check the dates in the Academic Calendar!
The grade of "I" (Incomplete)
is not intended to serve as
a benign substitute for the grade of "F".
I only give the "I" grade
if a student has completed the majority
of the work in the course
(for example everything except the final exam),
the coursework cannot be completed
because of compelling and verifiable problems
beyond the student's control, and the student expresses a clear
intention of making up the missed work as soon as possible.
Academic Misconduct: All cases of suspected academic misconduct will
be referred to the Dean of the College of Arts and Sciences for prosecution
under the University's Academic Misconduct Code. The penalties can be quite
severe. Don't do it!
For details on the University's
policies concerning academic integrity see the
Student's Guide to Academic Integrity
at the
Academic Integrity web-site.
For information on your rights to appeal charges
of academic misconduct consult the
Academic Misconduct Code.
Students are also bound by the provisions of the
OU Student Code.
Students With Disabilities:
The University of Oklahoma is committed to providing reasonable accommodation
for all students with disabilities. Students with disabilities who require
accommodations in this course are requested to speak with the instructor
as early in the semester as possible. Students with disabilities must be
registered with the Office of Disability Services prior to receiving
accommodations in this course. The Office of Disability Services is located
in Goddard Health Center, Suite 166: phone 405-325-3852 or TDD only
405-325-4173.
MATLAB tutorials:
Good to know: