MATH 4093/5093 - Applied Numerical Methods - Fall 2010
Thu 12:30-3:10 p.m., OU-Tulsa, Learning Center 220
Instructor:
Nikola Petrov, 802 PHSC, (405)325-4316, npetrov AT math.ou.edu.
Prerequisites:
Multivariable Calculus at the level of MATH 2443, Ordinary
Differential Equations at the level of MATH 3113 or MATH 3413,
Linear Algebra at the level of MATH 3333 or 4373,
or permission of the instructor.
Previous knowledge of Numerical Analysis is not assumed.
Course catalog description:
Numerical treatment of ordinary differential equations,
numerical linear algebra and applications,
basic numerical methods for partial differential equations.
No student can earn credit for both 4093 and 5093. (Alt. Sp.)
Course Content
(here is a link
to a more detailed description):
-
Numerical linear algebra.
-
Numerical solutions of initial-value problems for ordinary
differential equations.
-
Numerical approximation of functions.
-
Numerical solutions of nonlinear systems of equations.
-
Numerical solutions of boundary-value problems for
ordinary differential equations.
-
Numerical solutions of partial differential equations.
Text:
B. Bradie.
A Friendly Introduction to Numerical Analysis.
Pearson/Prentice Hall, 2006, ISBN 0-13-012054-0.
Homework:
-
Homework 1, due Thursday,
September 2.
-
Homework 2, due Friday, September
10.
-
Homework 3, due Friday, September
17.
-
Homework 4, due Monday, October 4.
-
Homework 5, NOT due Friday, October 8.
The MATLAB code
newton.m
needed for Homework 5.
-
Homework 6, due Monday, November 1.
-
Homework 7, due Friday, November 12.
-
Homework 8, due Monday, November 22.
Content of the lectures:
-
Lecture 1 (Thu, Aug 26):
Review of calculus: notations, functions,
sequences as functions whose domain is the natural numbers,
continuity of a function (expressed through limits of sequences),
relationship between continuity and differentiability of a function,
mean value theorem (MVT),
notation C^n([a,b]),
definition of the Riemann integral over a finite interval
[a,b] expressed as a limit of Riemann sums,
Taylor's theorem, Taylor polynomial, remainder term
(truncation error), examples - Taylor expansions of
exp(x), sin(x), cos(x), ln(1+x)
around x_0=0.
Initial value problems (IVPs) for ordinary differential equations
(ODEs) - Key numerical concepts and elements of continuous theory:
IVP=ODE+IC (IC=initial condition(s)),
key issues - existence, uniqueness, stability;
an example of an IVP without solution,
an example of an IVP with infinitely many solutions,
Lipschitz functions, theorem of existence and uniqueness
of solutions of IVPs;
numerical treatment of IVPs - mesh, time stepping,
y_i = y(t_i)
versus w_i,
a simple-minded derivation of a difference equation
for the w_i's - Euler's method;
classification of the methods - one-step versus multistep methods,
explicit versus implicit methods, examples
(Sec. 7.1).
-
Lecture 2 (Thu, Sep 2):
IVPs for ODEs - key numerical concepts and elements of continuous
theory (cont.):
Fundamental numerical concepts: local truncation error,
consistency of a method, global discretization error,
convergent methods, stability
(Sec. 7.1).
Euler's method:
derivation using Taylor's theorem, derivation using integrals,
analysis of Euler's method, numerical example (using Matlab),
example of a numerical investigation of the rate of convergence
of Euler's method as h tends to 0
(Sec. 7.2).
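For illustration, a minimal Python sketch of Euler's method (class demonstrations use MATLAB; the test problem y' = y, y(0) = 1, with exact solution e^t, is an arbitrary choice made here):

```python
import math

def euler(f, t0, y0, h, n):
    """Euler's method: advance n steps of size h from (t0, y0)."""
    t, w = t0, y0
    ts, ws = [t], [w]
    for _ in range(n):
        w = w + h * f(t, w)    # w_{i+1} = w_i + h f(t_i, w_i)
        t = t + h
        ts.append(t)
        ws.append(w)
    return ts, ws

# integrate y' = y from t = 0 to t = 1 with h = 0.01
ts, ws = euler(lambda t, y: y, 0.0, 1.0, 0.01, 100)
print(ws[-1], math.e)   # w_100 approximates y(1) = e with O(h) error
```

Halving h roughly halves the final error, which is one way to observe the O(h) rate numerically.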
Higher-order one-step methods - Taylor methods:
derivation, local truncation error,
big O notation for rate of convergence, examples,
a numerical example of application of the Taylor method
of order 2
(Sec. 7.3).
-
Lecture 3 (Thu, Sep 9):
Runge-Kutta (RK) methods:
idea, derivation of second-order RK methods,
classical fourth-order RK method, general form
of RK methods (Sec. 7.4).
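The classical fourth-order RK scheme fits in a few lines; a Python sketch (the test problem y' = y is again chosen only for illustration):

```python
import math

def rk4_step(f, t, w, h):
    """One step of the classical fourth-order Runge-Kutta method."""
    k1 = f(t, w)
    k2 = f(t + h/2, w + h/2 * k1)
    k3 = f(t + h/2, w + h/2 * k2)
    k4 = f(t + h, w + h * k3)
    return w + h/6 * (k1 + 2*k2 + 2*k3 + k4)

t, w, h = 0.0, 1.0, 0.1
for _ in range(10):               # integrate y' = y from t = 0 to t = 1
    w = rk4_step(lambda s, y: y, t, w, h)
    t += h
print(abs(w - math.e))            # O(h^4): small error even with h = 0.1
```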
Lagrange form of the interpolating polynomial:
general set-up of interpolation,
motivation from the case of linear interpolation
based on two points,
Lagrange form of the interpolating polynomial
based on function values at (n+1) points,
interpolation error (Sec. 5.1).
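The Lagrange form can be evaluated directly from its definition; a Python sketch (the data points are a made-up example):

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange form of the interpolating polynomial at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        L = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                L *= (x - xj) / (xi - xj)   # i-th Lagrange basis polynomial
        total += yi * L
    return total

# interpolating y = x^2 at three nodes reproduces it exactly
print(lagrange([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5))  # → 2.25
```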
Multistep methods:
general form of an m-step method,
idea of Adams-Bashforth (AB) methods,
derivation of the 2-step AB method (AB2)
(Sec. 7.5).
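A Python sketch of the 2-step AB method. Being a multistep method, AB2 needs two starting values; for simplicity this sketch bootstraps the second one with a single Euler step (a one-step method of order at least 2 would be the more careful choice):

```python
import math

def ab2(f, t0, y0, h, n):
    """2-step Adams-Bashforth; the second starting value is produced
    here by one Euler step (a deliberate simplification)."""
    ts = [t0, t0 + h]
    ws = [y0, y0 + h * f(t0, y0)]
    for i in range(1, n):
        # w_{i+1} = w_i + (h/2) * (3 f(t_i, w_i) - f(t_{i-1}, w_{i-1}))
        w = ws[i] + h/2 * (3 * f(ts[i], ws[i]) - f(ts[i-1], ws[i-1]))
        ws.append(w)
        ts.append(ts[i] + h)
    return ts, ws

ts, ws = ab2(lambda t, y: y, 0.0, 1.0, 0.001, 1000)
print(abs(ws[-1] - math.e))   # second-order accuracy on y' = y
```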
-
Lecture 4 (Thu, Sep 16):
Multistep methods (cont.):
proof that the 2-step AB method is of order 2;
idea of Adams-Moulton (AM) methods,
derivation of the 2-step AM method,
the 2-step AM method is of order 3 (without proof);
predictor-corrector schemes
(Sec. 7.5).
Convergence and stability analysis:
consistency, convergence, stability;
(consistency)=(convergence) for 1-step methods;
consistency conditions for m-step methods;
stability of m-step methods:
characteristic equation, root condition,
strong versus weak stability
(Sec. 7.6).
-
Lecture 5 (Thu, Sep 23):
Convergence and stability analysis (cont.):
strong versus weak stability - a comparison
of the 2-step AB method and the leapfrog method
(Sec. 7.6).
Error control and variable step size algorithms:
idea and implementation, changing the stepsize h,
Runge-Kutta-Fehlberg 4th-order/5th-order method (RKF45)
(Sec. 7.7).
The bisection method:
idea - coming from the Intermediate Value Theorem,
implementation, stopping criteria,
rate of convergence O(2^-n),
advantages (always converges, the root is always bracketed),
disadvantages (slow convergence)
(Sec. 2.1).
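A Python sketch of bisection (the test function x^2 - 2 on [1, 2] is an arbitrary example; its root is sqrt(2)):

```python
def bisect(f, a, b, tol=1e-10):
    """Bisection; assumes f(a) and f(b) have opposite signs."""
    fa = f(a)
    while (b - a) / 2 > tol:
        m = (a + b) / 2
        fm = f(m)
        if fa * fm <= 0:      # sign change in [a, m]
            b = m
        else:                 # sign change in [m, b]
            a, fa = m, fm
    return (a + b) / 2

# f(x) = x^2 - 2 changes sign on [1, 2]; the root is sqrt(2)
print(bisect(lambda x: x*x - 2, 1.0, 2.0))
```

The root stays bracketed throughout, and each pass halves the interval, which is exactly the O(2^-n) rate mentioned above.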
Fixed point iteration (FPI):
definition of a fixed point of a function,
fixed point iteration,
order of convergence α and asymptotic error constant λ
of a convergent sequence,
linear, quadratic, cubic,... convergence,
illustration of different orders of convergence,
finding α and λ practically,
graphical representation of FPI,
existence and uniqueness of the FP
(Theorem on page 84 of the book),
geometric idea of the proof,
if a sequence {p_n}
generated by
p_n = g(p_{n-1})
converges, then its limit p
is a fixed point of g: g(p)=p
(Sec. 2.3).
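A Python sketch of fixed point iteration. The choice of g here is my own example: g(p) = (p + 2/p)/2 has the fixed point sqrt(2), and since g'(sqrt(2)) = 0 the convergence is faster than linear (this g is Newton's method for x^2 - 2 in disguise):

```python
def fixed_point(g, p0, tol=1e-12, max_iter=100):
    """Iterate p_n = g(p_{n-1}) until successive iterates agree to tol."""
    p = p0
    for _ in range(max_iter):
        p_next = g(p)
        if abs(p_next - p) < tol:
            return p_next
        p = p_next
    return p

# g(p) = (p + 2/p)/2 has the fixed point sqrt(2); g'(sqrt(2)) = 0,
# so convergence is faster than linear (quadratic, in fact)
print(fixed_point(lambda p: (p + 2/p) / 2, 1.0))
```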
-
Lecture 6 (Thu, Sep 30):
Fixed point iteration (cont.):
linear convergence in the case g'(p)≠0
(Theorem on page 90, with proof),
example - finding 12^(1/3) by FPI
(Sec. 2.3).
Newton's method:
idea, derivation, multiplicity of a zero, simple
and multiple roots, computing the multiplicity of a zero
through derivatives (Theorem on page 56, without proof),
examples of zeros of different multiplicities,
finding the multiplicity of a zero by using the Taylor series,
Newton's method converges at least quadratically
for simple zeros (Theorem on page 98, without proof),
example of application of Newton's method
for computing 12^(1/3)
(Sec. 2.4 and text on pages 56-57).
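A Python sketch of Newton's method, using the lecture's own example of computing 12^(1/3) as the simple zero of f(x) = x^3 - 12:

```python
def newton(f, fprime, p0, tol=1e-12, max_iter=50):
    """Newton's method: p_n = p_{n-1} - f(p_{n-1}) / f'(p_{n-1})."""
    p = p0
    for _ in range(max_iter):
        p_next = p - f(p) / fprime(p)
        if abs(p_next - p) < tol:
            return p_next
        p = p_next
    return p

# 12^(1/3) as the simple zero of f(x) = x^3 - 12
root = newton(lambda x: x**3 - 12, lambda x: 3 * x**2, 2.0)
print(root)
```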
The secant method:
goal - avoiding the computation of derivatives,
idea - replace the tangent line by a secant line,
the method converges with order of convergence
α = (1+√5)/2 ≈ 1.62
(Sec. 2.5).
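The same root computed by the secant method; a Python sketch:

```python
def secant(f, p0, p1, tol=1e-12, max_iter=50):
    """Secant method: the derivative in Newton's method is replaced
    by the slope of the secant line through the last two iterates."""
    f0, f1 = f(p0), f(p1)
    for _ in range(max_iter):
        if f1 == f0:                      # guard against a flat secant
            break
        p2 = p1 - f1 * (p1 - p0) / (f1 - f0)
        if abs(p2 - p1) < tol:
            return p2
        p0, f0, p1, f1 = p1, f1, p2, f(p2)
    return p1

print(secant(lambda x: x**3 - 12, 2.0, 3.0))  # again ≈ 12^(1/3)
```

Note that each iteration costs only one new evaluation of f and no derivatives, which is the whole point of the method.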
Linear algebra review:
matrix, addition and scalar multiplication of matrices,
transpose of a matrix, inverse matrix, invertible (nonsingular)
and noninvertible (singular) matrices,
results about nonsingular matrices (Theorem on page 145),
determinant of a square matrix, properties of determinants;
relation between invertibility of a matrix, the determinant
of the matrix and the solutions of a system of linear equations
(Sec. 3.0).
Gaussian elimination:
augmented matrix of a linear system of equations,
upper triangular matrices, Gaussian elimination,
elementary row operations,
example of a Gaussian elimination followed by a back substitution,
operation count
(Sec. 3.1).
-
Lecture 7 (Thu, Oct 7):
Gaussian elimination (cont.):
Gauss-Jordan elimination versus (Gaussian elimination + back
substitution)
(Sec. 3.1).
Pivoting strategies:
partial pivoting, using a row vector r to keep track
of the interchanges of the rows of the augmented matrix
(A|b) (i.e., of changes in the order
of the equations in the linear system);
scaled partial pivoting, scale vector s
(Sec. 3.2).
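A Python sketch of Gaussian elimination with partial pivoting followed by back substitution (for brevity it swaps rows directly rather than keeping a pivot vector; the 2x2 system is a made-up example):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting, then back substitution.
    A (a list of rows) and b are modified in place."""
    n = len(A)
    for k in range(n - 1):
        # partial pivoting: move the largest |A[i][k]|, i >= k, into row k
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):        # back substitution
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

print(solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))  # → [0.8, 1.4] (approx.)
```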
Vector and matrix norms:
linear space, a (vector) norm on R^n,
l^p-norms,
Cauchy-Schwarz inequality,
proof that the l^2-norm is indeed a norm,
convergence of sequences of vectors in R^n
equipped with a particular norm, equivalent norms,
proof that
the l^1- and l^∞-norms
are equivalent, pictures of the unit balls in R^2
endowed with the l^2-norm
and with the l^∞-norm,
equivalence of norms in terms of inscribing "balls";
matrix norms, operator norm
(Sec. 3.3).
-
Lecture 8 (Thu, Oct 14):
Vector and matrix norms (cont.):
expression for the || · ||_∞ operator norm;
eigenvectors, eigenvalues, spectrum, spectral radius ρ(A),
characteristic polynomial of a matrix A,
expression for the || · ||_2 operator norm,
the spectral radius ρ(A) is the greatest lower bound
on all natural norms of a matrix
(Sec. 3.3).
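The expression for the || · ||_∞ operator norm (the maximum absolute row sum of the matrix) is easy to check numerically; a short Python sketch with a made-up matrix:

```python
def inf_norm(A):
    """Operator infinity-norm of a matrix: the maximum over rows of the
    sum of the absolute values of the entries in that row."""
    return max(sum(abs(a) for a in row) for row in A)

print(inf_norm([[1.0, -2.0], [3.0, 4.0]]))  # → 7.0 (second row: |3| + |4|)
```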
Error estimates and condition number:
error e and residual vector r of an approximate
solution of a linear system Ax=b,
an example of a small residual but large error;
a theorem giving lower and upper bounds
on the error ||e|| and the relative error
||e||/||x|| in terms of the residual
||r|| and the relative residual
||r||/||b||;
condition number κ(A) of a square matrix;
perturbations to A and b,
an upper bound on the relative error
||δx||/||x||
in terms of the condition number κ(A)
and the relative perturbations
||δA||/||A|| and
||δb||/||b||;
briefly on the rounding errors introduced by Gaussian elimination;
an example of ill-conditioned matrices - Hilbert matrices
(Sec. 3.4).
-
Lecture 9 (Thu, Oct 21):
LU decomposition:
upper and lower triangular matrices,
the product of two upper (lower) triangular matrices
is an upper (lower) triangular matrix;
can every square matrix A be written as A=LU?;
is the decomposition A=LU unique?
- yes, up to a rescaling by a diagonal matrix;
obtaining an LU decomposition
by Gaussian elimination if no row interchanges are necessary;
an LU decomposition
by Gaussian elimination with row interchanges
- permutation matrix and row vector, PA=LU;
using LU decomposition to solve many linear systems
with identical coefficient matrix
(Sec. 3.5).
Direct factorization:
Doolittle and Crout factorizations,
deriving the algorithm for the Crout factorization,
possibility to perform Crout factorization
without additional memory space
(Sec. 3.6).
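A Python sketch of the Doolittle factorization (unit lower-triangular L), assuming no row interchanges are needed; the 2x2 matrix is a made-up example:

```python
def lu_doolittle(A):
    """Doolittle factorization A = LU with unit lower-triangular L,
    assuming all pivots are nonzero (no row interchanges)."""
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):       # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):   # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

L, U = lu_doolittle([[4.0, 3.0], [6.0, 3.0]])
print(L)  # → [[1.0, 0.0], [1.5, 1.0]]
print(U)  # → [[4.0, 3.0], [0.0, -1.5]]
```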
Special matrices:
symmetric positive definite matrices - definition and properties,
Cholesky decomposition A = LL^T;
tridiagonal matrices, LU decomposition of a tridiagonal
matrix, computational cost of solving a tridiagonal system
(Sec. 3.7).
Iterative techniques for linear systems: basic concepts and
methods:
deriving an iterative procedure for solving a linear system,
conditions for convergence, rate of convergence;
(M,N)-splitting of a square matrix;
Jacobi, Gauss-Seidel, and successive overrelaxation (SOR) methods
(Sec. 3.8).
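A Python sketch of the Jacobi method (the 2x2 diagonally dominant system is a made-up example; strict diagonal dominance is one sufficient condition for convergence):

```python
def jacobi(A, b, x0, n_iter):
    """Jacobi iteration: each sweep solves equation i for the i-th
    unknown using only the previous iterate."""
    n = len(A)
    x = list(x0)
    for _ in range(n_iter):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# diagonally dominant system 4x + y = 9, x + 3y = 7; solution (20/11, 19/11)
print(jacobi([[4.0, 1.0], [1.0, 3.0]], [9.0, 7.0], [0.0, 0.0], 50))
```

Gauss-Seidel differs only in using the already-updated components within a sweep, and SOR adds a relaxation parameter on top of that.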
-
Lecture 10 (Thu, Oct 28):
Iterative techniques for linear systems: conjugate gradient
method:
vectors as functions from {1,2,...,n} to R,
functions and functionals,
quadratic functionals
f(x) = (1/2) x^T A x − b^T x + c
with a symmetric positive definite matrix A,
connection between the minimizer of f
and the solution of the linear system Ax=b,
idea of the conjugate gradient method,
choosing the step size and the search direction,
A-conjugate pair of vectors,
idea of preconditioning
(Sec. 3.9).
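A Python sketch of the (unpreconditioned) conjugate gradient method; the residual plays the role of the negative gradient of the quadratic functional f, and successive search directions are kept A-conjugate. The 2x2 SPD system is a made-up example:

```python
def cg(A, b, x0, tol=1e-12, max_iter=100):
    """Conjugate gradient for Ax = b, with A symmetric positive definite."""
    n = len(b)
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    x = list(x0)
    r = [bi - Axi for bi, Axi in zip(b, matvec(x))]  # residual = -grad f(x)
    d = list(r)                                      # initial search direction
    rr = dot(r, r)
    for _ in range(max_iter):
        if rr ** 0.5 < tol:
            break
        Ad = matvec(d)
        alpha = rr / dot(d, Ad)                      # exact line-search step size
        x = [xi + alpha * di for xi, di in zip(x, d)]
        r = [ri - alpha * Adi for ri, Adi in zip(r, Ad)]
        rr_new = dot(r, r)
        # the next direction is A-conjugate to the previous ones
        d = [ri + (rr_new / rr) * di for ri, di in zip(r, d)]
        rr = rr_new
    return x

print(cg([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0], [0.0, 0.0]))  # ≈ [1/11, 7/11]
```

In exact arithmetic CG terminates in at most n steps; on this 2x2 system it reaches the solution after two iterations.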
Nonlinear systems of equations:
Newton method in n dimensions,
Jacobian matrix, quasi-Newton methods
(Sec. 3.10).
Cubic spline interpolation:
partition of an interval,
a cubic spline interpolant of a function
relative to a partition,
conditions on the functions s_j(x)
(the restrictions of s(x) to the intervals
[x_j, x_{j+1}]),
not-a-knot, clamped, and free boundary conditions,
deriving a tridiagonal linear system for the coefficients
c_j, error in cubic spline interpolation
(Sec. 5.6).
-
Lecture 11 (Thu, Nov 4):
Derivation of the heat equation:
derivation of the equation from the conservation of the heat energy,
Dirichlet, Neumann, and Robin boundary conditions (BCs),
initial condition (IC).
Numerical differentiation:
idea - the derivative of the function at a point is approximately equal
to the derivative of a Lagrange interpolating polynomial
approximating the function in a neighborhood of this point;
derivation of the "forward-difference formula" (with error
O(h)) and the "three-point formula" (with error
O(h^2)) for the first derivative,
and a formula for the second derivative with error
O(h^2)
(Sec. 6.2).
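The two first-derivative formulas compared side by side; a Python sketch (differentiating sin at x = 1 is an arbitrary test case):

```python
import math

def forward_diff(f, x, h):
    """Forward-difference formula, error O(h)."""
    return (f(x + h) - f(x)) / h

def three_point(f, x, h):
    """Centered three-point formula, error O(h^2)."""
    return (f(x + h) - f(x - h)) / (2 * h)

exact = math.cos(1.0)    # derivative of sin at x = 1
e1 = abs(forward_diff(math.sin, 1.0, 1e-3) - exact)
e2 = abs(three_point(math.sin, 1.0, 1e-3) - exact)
print(e1, e2)            # e1 is about (h/2)|f''|; e2 about (h^2/6)|f'''|
```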
Finite difference (FD) method, part 1 - the linear problem
with Dirichlet BCs:
set-up,
derivation of the FD approximation,
matrix formulation, solvability of the discrete equations,
diagonally dominant matrices,
alternative matrix formulation,
maximum absolute error and root mean square error,
empirical illustration that the error of the method is
O(h^2)
(Sec. 8.1).
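Sec. 8.1 treats a general linear BVP; this Python sketch handles only the special case u''(x) = f(x) on [0,1] with Dirichlet BCs, solving the resulting tridiagonal system with the standard forward sweep and back substitution (the test problem u'' = 2 with u(0) = u(1) = 0 has exact solution x^2 - x, on which the central difference is exact):

```python
def solve_bvp(f, alpha, beta, N):
    """Finite differences for u''(x) = f(x) on [0,1], u(0)=alpha, u(1)=beta.
    Central difference (w[i-1] - 2 w[i] + w[i+1])/h^2 = f(x_i) at the N
    interior nodes; the tridiagonal system is solved directly."""
    h = 1.0 / (N + 1)
    a = [1.0] * N            # subdiagonal (a[0] unused)
    d = [-2.0] * N           # main diagonal
    c = [1.0] * N            # superdiagonal (c[-1] unused)
    rhs = [h * h * f((i + 1) * h) for i in range(N)]
    rhs[0] -= alpha          # move the boundary values to the right-hand side
    rhs[-1] -= beta
    for i in range(1, N):    # forward elimination sweep
        m = a[i] / d[i - 1]
        d[i] -= m * c[i - 1]
        rhs[i] -= m * rhs[i - 1]
    w = [0.0] * N            # back substitution
    w[-1] = rhs[-1] / d[-1]
    for i in range(N - 2, -1, -1):
        w[i] = (rhs[i] - c[i] * w[i + 1]) / d[i]
    return w

# u'' = 2, u(0) = u(1) = 0, exact solution u(x) = x^2 - x
w = solve_bvp(lambda x: 2.0, 0.0, 0.0, 9)
print(w[4], 0.5**2 - 0.5)    # value at the interior node x = 0.5; both ≈ -0.25
```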
FD method, part 2 - the linear problem
with non-Dirichlet BCs:
discussion of possible strategies to deal with
non-Dirichlet BCs, incorporating the Robin BCs
in the linear system for the unknowns
w_0, w_1, ..., w_N
(Sec. 8.2).
-
Lecture 12 (Thu, Nov 11):
The shooting method:
shooting method for a nonlinear BVP with Dirichlet BCs, example;
a method for solving linear BVPs, example
(Sec. 8.4, 8.5).
Partial differential equations - introduction:
heat equation in R^d and in
R^2,
BCs (Dirichlet, Neumann, Robin), initial condition (IC);
steady-state solution (in the case when the BCs and
the coefficient functions in the heat equation are
time-independent),
stationary heat equation in a rectangle;
classification of linear 2nd order PDEs
for functions of two variables (elliptic, hyperbolic, parabolic);
a remark on the arbitrariness in the general solution
of an nth-order PDE for a function of d variables,
example;
examples of PDEs: elliptic (Poisson), parabolic (heat, diffusion),
hyperbolic (wave) equations.
The Poisson equation on a rectangular domain, part 1
- Dirichlet BCs:
computational grid, boundary and interior grid points,
discretizing the PDE, stencil (computational template),
5-point star approximation for the Laplacian;
organizing the equations, lexicographic ordering,
general structure of the coefficient matrix of the linear system
Aw=b, properties of A
(non-singular, symmetric, positive-definite),
corollaries of these properties;
ideas of implementing non-Dirichlet BCs
- fictitious nodes along the walls with non-Dirichlet BCs;
the error of the methods considered so far is O(h^2)
(Sec. 9.1 and a part of Sec. 9.2).
Solving the discrete equations: relaxation methods:
discussion of the advantages of the relaxation methods
(in comparison with the Gaussian elimination and back substitution)
- no need to store the computation matrix, stability
with respect to the numerical error;
discussion of the point relaxation methods (Jacobi, Gauss-Seidel, SOR)
from points of view of memory usage and parallelizability
of the algorithm
(Sec. 9.3).
-
Lecture 13 (Thu, Dec 2):
The heat equation with Dirichlet BCs:
set-up and physical meaning, spatial discretization
(2nd-order central difference approximation for
u_xx),
writing the equations as a system of linear 1st order ODEs;
solving the semidiscrete system by temporal discretization:
FTCS, BTCS, and Crank-Nicolson schemes
(Sec. 10.1).
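A Python sketch of the fully explicit FTCS scheme for u_t = κ u_xx with homogeneous Dirichlet BCs (the initial condition sin(πx), which decays like exp(-π^2 t) for κ = 1, is my own test case; the step sizes are chosen to respect the stability bound κ dt/h^2 ≤ 1/2 discussed in Sec. 10.2):

```python
import math

def ftcs(u0, kappa, h, dt, n_steps):
    """FTCS (forward-time, centered-space) scheme for u_t = kappa * u_xx
    with homogeneous Dirichlet BCs; stable only if kappa*dt/h^2 <= 1/2."""
    lam = kappa * dt / (h * h)
    w = list(u0)
    for _ in range(n_steps):
        w = [0.0] + [w[i] + lam * (w[i-1] - 2*w[i] + w[i+1])
                     for i in range(1, len(w) - 1)] + [0.0]
    return w

N = 20
h = 1.0 / N
dt = 0.4 * h * h          # lam = 0.4, inside the stability region
xs = [i * h for i in range(N + 1)]
w = ftcs([math.sin(math.pi * x) for x in xs], 1.0, h, dt, 250)
t = 250 * dt              # final time 0.25
print(w[N // 2], math.exp(-math.pi**2 * t))   # close, up to discretization error
```

Taking lam > 1/2 instead makes the computed solution blow up, which is a vivid way to see conditional stability.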
Absolute stability:
unconditionally stable and conditionally stable methods,
matrix stability analysis of the FTCS, BTCS, and Crank-Nicolson
schemes, von Neumann stability analysis
(Sec. 10.2).
Other problems related to parabolic PDEs (only an idea):
more general equations: including a source term and a decay term
(example: diffusion of an unstable contaminant) (Sec. 10.3);
non-Dirichlet BCs (Sec. 10.4);
polar coordinates: avoiding the artificial singularity at the origin
(Sec. 10.5);
problems in two spatial dimensions (Sec. 10.6).
Grading:
Your grade will be determined by your performance
on the following coursework:
Coursework    Weight
Homework      35%
Exam 1        20%
Exam 2        20%
Final Exam    25%
Homework: Homework assignments will be set
regularly throughout the semester.
Each homework
will consist of several problems,
of which some pseudo-randomly chosen problems will be graded.
Your lowest homework grade will be dropped.
All hand-in assignments
will carry a specific due date and must be submitted
in class on the due date.
No late homework will be accepted.
Shortly after a homework assignment's due date,
solutions to the problems from that assignment
will be placed on restricted reserve in the Chemistry-Mathematics Library
on the second floor of the Physical Sciences Center.
All homework assignments will be posted on this
page one week before the assignment is due.
Exams:
There will be two in-class midterms and a (comprehensive) in-class final.
Tentative dates for the midterms are
October 7 (Thursday) and November 18 (Thursday).
The final is scheduled for December 15 (Wednesday), 1:30-3:30 p.m.
All tests must be taken at the scheduled times,
except in extraordinary circumstances.
Please do not arrange travel plans that prevent you
from taking any of the exams at the scheduled time.
Attendance:
You are required to attend class on those days when an
examination is being given; attendance during other class periods is also
strongly encouraged.
You are fully responsible for the
material covered in each class, whether or not you attend.
Make-ups for missed exams will be given only if
there is a compelling reason for the absence,
which I know about beforehand
and can document independently of your testimony
(for example, via a note or phone call from a
doctor or a parent).
Technology:
Because this is a mathematics course,
we will emphasize the mathematical underpinnings of numerical
analysis and deliberately de-emphasize acquiring expertise in
any particular computer programming
language or software package. However, we will
frequently engage in computations
to illustrate the mathematical results that we derive.
For all computations required on in-class exams
and for most homework problems,
a calculator will be sufficient.
Even if the complexity of the numerical methods requires the
use of a computer,
the amount of programming you will need to do
will be very small, and previous programming experience is not assumed.
In class I will use MATLAB and Mathematica to illustrate
how some algorithms are implemented.
MATLAB and Mathematica are available on the computers in the
University's computer labs.
Academic calendar for
Fall 2010.
Course schedule for
Fall 2010.
Policy on W/I Grades:
Through October 1 (Friday), you can withdraw
from the course with an automatic "W".
In addition, from October 4 (Monday) to December 10 (Friday),
you may withdraw and receive a "W" or "F"
according to your standing in the class.
Dropping after November 1 (Monday) requires a petition to the Dean.
(Such petitions are not often granted.
Furthermore, even if the petition
is granted, I will give you a grade
of "Withdrawn Failing" if you are
indeed failing at the time of your petition.)
Please check the dates in the Academic Calendar!
The grade of "I" (Incomplete)
is not intended to serve as
a benign substitute for the grade of "F".
I only give the "I" grade
if a student has completed the majority
of the work in the course
(for example everything except the final exam),
the coursework cannot be completed
because of compelling and verifiable problems
beyond the student's control, and the student expresses a clear
intention of making up the missed work as soon as possible.
Academic Misconduct: All cases of suspected academic misconduct will
be referred to the Dean of the College of Arts and Sciences for prosecution
under the University's Academic Misconduct Code. The penalties can be quite
severe. Don't do it!
For details on the University's
policies concerning academic integrity see the
A Student's Guide to Academic Integrity.
For information on your rights to appeal charges
of academic misconduct consult the
Rights and Responsibilities Under the Academic Misconduct Code.
Students are also bound by the provisions of the
OU Student Code.
Students With Disabilities:
The University of Oklahoma is committed to providing reasonable accommodation
for all students with disabilities. Students with disabilities who require
accommodations in this course are requested to speak with the instructor
as early in the semester as possible. Students with disabilities must be
registered with the Office of Disability Services prior to receiving
accommodations in this course. The Office of Disability Services is located
in Goddard Health Center, Suite 166: phone 405-325-3852 or TDD only
405-325-4173.
MATLAB tutorials:
Good to know: