MATH 4073 - Numerical Analysis, Section 001 - Fall 2009
TR 1:30-2:45 p.m., 270 Nielsen Hall
Instructor:
Nikola Petrov, 802 PHSC, (405)325-4316, npetrov AT math.ou.edu.
Office Hours:
Tue 3:00-4:00 p.m., Wed 2:00-3:00 p.m., or by appointment.
Prerequisites:
MATH 3113 (Intro to ODEs) or 3413 (Physical Math I).
Course catalog description:
Solution of linear and nonlinear equations, approximation of functions, numerical integration and differentiation, introduction to analysis of convergence and errors, pitfalls in automatic computation, one-step methods in the solutions of ordinary differential equations. (F)
Course Objectives:
This course is intended to be a mathematical
introduction to the theory and practical use
of certain basic numerical methods
that often arise in applications.
While the emphasis of the course will be placed solidly on applications,
we will discuss some of the mathematical theory behind the methods we study.
Some theoretical understanding is critical to the proper practice
of numerical analysis because no numerical method works 100% of the time.
Thus when a method fails, the theory behind the method can often illuminate
what went wrong and perhaps give insights into alternative approaches
that may work better for the given problem.
Course Content:
-
Review of pertinent concepts from calculus.
-
Computer arithmetic and round-off errors.
-
Algorithms and convergence.
-
Solutions of equations in one variable
(bisection method, Newton's method, error analysis).
-
Interpolation and polynomial approximation
(Lagrange polynomials, cubic splines).
-
Numerical differentiation and integration
(Richardson's extrapolation,
Newton-Cotes formulae for integration,
Romberg integration, Gaussian quadrature).
-
Initial-value problems for ordinary differential equations
(Euler's method, higher-order Taylor methods, Runge-Kutta methods).
Text:
R. L. Burden, J. D. Faires.
Numerical Analysis.
8th ed, Brooks/Cole, 2004, ISBN 0-534-39200-8.
The course will cover major portions of Chapters 1-5.
Homework
(Solutions are posted after the due date in the Chemistry-Mathematics
Library, 207 PHSC)
-
Homework 1, due Thu, Sep 3.
-
Homework 2, due Thu, Sep 10.
-
Homework 3,
due
Tue, Sep 22 (please note the unusual date).
Codes needed for Homework 3 (and codes to help you):
-
Homework 4,
due Thu, Oct 1.
Codes needed for Homework 4:
-
Homework 5,
due Thu, Oct 8.
-
Homework 6,
due Thu, Oct 15.
-
Homework 7,
due Thu, Nov 5.
Codes needed for Homework 7:
-
Homework 8,
due Thu, Nov 12.
-
Homework 9,
due Thu, Nov 19.
-
Homework 10,
due
in class
on Tue, Dec 1.
Codes needed for Homework 10:
-
instructions on how to run the MATLAB codes below;
-
the MATLAB code
euler.m
from the book Numerical Methods Using MATLAB by J.H. Mathews and
K. D. Fink (4th edition, Pearson Prentice Hall, 2004),
and the MATLAB file rhs.m;
-
the MATLAB code
taylor2.m
from the book
Numerical Methods Using MATLAB
by J.H. Mathews and K. D. Fink
(4th edition, Pearson Prentice Hall, 2004),
and the MATLAB file twoders.m.
Content of the lectures:
-
Lecture 1 (Tue, Aug 25):
Introduction.
Review of calculus:
notations, limit and continuity of a function,
limit of a sequence, intermediate value theorem,
derivative of a function,
continuity and differentiability,
Rolle's theorem, mean value theorem,
extreme value theorem, Riemann sums and integral,
fundamental theorem of calculus,
mean value theorem for integrals
(Sec. 1.1).
-
Lecture 2 (Thu, Aug 27):
Review of calculus (cont.):
Taylor's theorem, examples of Taylor series (Sec. 1.1).
Computer arithmetic: binary, decimal and hexadecimal systems,
bits and bytes, the 1985 IEEE 754 standard for a long real number,
working with a finite number of digits - rounding and chopping,
roundoff errors - absolute and relative errors,
examples of roundoff errors - solving a quadratic equation
(Sec. 1.2).
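The roundoff example for the quadratic equation can be reproduced in a few lines. This is an illustrative Python sketch (the posted course codes are in MATLAB and C, and the function names here are hypothetical): for x^2 - 10^8 x + 1 = 0, the textbook formula subtracts two nearly equal numbers when computing the small root, while computing the large-magnitude root first and obtaining the other from the product of roots avoids the cancellation.

```python
import math

def quadratic_roots_naive(a, b, c):
    """Textbook formula; one root subtracts nearly equal numbers."""
    d = math.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

def quadratic_roots_stable(a, b, c):
    """Compute the large-magnitude root first (no cancellation),
    then get the other from the product of the roots, r1*r2 = c/a."""
    d = math.sqrt(b * b - 4 * a * c)
    r1 = (-b - math.copysign(d, b)) / (2 * a)
    r2 = c / (a * r1)
    return r2, r1

# x^2 - 1e8 x + 1 = 0 has roots near 1e8 and 1e-8.
naive_small = quadratic_roots_naive(1.0, -1e8, 1.0)[1]
stable_small = quadratic_roots_stable(1.0, -1e8, 1.0)[0]
```

The naive small root carries a relative error of roughly 25% here, while the stable formula is accurate to machine precision.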
-
Lecture 3 (Tue, Sep 1):
Computer arithmetic (cont.):
computing a sum, nested way of computing polynomials,
floating-point operations,
a C code
test_machine_epsilon.c
and the
output_test_machine_epsilon
from running it
(Sec. 1.2).
Algorithms and convergence:
pseudocode, loops, conditional execution, examples,
stable, unstable and conditionally stable
algorithms; linear and exponential growth of the error,
error in a numerical solution of a recurrence relation
(Sec. 1.3).
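The posted C code determines the machine epsilon by repeated halving; the same experiment can be sketched in a few lines of Python (an illustrative translation of the idea, not the posted code):

```python
# Machine epsilon: the gap between 1.0 and the next representable
# double.  Halve a trial value until adding half of it to 1.0
# no longer changes 1.0.
eps = 1.0
while 1.0 + eps / 2 > 1.0:
    eps /= 2
```

For IEEE 754 double precision this loop stops with eps = 2^-52 ≈ 2.22e-16.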
-
Lecture 4 (Thu, Sep 3):
Algorithms and convergence (cont.):
rate of convergence, "big O" notation, examples
(Sec. 1.3).
The bisection method:
general idea, algorithm, stopping criteria,
MATLAB codes
bisection.m and myfunction.m,
and the output from running them
(Sec. 2.1).
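The posted bisection.m implements the algorithm in MATLAB; the same idea can be sketched in Python (illustrative only, not the posted code), here applied to f(x) = x^2 - 2 on [1, 2]:

```python
def bisection(f, a, b, tol=1e-10, maxit=100):
    """Bisection: halve [a, b] while keeping a sign change,
    until the half-bracket is shorter than tol."""
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(maxit):
        p = a + (b - a) / 2      # midpoint, written to limit roundoff
        fp = f(p)
        if fp == 0 or (b - a) / 2 < tol:
            return p
        if fa * fp > 0:
            a, fa = p, fp        # root is in the right half
        else:
            b = p                # root is in the left half
    return p

root = bisection(lambda x: x * x - 2.0, 1.0, 2.0)
```

Since the bracket halves every step, about 33 iterations reach the 1e-10 tolerance, matching the linear convergence rate discussed in lecture.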
-
Lecture 5 (Tue, Sep 8):
The bisection method:
rate of convergence
(Sec. 2.1).
Fixed-point iteration:
fixed points,
relation between finding a zero and finding a fixed point,
existence and uniqueness of fixed points -
statement, graphical interpretation, and proof of Theorem 2.2
on the existence and uniqueness of fixed points,
algorithm for fixed point iteration (FPI)
(Sec. 2.2).
A transparency
showing some preliminary work on finding an interval
[a,b] such that
g([a,b]) is a subset of
[a,b]
for finding a root of the equation
x^2 - 2 = 0 by treating it
as a FP problem.
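A fixed-point iteration for this problem can be sketched in Python (illustrative, not a posted code). The rewriting g(x) = 1 + 1/(1+x), whose fixed point satisfies x^2 = 2, is one of several possible choices (not necessarily the one on the transparency); it has |g'(x)| = 1/(1+x)^2 ≈ 0.17 < 1 near sqrt(2), so FPI converges:

```python
def fixed_point(g, p0, tol=1e-12, maxit=200):
    """Iterate p_{n+1} = g(p_n) until successive iterates agree to tol."""
    p = p0
    for _ in range(maxit):
        p_new = g(p)
        if abs(p_new - p) < tol:
            return p_new
        p = p_new
    raise RuntimeError("fixed-point iteration did not converge")

# x^2 = 2 rewritten as the fixed-point problem x = 1 + 1/(1+x).
p = fixed_point(lambda x: 1 + 1 / (1 + x), 1.5)
```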
-
Lecture 6 (Thu, Sep 10):
Fixed-point iteration (cont.):
Example 3 (pp. 57-58) of constructing
a FPI procedure for solving a nonlinear equation;
Fixed Point Theorem (Theorem 2.3), proof,
Corollary 2.4 (Math majors, please read the proof)
(Sec. 2.2).
Newton's method:
geometric representation,
obtaining it from Taylor expansion
of the function
(Sec. 2.3).
-
Lecture 7 (Tue, Sep 15):
Newton's method (cont.):
examples of failure of Newton's method,
problems with Newton's method
(one needs to compute derivatives,
one needs a good choice of initial point p_0,
the root is not necessarily bracketed);
secant method:
does not involve computing derivatives,
geometric representation,
problems with the secant method
(one needs a good choice of initial points p_0 and p_1,
the root is not necessarily bracketed);
method of false position (Regula Falsi):
in this method the root is bracketed between
p_n and p_{n-1} at each step,
please read details from the book
(Sec. 2.3).
Error analysis of iterative methods:
order of convergence α
and asymptotic error constant λ
of a convergent sequence (and of an iterative method),
linear and quadratic convergence,
illustration of the speed of convergence
in the linearly and in the quadratically converging cases,
theorem about linear convergence
of an iterative method
p_n = g(p_{n-1})
with g'(p) ≠ 0
(Theorem 2.7, with proof),
theorem about quadratic convergence
of an iterative method
p_n = g(p_{n-1})
with g'(p) = 0
(Theorem 2.8, with sketch of proof);
constructing a quadratically convergent
iterative procedure from a zero-finding problem
(and obtaining Newton's method as a result!)
(Sec. 2.4).
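Newton's method and the secant method from this lecture can both be sketched in Python (illustrative translations; the course codes are in MATLAB), here applied to f(x) = x^3 - 2:

```python
def newton(f, fprime, p0, tol=1e-12, maxit=50):
    """Newton's method: p_{n+1} = p_n - f(p_n)/f'(p_n)."""
    p = p0
    for _ in range(maxit):
        p_new = p - f(p) / fprime(p)
        if abs(p_new - p) < tol:
            return p_new
        p = p_new
    raise RuntimeError("Newton's method did not converge")

def secant(f, p0, p1, tol=1e-12, maxit=50):
    """Secant method: Newton with f' replaced by a difference quotient."""
    f0, f1 = f(p0), f(p1)
    for _ in range(maxit):
        p2 = p1 - f1 * (p1 - p0) / (f1 - f0)
        if abs(p2 - p1) < tol:
            return p2
        p0, f0, p1, f1 = p1, f1, p2, f(p2)
    raise RuntimeError("secant method did not converge")

f = lambda x: x ** 3 - 2.0               # root: 2^(1/3)
r_newton = newton(f, lambda x: 3 * x ** 2, 1.0)
r_secant = secant(f, 1.0, 2.0)
```

Note that neither method brackets the root, and both need reasonable starting points, exactly the practical caveats listed above.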
-
Lecture 8 (Thu, Sep 17):
Error analysis of iterative methods (cont.):
multiplicity of a zero of a function,
simple zeros,
relation between the multiplicity of a zero
and the number of vanishing derivatives
(Theorems 2.10, 2.11),
examples;
handling multiple zeros
- finding a simple zero of
μ(x) = f(x)/f'(x)
is equivalent to finding a non-simple zero
of f(x);
remarks about rate of convergence:
bisection (linearly convergent, λ=1/2),
Newton for a simple zero (quadratically convergent),
secant (convergent of order α = (1+√5)/2 ≈ 1.62),
Regula Falsi (linearly convergent)
(Sec. 2.4).
Remark:
Representing iterations graphically,
attracting and repelling fixed points.
Accelerating convergence:
derivation of the Aitken Δ² method
(Sec. 2.5).
-
Lecture 9 (Tue, Sep 22):
Accelerating convergence (cont.):
Aitken's method for accelerating convergence,
forward differences,
Steffensen's method for accelerating convergence,
order of convergence of Steffensen's modification
of a fixed-point iteration
(Sec. 2.5).
Zeros of polynomials and Muller's method:
fundamental theorem of algebra, corollaries,
Horner's method (synthetic division),
computing P(p_{n-1})
and P'(p_{n-1})
in Newton's method for polynomials
by using synthetic division,
method of deflation for finding zeros of polynomials
(Sec. 2.6).
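Horner's method evaluates both P(x_0) and P'(x_0) in a single pass of synthetic division, which is exactly what Newton's method for polynomials needs. A Python sketch (illustrative; the coefficients below belong to P(x) = 2x^4 - 3x^2 + 3x - 4, evaluated at x_0 = -2):

```python
def horner(coeffs, x0):
    """Synthetic division: return (P(x0), P'(x0)).
    coeffs = [a_n, ..., a_1, a_0], highest degree first."""
    b = coeffs[0]          # accumulates P(x0)
    c = coeffs[0]          # accumulates P'(x0) via a second division
    for a in coeffs[1:-1]:
        b = b * x0 + a
        c = c * x0 + b
    b = b * x0 + coeffs[-1]
    return b, c

# P(x) = 2x^4 - 3x^2 + 3x - 4 at x0 = -2: P(-2) = 10, P'(-2) = -49.
val, der = horner([2, 0, -3, 3, -4], -2)
```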
-
Lecture 10 (Thu, Sep 24):
Zeros of polynomials and Muller's method (cont.):
Muller's method for finding zeros of polynomials.
Sensitivity of zeros of polynomials - Wilkinson's example
(Sec. 2.6).
Introduction to interpolation and approximation:
need of approximating data,
different approaches - interpolation and approximation.
Interpolation and Lagrange polynomial:
idea, construction of the polynomials
L_{n,k}(x),
explicit expression of Lagrange interpolating polynomial
(Sec. 3.1).
-
Lecture 11 (Tue, Sep 29):
Interpolation and Lagrange polynomial (cont.):
error bound in Lagrange interpolation (Theorem 3.3),
a sketch of an example of a local linear interpolation
- piecewise-linear interpolating polynomial
for the function exp(x) on the interval [0,1]
assuming that the values of the function are given
on an equidistant set of points
0 = x_0 < x_1 < x_2 < ... < x_n = 1
(Example 2 on p.109)
(Sec. 3.1).
Divided differences:
idea, notations for Newton's divided differences,
interpolating polynomial in Newton's divided differences form,
computing the divided differences manually (in a table),
efficiency of Newton's divided differences algorithm
(Sec. 3.2).
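The divided-difference table and the evaluation of the Newton-form interpolant can be sketched in Python (illustrative, not a posted code); the data below come from f(x) = x^2, which a quadratic interpolant reproduces exactly:

```python
def divided_differences(xs, ys):
    """Return the leading divided differences F[0,0], F[1,1], ..., F[n,n],
    i.e. the coefficients of the Newton forward-form interpolant."""
    n = len(xs)
    coef = list(ys)
    for j in range(1, n):
        # Work from the bottom up so each entry is overwritten in place.
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(xs, coef, x):
    """Evaluate the Newton-form polynomial by nested multiplication."""
    p = coef[-1]
    for k in range(len(coef) - 2, -1, -1):
        p = p * (x - xs[k]) + coef[k]
    return p

xs = [1.0, 2.0, 4.0]
ys = [x * x for x in xs]                 # data from f(x) = x^2
coef = divided_differences(xs, ys)
```

Adding a new node only appends one coefficient, which is the efficiency advantage of the Newton form mentioned above.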
-
Lecture 12 (Thu, Oct 1):
Divided differences (cont.):
forward and backward differences,
"s choose k" notation,
forward-differences and backward-differences ways
of writing an interpolating polynomial
(Sec. 3.2).
Cubic spline interpolation:
general idea, cubic spline interpolant S(x)
and its restrictions S_j(x)
to [x_j, x_{j+1}],
compatibility conditions on the functions S_j(x)
at the internal node points
x_{j+1} (j = 0, 1, ..., n-2),
physical meaning and mathematical formulation
of the boundary conditions
(Sec. 3.4).
-
Lecture 13 (Tue, Oct 6):
Cubic spline interpolation (cont.):
reformulating the conditions on S_j(x)
in terms of the coefficients a_j, b_j, c_j, d_j;
deriving a linear system for b_j and c_j
from the conditions on S_j(x) at the internal points;
deriving a linear system of (n+1) equations
for the (n+1) unknowns c_0, c_1, ..., c_n
from the conditions on S_j(x) at the internal points
and the free (natural) boundary conditions;
the linear system for c_0, c_1, ..., c_n
can be written as a matrix equation A⋅c = v,
where A is an (n+1)×(n+1) tridiagonal matrix,
and the number of operations
to solve the system is of order n
(while for a general (n+1)×(n+1)
matrix it is of order n^3)
(Sec. 3.4).
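The natural-spline construction from this lecture can be sketched in Python (an illustrative sketch following the Sec. 3.4 setup; variable names mirror the book's a_j, b_j, c_j, d_j). The tridiagonal system for the c_j is solved in O(n) operations by forward elimination and back substitution:

```python
def natural_cubic_spline(xs, ys):
    """Coefficients (a_j, b_j, c_j, d_j) of the natural cubic spline,
    with S_j(x) = a_j + b_j(x-x_j) + c_j(x-x_j)^2 + d_j(x-x_j)^3."""
    n = len(xs) - 1
    h = [xs[i + 1] - xs[i] for i in range(n)]
    a = list(ys)
    # Right-hand side of the tridiagonal system A*c = v
    # (natural boundary conditions give c_0 = c_n = 0).
    alpha = [0.0] * (n + 1)
    for i in range(1, n):
        alpha[i] = (3 * (a[i + 1] - a[i]) / h[i]
                    - 3 * (a[i] - a[i - 1]) / h[i - 1])
    # O(n) forward elimination on the tridiagonal matrix.
    l = [1.0] * (n + 1)
    mu = [0.0] * (n + 1)
    z = [0.0] * (n + 1)
    for i in range(1, n):
        l[i] = 2 * (xs[i + 1] - xs[i - 1]) - h[i - 1] * mu[i - 1]
        mu[i] = h[i] / l[i]
        z[i] = (alpha[i] - h[i - 1] * z[i - 1]) / l[i]
    # Back substitution, then recover b_j and d_j from the c_j.
    c = [0.0] * (n + 1)
    b = [0.0] * n
    d = [0.0] * n
    for j in range(n - 1, -1, -1):
        c[j] = z[j] - mu[j] * c[j + 1]
        b[j] = (a[j + 1] - a[j]) / h[j] - h[j] * (c[j + 1] + 2 * c[j]) / 3
        d[j] = (c[j + 1] - c[j]) / (3 * h[j])
    return a[:n], b, c[:n], d

# Data from f(x) = x^3 on [0, 3]; a natural spline does not reproduce
# x^3 exactly, but it interpolates the data at every node.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [x ** 3 for x in xs]
a, b, c, d = natural_cubic_spline(xs, ys)
```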
-
Lecture 14 (Thu, Oct 8):
Cubic spline interpolation (cont.):
theoretical error bounds: error = O(h^4) (Theorem 3.13),
remarks about the practical importance
of cubic splines
(Sec. 3.4).
Numerical differentiation:
basic idea - using Lagrange interpolating polynomials;
forward- and backward-difference formulas (error = O(h)),
(n+1)-point formula,
three-point formulas (error = O(h^2)),
formula for the second derivative (error = O(h^2))
(Sec. 4.1).
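The difference formulas and their error orders are easy to see numerically. An illustrative Python sketch, differentiating sin at x = 1 with h = 10^-3 (exact derivative cos(1)):

```python
import math

f = math.sin
x, h = 1.0, 1e-3
exact = math.cos(1.0)

# Forward difference: error O(h).
forward = (f(x + h) - f(x)) / h
# Central (three-point) formula: error O(h^2).
central = (f(x + h) - f(x - h)) / (2 * h)
# Second-derivative formula: error O(h^2); here f''(x) = -sin(x).
second = (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2
```

With this h the forward difference is accurate to about 4 digits and the central difference to about 7, consistent with the O(h) versus O(h^2) error terms.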
-
Lecture 15 (Tue, Oct 13):
Richardson extrapolation:
idea and derivation,
example of applying Richardson extrapolation
to the computation of an integral
by left Riemann sums
(Sec. 4.2).
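Richardson extrapolation can be illustrated with the central-difference formula (a Python sketch; the lecture's example used left Riemann sums, but the mechanism is identical): since N_1(h) = f'(x) + c_2 h^2 + c_4 h^4 + ..., the combination N_2 = N_1(h/2) + (N_1(h/2) - N_1(h))/3 cancels the h^2 term and is O(h^4) accurate:

```python
import math

def central(f, x, h):
    """O(h^2) central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

f, x, h = math.sin, 1.0, 0.1
n1h = central(f, x, h)
n1h2 = central(f, x, h / 2)
# One step of Richardson extrapolation: kill the O(h^2) error term.
n2 = n1h2 + (n1h2 - n1h) / 3
exact = math.cos(1.0)
```

Even with the coarse step h = 0.1, the extrapolated value gains several digits over either central difference alone.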
-
Lecture 16 (Thu, Oct 15):
Numerical integration:
goal, examples, ideas,
trapezoidal formula and Simpson's formula
for numerical quadrature
(Sec. 4.3).
-
Lecture 17 (Tue, Oct 20):
Hour Exam 1
(on everything before - and not including - spline interpolation).
-
Lecture 18 (Tue, Oct 27):
Numerical integration (cont.):
open and closed Newton-Cotes (N-C) formulas,
degree of accuracy of a quadrature formula,
examples of closed N-C formulas
(trapezoidal, Simpson's, Simpson's 3/8),
theorem on errors in closed N-C formulas;
examples of open N-C formulas (midpoint rule),
theorem on errors in open N-C formulas
(Sec. 4.3).
-
Lecture 19 (Thu, Oct 29):
Composite numerical integration:
derivation of the formulas and the error terms
of the composite trapezoidal and Simpson's rules
(Sec. 4.4).
Practical issues in numerical integration:
infinite limit(s) of integration
- choosing a large enough interval
[a,b] such that the "tails"
have small enough values,
changing variables so that the limits of integration become finite.
-
Lecture 20 (Tue, Nov 3):
Romberg integration:
idea - composite trapezoidal rule
for different stepsizes followed by
Richardson's extrapolation, an example
(Sec. 4.5).
Gaussian quadrature:
idea and a simple example;
linear space (vector space) - definition
(Sec. 4.7).
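Romberg integration combines the two previous ideas: composite trapezoid values for stepsizes h, h/2, h/4, ... are refined by Richardson extrapolation, R[k][j] = R[k][j-1] + (R[k][j-1] - R[k-1][j-1])/(4^j - 1). An illustrative Python sketch:

```python
import math

def romberg(f, a, b, levels):
    """Romberg table; returns the most-extrapolated entry."""
    R = [[0.0] * levels for _ in range(levels)]
    h = b - a
    R[0][0] = h * (f(a) + f(b)) / 2
    for k in range(1, levels):
        h /= 2
        # Trapezoid with halved step reuses the previous value and
        # only evaluates f at the new (odd-indexed) points.
        R[k][0] = R[k - 1][0] / 2 + h * sum(
            f(a + (2 * i - 1) * h) for i in range(1, 2 ** (k - 1) + 1))
        for j in range(1, k + 1):
            R[k][j] = R[k][j - 1] + (R[k][j - 1] - R[k - 1][j - 1]) / (4 ** j - 1)
    return R[levels - 1][levels - 1]

# Integral of sin over [0, pi] is exactly 2.
approx = romberg(math.sin, 0.0, math.pi, 6)
```

Six levels use only 33 function evaluations yet deliver far more accuracy than the underlying 32-interval trapezoid value.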
-
Lecture 21 (Thu, Nov 5):
Gaussian quadrature
- theoretical foundations:
linear space (vector space),
linear space of polynomials
of degree not exceeding n,
inner product linear space,
inner product in a space of polynomials,
weight function w,
examples of systems of orthogonal polynomials,
using orthogonal polynomials for numerical integration
- Theorem 1
in the
handout on Gaussian quadrature
(Sec. 4.7).
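The simplest instance of Gaussian quadrature is the two-point Gauss-Legendre rule: nodes ±1/√3 and weights 1 on [-1, 1], exact for all polynomials of degree ≤ 3 (that is, 2n-1 with n = 2 nodes). An illustrative Python sketch, including the linear change of variables to a general interval [a, b]:

```python
import math

def gauss2(f, a, b):
    """Two-point Gauss-Legendre quadrature, mapped from [-1, 1] to [a, b]."""
    mid, half = (a + b) / 2, (b - a) / 2
    t = 1 / math.sqrt(3)          # Gauss-Legendre nodes are at +-1/sqrt(3)
    return half * (f(mid - half * t) + f(mid + half * t))

# Integral of x^3 + x over [0, 2] is 4 + 2 = 6; with only two
# function evaluations the rule reproduces it exactly (degree <= 3).
exact_cubic = gauss2(lambda x: x ** 3 + x, 0.0, 2.0)
```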
-
Lecture 22 (Tue, Nov 10):
Practical issues in numerical integration
(cont. from Lecture 19):
dealing with singularities in the integrand
(Sec. 4.9).
Elementary theory of initial-value problems:
initial-value problems (IVPs) for ordinary
differential equations (ODEs)
- definition, examples;
Lipschitz condition and Lipschitz constant,
examples of Lipschitz and non-Lipschitz functions,
bounded derivative implies Lipschitz,
existence and uniqueness of solutions
of an IVP for a first-order ODE,
well-posedness of an IVP for a first-order ODE,
an example of an IVP without a solution
(Sec. 5.1).
-
Lecture 23 (Thu, Nov 12):
Elementary theory of initial-value problems (cont.):
an example of an IVP with infinitely many solutions -
equation of formation of a droplet of a liquid in saturated vapor
(Sec. 5.1).
Euler's method:
idea, derivation, error analysis
(Sec. 5.2).
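Euler's method itself fits in a few lines. This is an illustrative Python sketch paralleling the posted euler.m (it is not that code), applied to the test problem y' = y, y(0) = 1, whose exact solution is y(t) = e^t:

```python
import math

def euler(f, a, b, y0, n):
    """Euler's method for y' = f(t, y), y(a) = y0, with n steps of size h."""
    h = (b - a) / n
    t, w = a, y0
    for _ in range(n):
        w = w + h * f(t, w)      # follow the tangent line over one step
        t = t + h
    return w

approx = euler(lambda t, y: y, 0.0, 1.0, 1.0, 1000)
```

Even with 1000 steps the error at t = 1 is about 1.4e-3, consistent with the O(h) global error derived in lecture.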
-
Lecture 24 (Tue, Nov 17):
Euler method (cont.):
remarks about the practical applications of the
error bound, example
(Sec. 5.2).
Higher-order Taylor methods:
local truncation error,
proof that the local truncation error
of Euler's method is O(h),
derivation of higher-order Taylor methods
(Sec. 5.3).
-
Lecture 25 (Thu, Nov 19):
Higher-order Taylor methods (cont.):
an example,
the error of the Taylor method of order n
is O(h^n) (Theorem 5.12, without proof)
(Sec. 5.3).
Runge-Kutta methods:
Taylor Theorem in 2 variables (Theorem 5.13, without proof),
practical difficulties of applying higher-order Taylor methods,
goal and idea of Runge-Kutta methods
(Sec. 5.4).
-
Lecture 26 (Tue, Nov 24):
Runge-Kutta methods (cont.):
derivation of the midpoint method - a simple RK method
that has the same local truncation error
as the Taylor method of order 2
(pages 275-276 of Sec. 5.4).
-
Lecture 27 (Tue, Dec 1):
Runge-Kutta methods (cont.):
derivation of the modified Euler method
and Heun's method;
Runge-Kutta method of order four
(without derivation),
error control and idea of
Runge-Kutta-Fehlberg method
(Secs. 5.4-5.5).
Classification of the methods for numerical
treatment of IVPs for ODEs:
one step methods - explicit and implicit,
examples (Euler - explicit, midpoint - implicit);
m-step methods.
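The classical fourth-order Runge-Kutta method discussed above can be sketched in Python (illustrative, not a posted code), again on the test problem y' = y, y(0) = 1:

```python
import math

def rk4(f, a, b, y0, n):
    """Classical fourth-order Runge-Kutta method for y' = f(t, y)."""
    h = (b - a) / n
    t, w = a, y0
    for _ in range(n):
        k1 = h * f(t, w)
        k2 = h * f(t + h / 2, w + k1 / 2)
        k3 = h * f(t + h / 2, w + k2 / 2)
        k4 = h * f(t + h, w + k3)
        w = w + (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t = t + h
    return w

# y' = y, y(0) = 1 on [0, 1]: RK4 with only 10 steps is already far
# more accurate than Euler's method with 1000 steps.
approx = rk4(lambda t, y: y, 0.0, 1.0, 1.0, 10)
```

Four function evaluations per step buy an O(h^4) global error, which is the trade-off motivating Runge-Kutta methods over higher-order Taylor methods.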
-
Lecture 28 (Thu, Dec 3):
Hour Exam 2
(on spline interpolation and everything covered after that).
Grading:
Your grade will be determined by your performance
on the following coursework:
Coursework | Weight
Homework | 20%
Exam 1 | 25%
Exam 2 | 25%
Final Exam | 30%
Homework: Homework assignments will be set
regularly throughout the semester.
Each homework
will consist of several problems,
of which some pseudo-randomly chosen problems will be graded.
Your two lowest homework grades will be dropped.
All hand-in assignments
will carry a specific due date and must be submitted
in class on the due date.
If you cannot come to class, you can
turn in your homework in my office no later than 5 p.m.
on the same day (if I am not in my office,
you can slip it in under the door).
No late homework will be accepted.
Shortly after a homework assignment's due date,
solutions to the problems from that assignment
will be placed on restricted reserve in the Chemistry-Mathematics Library
on the second floor of the Physical Sciences Center.
All homework assignments will be posted on this
page one week before the assignment is due.
Exams:
There will be two in-class midterms and a (comprehensive) in-class final.
Tentative dates for the midterms are
October 13 (Tuesday) and November 17 (Tuesday).
The final is scheduled for December 18 (Friday), 1:30-3:30 p.m.
All tests must be taken at the scheduled times,
except in extraordinary circumstances.
Please do not arrange travel plans that prevent you
from taking any of the exams at the scheduled time.
Attendance:
You are required to attend class on those days when an
examination is being given; attendance during other class periods is also
strongly encouraged.
You are fully responsible for the
material covered in each class, whether or not you attend.
Make-ups for missed exams will be given only if
there is a compelling reason for the absence,
which I know about beforehand
and can document independently of your testimony
(for example, via a note or phone call from a
doctor or a parent).
Technology:
Because this is a mathematics course,
we will emphasize the mathematical underpinnings of numerical
analysis and deliberately de-emphasize acquiring expertise in
any particular computer programming
language or software package. However, we will
frequently engage in computations
to illustrate the mathematical results that we derive.
For all computations required on in-class exams
and for most homework problems,
a calculator will be sufficient.
Even if the complexity of the numerical methods requires the
use of a computer,
the amount of programming you will need to do
will be very small, and previous programming experience is not assumed.
Moreover, the textbook comes with a CD-ROM
which contains programs for all of the numerical algorithms
developed in the textbook
(written in Fortran, C, Maple, Mathematica, Pascal, and MATLAB);
these programs are also available
on the web at
http://www.as.ysu.edu/~faires/Numerical-Analysis/DiskMaterial/index.html.
In class I will use MATLAB and Mathematica to illustrate
how some algorithms are implemented.
MATLAB and Mathematica are available on the computers in the University's computer labs,
in particular, in the College of Arts and Sciences computer labs
in the Physical Sciences Center (room 232) and in Dale Hall Tower;
you can also purchase a student version of MATLAB to load on your own PC
at the University Bookstore (the price is about $100),
but please note that you are not required to buy it.
Academic calendar for
Fall 2009.
Course schedule for
Fall 2009.
Policy on W/I Grades :
Through October 2 (Friday), you can withdraw
from the course with an automatic "W".
In addition, from October 5 (Monday) to December 11 (Friday),
you may withdraw and receive a "W" or "F"
according to your standing in the class.
Dropping after November 30 (Monday) requires a petition to the Dean.
(Such petitions are not often granted.
Furthermore, even if the petition
is granted, I will give you a grade
of "Withdrawn Failing" if you are
indeed failing at the time of your petition.)
Please check the dates!
The grade of "I" (Incomplete)
is not intended to serve as
a benign substitute for the grade of "F".
I only give the "I" grade
if a student has completed the majority
of the work in the course
(for example everything except the final exam),
the coursework cannot be completed
because of compelling and verifiable problems
beyond the student's control, and the student expresses a clear
intention of making up the missed work as soon as possible.
Academic Misconduct: All cases of suspected academic misconduct will
be referred to the Dean of the College of Arts and Sciences for prosecution
under the University's Academic Misconduct Code. The penalties can be quite
severe. Don't do it!
For details on the University's
policies concerning academic integrity see the
A Student's Guide to Academic Integrity.
For information on your rights to appeal charges
of academic misconduct consult the
Rights and Responsibilities Under the Academic Misconduct Code.
Students are also bound by the provisions of the
OU Student Code.
Students With Disabilities:
The University of Oklahoma is committed to providing reasonable accommodation
for all students with disabilities. Students with disabilities who require
accommodations in this course are requested to speak with the instructor
as early in the semester as possible. Students with disabilities must be
registered with the Office of Disability Services prior to receiving
accommodations in this course. The Office of Disability Services is located
in Goddard Health Center, Suite 166: phone 405-325-3852 or TDD only
405-325-4173.
MATLAB tutorials:
Good to know: