MATH 4073 - Numerical Analysis, Section 001 - Fall 2010
MWF 1:30-2:20 p.m., 321 PHSC
Instructor:
Nikola Petrov, 802 PHSC, (405)325-4316, npetrov AT math.ou.edu.
Office Hours:
Wed 3:30-4:30 p.m., Fri 10:00-11:00 a.m., or by appointment.
Prerequisites:
MATH 3113 (Intro to ODEs) or 3413 (Physical Math I).
Course catalog description:
Solution of linear and nonlinear equations, approximation of functions, numerical integration and differentiation, introduction to analysis of convergence and errors, pitfalls in automatic computation, one-step methods in the solutions of ordinary differential equations. (F)
Course Objectives:
This course is intended to be a mathematical
introduction to the theory and practical use
of certain basic numerical methods
that often arise in applications.
While the emphasis of the course will be placed solidly on applications,
we will discuss some of the mathematical theory behind the methods we study.
Some theoretical understanding is critical to the proper practice
of numerical analysis because no numerical method works 100% of the time.
Thus when a method fails, the theory behind the method can often illuminate
what went wrong and perhaps give insights into alternative approaches
that may work better for the given problem.
Course Content:
- Review of pertinent concepts from calculus.
- Computer arithmetic and round-off errors.
- Algorithms and convergence.
- Solutions of equations in one variable
(bisection method, Newton's method, error analysis).
- Interpolation and polynomial approximation
(Lagrange polynomials, cubic splines).
- Numerical differentiation and integration
(Richardson's extrapolation,
Newton-Cotes formulae for integration,
Romberg integration, Gaussian quadrature).
- Initial-value problems for ordinary differential equations
(Euler's method, higher-order Taylor methods, Runge-Kutta methods).
Text:
R. L. Burden, J. D. Faires.
Numerical Analysis.
8th ed, Brooks/Cole, 2004, ISBN 0-534-39200-8.
The course will cover major portions of Chapters 1-5.
Homework
(Solutions are posted after the due date in Bizzell Library)
- Homework 1, due Friday, Sep 3.
- Homework 2, due Friday, Sep 10.
- Homework 3, due Fri, Sep 17.
Codes needed for Homework 3:
- Homework 4, due Fri, Sep 24.
The MATLAB code newton.m is needed for Homework 4.
To use the code to find a root of the equation
f(x) = x^3 - 12 = 0,
you can type
newton(inline('x^3-12'), inline('3*x^2'), 3.0, 1e-14, 100, 1)
Here the first and the second arguments
are the function f and its derivative;
the meaning of the other arguments is clear from the code newton.m.
Don't forget to type format long
in order to see more digits of the result.
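If you want to experiment outside MATLAB, the Newton iteration can be sketched in Python as follows; this is a minimal version of my own with an analogous argument list (f, f', initial point, tolerance, maximum number of iterations), not a translation of the course's newton.m:

```python
def newton(f, fprime, p0, tol=1e-14, maxit=100):
    """Newton's method: p_n = p_{n-1} - f(p_{n-1}) / f'(p_{n-1})."""
    p = p0
    for _ in range(maxit):
        p_new = p - f(p) / fprime(p)
        if abs(p_new - p) < tol:
            return p_new
        p = p_new
    return p

# Same example as above: f(x) = x^3 - 12, starting at p0 = 3
root = newton(lambda x: x**3 - 12, lambda x: 3*x**2, 3.0)
```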
- Homework 5, due Mon, Oct 4.
A code needed for Homework 5:
- Homework 6, due Wed, Oct 13.
A code needed for Homework 6:
- Homework 7, due Fri, Oct 29.
The due date is changed to Mon, Nov 1!
- Homework 8, due Mon, Nov 8.
- Homework 9, due Fri, Nov 19.
- Homework 10, due Fri, Dec 10.
Codes needed for Homework 10:
- instructions on how to run the MATLAB codes below;
- the MATLAB code euler.m
from the book Numerical Methods Using MATLAB by J. H. Mathews and
K. D. Fink (4th edition, Pearson Prentice Hall, 2004),
and the MATLAB file rhs.m
for the example in the instructions above
(you have to write your own code rhs.m
for the problem in your homework);
- the MATLAB code taylor2.m
from the same book,
and the MATLAB file twoders.m
for the example in the instructions above.
Content of the lectures:
-
Lecture 1 (Mon, Aug 23):
Introduction.
Review of calculus:
notations, limit and continuity of a function,
limit of a sequence, intermediate value theorem (IVT)
(Sec. 1.1).
-
Lecture 2 (Wed, Aug 25):
Review of calculus (cont.):
using the IVT to prove the existence of a solution
of an equation, using derivatives to prove the uniqueness
of the solution of an equation,
definition of a derivative of a function,
relationship between continuity and differentiability
of a function, Rolle's theorem, mean value theorem (MVT),
extreme value theorem,
Riemann sums and integral,
Taylor's theorem
(Sec. 1.1).
-
Lecture 3 (Fri, Aug 27):
Review of calculus (cont.):
fundamental theorem of calculus,
examples of Taylor series (Sec. 1.1).
Computer arithmetic:
binary, decimal and hexadecimal systems,
bits and bytes, the 1985 IEEE 754 standard for a long real (double-precision) number,
working with a finite number of digits - rounding and chopping,
roundoff errors - absolute and relative errors.
(Sec. 1.2).
-
Lecture 4 (Mon, Aug 30):
Computer arithmetic (cont.):
examples of roundoff errors - solving a quadratic equation,
a C code
test_machine_epsilon.c
and the
output_test_machine_epsilon
from running it,
computing a sum,
nested way of computing polynomials.
(Sec. 1.2)
-
Lecture 5 (Wed, Sep 1):
Algorithms and convergence:
pseudocode, loops, conditional execution, examples;
stable, unstable and conditionally stable algorithms;
truncation error of an alternating series
whose terms decrease in absolute value;
linear and exponential growth of the error;
error in a numerical solution of a recurrence relation;
rate of convergence of a sequence, "big O" notation, examples
(Sec. 1.3).
-
Lecture 6 (Fri, Sep 3):
The bisection method:
general idea, algorithm, stopping criteria,
MATLAB codes bisection.m and myfunction.m,
instructions on how to run them,
and the output from running them; rate of convergence
(Sec. 2.1).
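As an illustration only (a sketch of my own, not the course's bisection.m), the bisection algorithm from this lecture might look as follows in Python:

```python
def bisection(f, a, b, tol=1e-10, maxit=100):
    """Bisection: halve [a, b], keeping the half with a sign change,
    until the half-width of the interval is below tol."""
    fa = f(a)
    for _ in range(maxit):
        p = (a + b) / 2
        fp = f(p)
        if fp == 0 or (b - a) / 2 < tol:
            return p
        if fa * fp < 0:
            b = p            # root lies in [a, p]
        else:
            a, fa = p, fp    # root lies in [p, b]
    return (a + b) / 2

# Root of x^3 - 12 = 0 on [2, 3]
p = bisection(lambda x: x**3 - 12, 2, 3)
```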
-
Lecture 7 (Wed, Sep 8):
Fixed-point iteration:
fixed points,
relation between finding a zero and finding a fixed point,
existence and uniqueness of fixed points -
statement, graphical interpretation, and proof of Theorem 2.2
on the existence and uniqueness of fixed points
(Sec. 2.2).
-
Lecture 8 (Fri, Sep 10):
Fixed-point iteration (cont.):
an example of using Theorem 2.2,
Fixed Point Iteration (FPI) algorithm,
examples of reformulating an equation f(x)=0
as a fixed-point problem g(p)=p
in different ways,
the MATLAB code
fixedpoint.m
(Sec. 2.2).
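A minimal Python sketch of the FPI algorithm (an illustration of my own; the course code is the MATLAB file fixedpoint.m):

```python
import math

def fixed_point(g, p0, tol=1e-10, maxit=500):
    """Fixed-point iteration p_n = g(p_{n-1});
    stops when successive iterates agree to within tol."""
    p = p0
    for _ in range(maxit):
        p_new = g(p)
        if abs(p_new - p) < tol:
            return p_new
        p = p_new
    return p

# Classic example: x = cos(x) has a unique fixed point near 0.739
p = fixed_point(math.cos, 1.0)
```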
-
Lecture 9 (Mon, Sep 13):
Fixed-point iteration (cont.):
a theorem on the behavior of FPI (Theorem 2.3)
and its corollary (Corollary 2.4; Math majors,
please read the proof), an example
(Sec. 2.2).
-
Lecture 10 (Wed, Sep 15):
Newton's method:
geometric derivation,
obtaining it from Taylor expansion
of the function;
examples of failure of Newton's method,
problems with Newton's method
(one needs to compute derivatives,
one needs a good choice of the initial point p_0,
the root is not necessarily bracketed);
secant method (does not involve computing derivatives):
derivation of the method
(Sec. 2.3).
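The secant iteration derived here can be sketched in Python (an illustrative version of my own, not a course code):

```python
def secant(f, p0, p1, tol=1e-12, maxit=100):
    """Secant method: Newton's method with f' replaced by the
    difference quotient (f(p1) - f(p0)) / (p1 - p0)."""
    f0, f1 = f(p0), f(p1)
    for _ in range(maxit):
        p2 = p1 - f1 * (p1 - p0) / (f1 - f0)
        if abs(p2 - p1) < tol:
            return p2
        p0, f0, p1, f1 = p1, f1, p2, f(p2)
    return p1

# Same example as before: x^3 - 12 = 0, starting from 2 and 3
root = secant(lambda x: x**3 - 12, 2.0, 3.0)
```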
-
Lecture 11 (Fri, Sep 17):
Newton's method (cont.):
geometric interpretation of the secant method;
method of false position (Regula Falsi)
- the root is bracketed between
p_n and p_{n-1} at each step,
please read details from the book
(Sec. 2.3).
Error analysis of iterative methods:
order of convergence α
and asymptotic error constant λ
of a convergent sequence (and of an iterative method),
linear and quadratic convergence,
illustration of the speed of convergence
in the linearly and in the quadratically converging cases,
theorem about linear convergence
of an iterative method
p_n = g(p_{n-1})
with g'(p) ≠ 0
(Theorem 2.7, with proof)
(Sec. 2.4).
-
Lecture 12 (Mon, Sep 20):
Error analysis of iterative methods (cont.):
theorem about quadratic convergence
of an iterative method
p_n = g(p_{n-1})
with g'(p) = 0
(Theorem 2.8, with sketch of proof);
constructing a quadratically convergent
iterative procedure from a zero-finding problem
(and obtaining Newton's method as a result!);
multiplicity of a zero of a function,
simple zeros
(Sec. 2.4).
-
Lecture 13 (Wed, Sep 22):
Error analysis of iterative methods (cont.):
relation between the multiplicity of a zero
and the number of vanishing derivatives
(Theorems 2.10, 2.11),
examples;
handling multiple zeros
- finding a simple zero of
μ(x)=f(x)/f'(x)
is equivalent to finding a non-simple zero
of f(x);
remarks about rate of convergence:
bisection (linearly convergent, λ=1/2),
Newton for a simple zero (quadratically convergent),
secant (convergent of order α = (1+√5)/2 ≈ 1.62),
Regula Falsi (linearly convergent)
(Sec. 2.4).
Accelerating convergence:
derivation of the Aitken Δ2 method
(Sec. 2.5).
-
Lecture 14 (Fri, Sep 24):
Accelerating convergence (cont.):
forward differences, Aitken's method,
order of convergence of Aitken's method;
Steffensen's method for accelerating convergence,
order of convergence of Steffensen's modification
of a fixed-point iteration
(Sec. 2.5).
Zeros of polynomials and Muller's method:
fundamental theorem of algebra, corollaries
(Sec. 2.6).
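Aitken's Δ² formula from this lecture, sketched in Python (an illustration of my own, not a course code):

```python
import math

def aitken(p):
    """Aitken's Δ² acceleration: q_n = p_n - (Δp_n)^2 / (Δ²p_n),
    applied to a linearly convergent sequence p."""
    return [p[n] - (p[n+1] - p[n])**2 / (p[n+2] - 2*p[n+1] + p[n])
            for n in range(len(p) - 2)]

# Linearly convergent FPI toward the fixed point of cos near 0.739
p = [1.0]
for _ in range(10):
    p.append(math.cos(p[-1]))
q = aitken(p)
```

The accelerated sequence q approaches the fixed point noticeably faster than p itself.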
-
Lecture 15 (Mon, Sep 27):
Zeros of polynomials and Muller's method (cont.):
Horner's method (synthetic division),
computing P(p_{n-1})
and P'(p_{n-1})
in Newton's method for polynomials
by using synthetic division,
method of deflation for finding zeros of polynomials
(Sec. 2.6).
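Horner's synthetic division, which evaluates P and P' simultaneously for use in Newton's method for polynomials, can be sketched as (illustrative Python of my own, not a course code):

```python
def horner(coeffs, x):
    """Evaluate P(x) and P'(x) by synthetic division (Horner's method).
    coeffs = [a_n, a_{n-1}, ..., a_0], highest degree first."""
    p = coeffs[0]
    dp = 0.0
    for a in coeffs[1:]:
        dp = dp * x + p   # synthetic division applied a second time gives P'
        p = p * x + a     # nested multiplication for P
    return p, dp

# P(x) = x^3 - 12 at x = 2: P(2) = -4, P'(2) = 12
val, der = horner([1, 0, 0, -12], 2.0)
```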
-
Lecture 16 (Wed, Sep 29):
Zeros of polynomials and Muller's method (cont.):
Muller's method for finding zeros of polynomials.
Sensitivity of zeros of polynomials - Wilkinson's example
(Sec. 2.6).
Introduction to interpolation and approximation:
need of approximating data, idea of interpolation
(Intro to Chapter 3).
Interpolation and Lagrange polynomial:
examples of polynomial interpolation with polynomials
of degrees 0 and 1 (Sec. 3.1).
-
Lecture 17 (Mon, Oct 4):
Interpolation and Lagrange polynomial (cont.):
idea, construction of the polynomials
L_{n,k}(x),
explicit expression of the Lagrange interpolating polynomial,
error bound in Lagrange interpolation (Theorem 3.3),
a sketch of an example of a local linear interpolation
- piecewise-linear interpolating polynomial
for a function on an interval
(see Example 2 on p.109-110)
(Sec. 3.1).
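A direct Python sketch of evaluating the Lagrange form of the interpolating polynomial (illustrative only; the names are my own):

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial
    through the points (xs[k], ys[k]) at the point x."""
    total = 0.0
    for k in range(len(xs)):
        # L_{n,k}(x) = prod_{j != k} (x - x_j) / (x_k - x_j)
        Lk = 1.0
        for j in range(len(xs)):
            if j != k:
                Lk *= (x - xs[j]) / (xs[k] - xs[j])
        total += ys[k] * Lk
    return total

# The degree-2 interpolant of f(x) = x^2 reproduces it exactly
v = lagrange([0, 1, 2], [0, 1, 4], 1.5)
```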
-
Lecture 18 (Wed, Oct 6):
Divided differences:
idea, notations for Newton's divided differences,
interpolating polynomial in Newton's divided differences form,
computing the divided differences manually (in a table),
efficiency of Newton's divided differences algorithm
(Sec. 3.2).
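The divided-difference table and the nested evaluation of Newton's form can be sketched as (illustrative Python of my own, not a course code):

```python
def divided_differences(xs, ys):
    """Return the coefficients f[x_0], f[x_0,x_1], ..., f[x_0,...,x_n]
    of Newton's divided-difference form, computed in place."""
    n = len(xs)
    table = list(ys)
    coeffs = [table[0]]
    for k in range(1, n):
        for i in range(n - 1, k - 1, -1):
            table[i] = (table[i] - table[i-1]) / (xs[i] - xs[i-k])
        coeffs.append(table[k])
    return coeffs

def newton_eval(xs, coeffs, x):
    """Evaluate the Newton-form interpolant by nested multiplication."""
    p = coeffs[-1]
    for k in range(len(coeffs) - 2, -1, -1):
        p = p * (x - xs[k]) + coeffs[k]
    return p

# Same data as before: interpolating f(x) = x^2 at 0, 1, 2
c = divided_differences([0, 1, 2], [0, 1, 4])
v = newton_eval([0, 1, 2], c, 1.5)
```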
-
Lecture 19 (Fri, Oct 8):
Divided differences (cont.):
forward and backward differences,
"s choose k" notation,
forward-differences and backward-differences ways
of writing an interpolating polynomial
(Sec. 3.2).
-
Lecture 20 (Mon, Oct 11):
Cubic spline interpolation:
general idea, cubic spline interpolant S(x)
and its restrictions S_j(x)
to [x_j, x_{j+1}],
compatibility conditions on the functions
S_j(x)
and their derivatives at the internal node points
x_{j+1} (j = 0, 1, ..., n−2)
(Sec. 3.4).
-
Lecture 21 (Wed, Oct 13):
Cubic spline interpolation (cont.):
reformulating the conditions on S_j(x)
in terms of the coefficients a_j, b_j, c_j, d_j;
deriving a linear system for b_j and c_j
from the conditions on S_j(x) at the internal points;
deriving a linear system of (n−1) equations
for the (n+1) unknowns c_0, c_1, ..., c_n
from the conditions on S_j(x)
at the internal points;
physical meaning and mathematical formulation
of the free (natural) and the clamped boundary conditions
(Sec. 3.4).
-
Lecture 22 (Fri, Oct 15):
Exam 1.
-
Lecture 23 (Mon, Oct 18):
Cubic spline interpolation (cont.):
deriving a linear system of (n+1)
equations for the (n+1) unknowns
c_0, c_1, ..., c_n
from the conditions on S_j(x)
at the internal points
and the free (natural) boundary conditions;
the linear system for c_0, c_1, ..., c_n
can be written as a matrix equation
A⋅c = v,
where A is an (n+1)×(n+1)
tridiagonal matrix, and the number of operations
to solve the system is of order n
(while for a general (n+1)×(n+1)
matrix it is of order n^3);
theoretical error bounds: error = O(h^4) (Theorem 3.13),
remarks about the practical importance of cubic splines
(Sec. 3.4).
-
Lecture 24 (Wed, Oct 20):
Numerical differentiation:
basic idea - using Lagrange interpolating polynomials;
forward- and backward-difference formulas (error = O(h)),
general (n+1)-point formula
(Sec. 4.1).
-
Lecture 25 (Fri, Oct 22):
Numerical differentiation (cont.):
three-point formulas (error = O(h^2)),
formula for the second derivative derived from the Taylor expansion
(error = O(h^2))
(Sec. 4.1).
Richardson extrapolation:
idea and derivation (Sec. 4.2).
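The three-point (central) formula for the first derivative, with its O(h^2) error, can be sketched as (illustrative Python of my own):

```python
import math

def central_diff(f, x, h):
    """Three-point (central) formula:
    f'(x) ≈ (f(x+h) - f(x-h)) / (2h), truncation error O(h^2)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Approximate the derivative of sin at 0.5; the exact value is cos(0.5)
d = central_diff(math.sin, 0.5, 1e-5)
```

Note that taking h too small eventually makes roundoff error dominate the O(h^2) truncation error, as discussed in Sec. 4.1.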
-
Lecture 26 (Mon, Oct 25):
Richardson extrapolation (cont.):
general formula, an example
(Sec. 4.2).
Numerical integration:
goal, setup, idea - using Lagrange interpolation polynomial
(Sec. 4.3).
-
Lecture 27 (Wed, Oct 27):
Numerical integration (cont.):
weighted mean value theorem for integrals;
derivation of the quadrature formula and the expression
for the error for the trapezoidal method (using linear
Lagrange interpolation); Simpson's formula for numerical
quadrature - idea
(Sec. 4.3).
-
Lecture 28 (Fri, Oct 29):
Numerical integration (cont.):
derivation of the Simpson's formula and the expression
for the error term;
degree of accuracy (precision) of a quadrature formula;
open and closed Newton-Cotes (N-C) formulas,
theorem on the errors in closed N-C formulas (Theorem 4.2),
examples of closed N-C formulas (trapezoidal, Simpson's,
Simpson's 3/8); idea of the open N-C formulas,
derivation of the midpoint rule (open N-C method with n=0)
(read pages 189-194 of Sec. 4.3).
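The basic closed Newton-Cotes rules from this lecture, sketched in Python (illustrative only, my own names); Simpson's rule, having degree of precision 3, integrates cubics exactly:

```python
def trapezoid(f, a, b):
    """Trapezoidal rule on [a, b]: (h/2)(f(a) + f(b)), h = b - a."""
    return (b - a) / 2 * (f(a) + f(b))

def simpson(f, a, b):
    """Simpson's rule on [a, b]: (h/3)(f(a) + 4 f(m) + f(b)),
    where h = (b - a)/2 and m = a + h is the midpoint."""
    h = (b - a) / 2
    return h / 3 * (f(a) + 4 * f(a + h) + f(b))

# Degree of precision 3: Simpson's rule is exact for x^3 on [0, 1]
v = simpson(lambda x: x**3, 0, 1)
```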
-
Lecture 29 (Mon, Nov 1):
Numerical integration (cont.):
collection of integration formulae
(Sec. 4.3).
Composite numerical integration:
derivation of the composite trapezoidal rule and the error formula
(Theorem 4.5)
(Sec. 4.4).
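A Python sketch of the composite trapezoidal rule (illustrative only, my own names):

```python
import math

def composite_trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n equal subintervals;
    global error O(h^2), h = (b - a)/n (Theorem 4.5)."""
    h = (b - a) / n
    s = (f(a) + f(b)) / 2
    for i in range(1, n):
        s += f(a + i * h)
    return h * s

# Integral of sin over [0, pi]; the exact value is 2
v = composite_trapezoid(math.sin, 0, math.pi, 1000)
```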
-
Lecture 30 (Wed, Nov 3):
Composite numerical integration:
derivation of the formulas and the error terms
of the composite Simpson's rule
(Sec. 4.4).
Romberg integration:
idea - composite trapezoidal rule
for different stepsizes followed by
Richardson's extrapolation, derivation
of the formulas for Romberg integration
(Sec. 4.5).
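Romberg integration, i.e., composite trapezoidal estimates refined by Richardson extrapolation, can be sketched as (illustrative Python of my own, not a course code):

```python
import math

def romberg(f, a, b, n):
    """Romberg integration with n levels: R[i][0] are composite
    trapezoidal values with step (b-a)/2^i, and each further column
    is one Richardson extrapolation of the previous one."""
    R = [[0.0] * n for _ in range(n)]
    h = b - a
    R[0][0] = h / 2 * (f(a) + f(b))
    for i in range(1, n):
        h /= 2
        # refine the trapezoidal value by adding the new midpoints only
        s = sum(f(a + (2 * k - 1) * h) for k in range(1, 2 ** (i - 1) + 1))
        R[i][0] = R[i-1][0] / 2 + h * s
        for j in range(1, i + 1):
            R[i][j] = R[i][j-1] + (R[i][j-1] - R[i-1][j-1]) / (4 ** j - 1)
    return R[n-1][n-1]

# Integral of sin over [0, pi]; the exact value is 2
v = romberg(math.sin, 0, math.pi, 6)
```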
-
Lecture 31 (Fri, Nov 5):
Gaussian quadrature:
idea and a simple example,
theoretical foundations of Gaussian quadrature:
linear space (vector space),
uniqueness of the zero element
(see the handout on Gaussian
quadrature).
-
Lecture 32 (Mon, Nov 8):
Gaussian quadrature (cont.):
theoretical foundations of Gaussian quadrature:
linear space of polynomials
of degree not exceeding n,
inner product linear space,
inner product in a space of polynomials,
weight function w,
constructing an orthogonal family of polynomials
(see the handout on Gaussian
quadrature).
-
Lecture 33 (Wed, Nov 10):
Gaussian quadrature (cont.):
using orthogonal polynomials for numerical integration
- Theorem 1 in the
handout on Gaussian quadrature,
an example, Legendre polynomials
(Sec. 4.7).
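As a small illustration (my own sketch, not a course code), the two-point Gauss-Legendre rule, with the general interval [a, b] mapped to [−1, 1]:

```python
import math

def gauss_legendre_2(f, a, b):
    """Two-point Gauss-Legendre quadrature: nodes ±1/sqrt(3) and
    weights 1 on [-1, 1], mapped linearly to [a, b]."""
    m, r = (a + b) / 2, (b - a) / 2   # midpoint and half-length of [a, b]
    t = 1 / math.sqrt(3)
    return r * (f(m - r * t) + f(m + r * t))

# Degree of precision 3: exact for cubics, e.g. x^3 + x on [0, 1],
# whose exact integral is 1/4 + 1/2 = 0.75
v = gauss_legendre_2(lambda x: x**3 + x, 0, 1)
```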
-
Lecture 34 (Fri, Nov 12):
Practical issues in numerical integration:
dealing with singularities in the integrand;
infinite limit(s) of integration -
changing variables so that the limits of integration become finite
(possibly after splitting the domain of integration),
choosing a large enough interval
[a,b] such that the "tails" have small enough values
(Sec. 4.9).
Elementary theory of IVPs for ODEs:
initial-value problems (IVPs) for ordinary differential equations
(ODEs) - definition, examples; Lipschitz condition and Lipschitz
constant, examples of Lipschitz and non-Lipschitz functions,
relation of Lipschitz condition with continuity and differentiability
(Sec. 5.1).
-
Lecture 35 (Mon, Nov 15):
Elementary theory of IVPs for ODEs (cont.):
bounded derivative implies Lipschitz,
existence and uniqueness of solutions
of an IVP for a first-order ODE,
well-posedness of an IVP for a first-order ODE,
an example of an IVP without a solution
(Sec. 5.1).
-
Lecture 36 (Wed, Nov 17):
Elementary theory of initial-value problems (cont.):
an example of an IVP with infinitely many solutions -
equation of formation of a droplet of a liquid in saturated vapor
(Sec. 5.1).
Euler's method:
idea, derivation
(Sec. 5.2).
-
Lecture 37 (Fri, Nov 19):
Euler's method (cont):
error analysis of the Euler method
(Sec. 5.2).
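Euler's method as derived in these lectures, sketched in Python (illustrative only; the course code is the MATLAB file euler.m from Mathews and Fink):

```python
import math

def euler(f, a, b, y0, n):
    """Euler's method for y' = f(t, y), y(a) = y0,
    with n equal steps of size h = (b - a)/n on [a, b]."""
    h = (b - a) / n
    t, y = a, y0
    ts, ys = [t], [y]
    for _ in range(n):
        y = y + h * f(t, y)   # w_{i+1} = w_i + h f(t_i, w_i)
        t = t + h
        ts.append(t)
        ys.append(y)
    return ts, ys

# Test problem: y' = y, y(0) = 1, whose exact solution is e^t
ts, ys = euler(lambda t, y: y, 0.0, 1.0, 1.0, 1000)
```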
-
Lecture 38 (Mon, Nov 22):
Exam 2.
-
Lecture 39 (Mon, Nov 29):
Higher-order Taylor methods:
local truncation error,
proof that the local truncation error
of Euler's method is O(h)
(Sec. 5.3).
-
Lecture 40 (Wed, Dec 1):
Higher-order Taylor methods (cont.):
derivation of higher-order Taylor methods, an example,
the local truncation error of the Taylor method of order n
is O(h^n) (Theorem 5.12, without proof),
practical difficulties of applying higher-order Taylor methods
(Sec. 5.3).
Runge-Kutta methods:
Taylor Theorem in 2 variables (Theorem 5.13, without proof)
(Sec. 5.4).
-
Lecture 41 (Fri, Dec 3):
Runge-Kutta methods (cont.):
derivation of the midpoint method - a simple RK method
that has the same local truncation error
as the Taylor method of order 2,
geometric interpretation of the method
and comparison with Euler's method
(pages 275-276 of Sec. 5.4).
-
Lecture 42 (Mon, Dec 6):
Runge-Kutta methods (cont.):
derivation of the modified Euler method
and Heun's method;
the classical RK method of order four
(without derivation)
(Sec. 5.4).
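The classical fourth-order Runge-Kutta method, sketched in Python (an illustration of my own, not a course code):

```python
import math

def rk4(f, a, b, y0, n):
    """Classical fourth-order Runge-Kutta method for y' = f(t, y),
    y(a) = y0; local truncation error O(h^4)."""
    h = (b - a) / n
    t, y = a, y0
    for _ in range(n):
        k1 = h * f(t, y)
        k2 = h * f(t + h/2, y + k1/2)
        k3 = h * f(t + h/2, y + k2/2)
        k4 = h * f(t + h, y + k3)
        y += (k1 + 2*k2 + 2*k3 + k4) / 6   # weighted average of slopes
        t += h
    return y

# Same test problem as for Euler's method: y' = y, exact y(1) = e
y1 = rk4(lambda t, y: y, 0.0, 1.0, 1.0, 100)
```

With only 100 steps, RK4 is far more accurate here than Euler's method with 1000 steps, which is the usual trade-off of work per step against order.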
-
Lecture 43 (Wed, Dec 8):
Runge-Kutta methods (cont.):
recapitulation on RK methods, notations
(Sec. 5.4).
Error control and the Runge-Kutta-Fehlberg method:
idea of using two methods of different accuracy
to control the error and adapt the stepsize
(pages 283-285 of Sec. 5.5).
Grading:
Your grade will be determined by your performance
on the following coursework:
Coursework | Weight
Homework | 20%
Exam 1 | 25%
Exam 2 | 25%
Final Exam | 30%
Homework: Homework assignments will be set
regularly throughout the semester.
Each homework
will consist of several problems,
of which some pseudo-randomly chosen problems will be graded.
Your two lowest homework grades will be dropped.
All hand-in assignments
will carry a specific due date and must be submitted
in class on the due date.
If you cannot come to class, you can
turn in your homework in my office no later than 5 p.m.
on the same day (if I am not in my office,
you can slip it in under the door).
No late homework will be accepted.
Shortly after a homework assignment's due date,
solutions to the problems from that assignment
will be placed on restricted reserve in Bizzell Library.
All homework assignments will be posted on this
page one week before the assignment is due.
Exams:
There will be two in-class midterms and a (comprehensive) in-class final.
Tentative dates for the midterms are
October 11 (Monday) and November 19 (Friday).
The final is scheduled for December 17 (Friday), 8:00-10:00 a.m.
All tests must be taken at the scheduled times,
except in extraordinary circumstances.
Please do not arrange travel plans that prevent you
from taking any of the exams at the scheduled time.
Attendance:
You are required to attend class on those days when an
examination is being given; attendance during other class periods is also
strongly encouraged.
You are fully responsible for the
material covered in each class, whether or not you attend.
Make-ups for missed exams will be given only if
there is a compelling reason for the absence,
which I know about beforehand
and can document independently of your testimony
(for example, via a note or phone call from a
doctor or a parent).
Technology:
Because this is a mathematics course,
we will emphasize the mathematical underpinnings of numerical
analysis and deliberately de-emphasize acquiring expertise in
any particular computer programming
language or software package. However, we will
frequently engage in computations
to illustrate the mathematical results that we derive.
For all computations required on in-class exams
and for most homework problems,
a calculator will be sufficient.
Even if the complexity of the numerical methods requires the
use of a computer,
the amount of programming you will need to do
will be very small, and previous programming experience is not assumed.
Moreover, the textbook comes with a CD-ROM
which contains programs for all of the numerical algorithms
developed in the textbook
(written in Fortran, C, Maple, Mathematica, Pascal, and MATLAB);
these programs are also available
on the web at
http://www.as.ysu.edu/~faires/Numerical-Analysis/DiskMaterial/index.html.
In class I will use MATLAB and Mathematica to illustrate
how some algorithms are implemented.
MATLAB and Mathematica are available on the computers in the University's computer labs,
in particular, in the College of Arts and Sciences computer labs
in the Physical Sciences Center (room 232) and in Dale Hall Tower;
you can also purchase a student version of MATLAB to load on your own PC
at the University Bookstore,
but please note that you are not required to buy it.
Academic calendar for
Fall 2010.
Course schedule for
Fall 2010.
Policy on W/I Grades:
Through October 1 (Friday), you can withdraw
from the course with an automatic "W".
In addition, from October 4 (Monday) to December 10 (Friday),
you may withdraw and receive a "W" or "F"
according to your standing in the class.
Dropping after November 1 (Monday) requires a petition to the Dean.
(Such petitions are not often granted.
Furthermore, even if the petition
is granted, I will give you a grade
of "Withdrawn Failing" if you are
indeed failing at the time of your petition.)
Please check the dates in the Academic Calendar!
The grade of "I" (Incomplete)
is not intended to serve as
a benign substitute for the grade of "F".
I only give the "I" grade
if a student has completed the majority
of the work in the course
(for example everything except the final exam),
the coursework cannot be completed
because of compelling and verifiable problems
beyond the student's control, and the student expresses a clear
intention of making up the missed work as soon as possible.
Academic Misconduct: All cases of suspected academic misconduct will
be referred to the Dean of the College of Arts and Sciences for prosecution
under the University's Academic Misconduct Code. The penalties can be quite
severe. Don't do it!
For details on the University's
policies concerning academic integrity see
A Student's Guide to Academic Integrity.
For information on your rights to appeal charges
of academic misconduct consult the
Rights and Responsibilities Under the Academic Misconduct Code.
Students are also bound by the provisions of the
OU Student Code.
Students With Disabilities:
The University of Oklahoma is committed to providing reasonable accommodation
for all students with disabilities. Students with disabilities who require
accommodations in this course are requested to speak with the instructor
as early in the semester as possible. Students with disabilities must be
registered with the Office of Disability Services prior to receiving
accommodations in this course. The Office of Disability Services is located
in Goddard Health Center, Suite 166: phone 405-325-3852 or TDD only
405-325-4173.
MATLAB tutorials:
Good to know: