MATH 4073.980 - Numerical Analysis - Fall 2013
★ 10:30-11:45 & 2:00-3:15, in 4216 on 8/20, 9/10, 10/1, 10/22, 11/12, and 12/3;
★ 1:30-4:10 in 4W135 on all other Tuesdays
Instructor:
Nikola Petrov, 802 PHSC, (405)325-4316, npetrov AT math.ou.edu
Course catalog description:
Prerequisite: 3113 or 3413. Solution of linear and nonlinear
equations, approximation of functions, numerical integration and
differentiation, introduction to analysis of convergence and errors,
pitfalls in automatic computation, one-step methods in the solutions
of ordinary differential equations. (F)
Text:
A. Greenbaum, T. P. Chartier, Numerical Methods,
Princeton University Press, 2012, ISBN 978-0691151229.
The course will cover major parts of chapters 4-6, 8-11.
Homework:
- Homework 1, due August 27 (Tuesday).
- Homework 2, due September 4 (Wednesday), by 4:00 p.m., in Ms. Wagenblatt's office.
  Codes needed for Homework 2:
- Homework 3, due September 11 (Wednesday), by 4:00 p.m., in Ms. Wagenblatt's office.
  Codes needed for Homework 3:
- Homework 4, due September 18 (Wednesday), by 4:00 p.m., in Ms. Wagenblatt's office.
- Homework 5, due September 26 (Thursday), by 4:00 p.m., in Ms. Wagenblatt's office.
- Homework 6, due October 22 (Tuesday), at the beginning of class.
- Homework 7, due October 29 (Tuesday), by 4:00 p.m., in Ms. Wagenblatt's office.
  Codes needed for Homework 7:
  - the MATLAB code euler.m;
  - the MATLAB code taylor2nd.m
  (both euler.m and taylor2nd.m are taken from the web-site www.pcs.cnu.edu/~bbradie/matlab.html with codes accompanying the book A Friendly Introduction to Numerical Analysis by Brian Bradie, Pearson/Prentice Hall, 2006).
- Homework 8, due November 18 (Monday), by 12 noon, in Ms. Wagenblatt's office.
- Homework 9, due December 3 (Tuesday), in class.
Content of the lectures:
Lecture 1 (Tue, Aug 20, in 4216):
Review of calculus:
- notations, limit of a sequence, limit and continuity of a function;
- intermediate value theorem (IVT), using the IVT to prove the existence of a solution of an equation;
- definition of the derivative of a function, relationship between continuity and differentiability of a function, using derivatives to prove the uniqueness of the solution of an equation;
- Taylor's theorem, examples of Taylor series;
- Riemann sums and the Riemann integral;
- fundamental theorem of calculus (FTC);
- mean value theorem (MVT), mean value theorem for integrals.
Computer arithmetic:
- binary, decimal, and hexadecimal systems, bits and bytes, the 1985 IEEE 754 standard for a long real (double-precision) number;
- a C code test_machine_epsilon.c and the output output_test_machine_epsilon from running it (and a MATLAB code test_machine_epsilon_matlab.m doing the same - running it with argument 0 or with argument 1 will give you different results - why?);
- working with a finite number of digits - rounding and chopping, roundoff errors; examples of potential roundoff errors - computing a sum of many terms.
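The machine-epsilon experiment can be sketched in a few lines (a Python analogue of the course's codes, not the posted test_machine_epsilon.c itself):

```python
# Halve eps until 1 + eps is rounded back to 1: the last eps that still
# changes the sum is the machine epsilon of IEEE 754 double precision.
eps = 1.0
while 1.0 + eps / 2.0 > 1.0:
    eps /= 2.0
print(eps)  # 2.220446049250313e-16, i.e. 2^(-52)
```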
Lecture 2 (Tue, Aug 27, in 4W135)
The bisection method [Sec. 4.1]:
- general idea of the method;
- theoretical justification - the Intermediate Value Theorem;
- rate of convergence of the method, estimating the number of steps needed to achieve the desired accuracy;
- advantages and disadvantages of the bisection method;
- MATLAB codes bisection.m (with instructions how to run it) and myfunction.m, and the output from running them.
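For reference, a minimal Python sketch of the bisection idea (not the course's bisection.m, which is in MATLAB):

```python
def bisection(f, a, b, tol=1e-10):
    """Halve [a, b] while keeping a sign change of f; the error after n
    steps is at most (b - a) / 2**n, so convergence is linear with ratio 1/2."""
    fa = f(a)
    while (b - a) / 2.0 > tol:
        m = (a + b) / 2.0
        fm = f(m)
        if fa * fm <= 0:   # root lies in [a, m]
            b = m
        else:              # root lies in [m, b]
            a, fa = m, fm
    return (a + b) / 2.0

print(bisection(lambda x: x * x - 2.0, 0.0, 2.0))  # ≈ 1.41421356...
```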
Newton's method [Sec. 4.3]:
- idea - replace the graph of the function around a point by the tangent line at that point, and look for the intersection of this straight line with the x-axis;
- derivation of the method;
- rate of convergence of Newton's method (Theorem 4.3.1);
- advantages and disadvantages of Newton's method;
- MATLAB code newton.m and the output from running it (looking for the square root of 2).
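A Python sketch of the iteration x_{k+1} = x_k − f(x_k)/f'(x_k) (not the course's newton.m), again looking for the square root of 2:

```python
def newton(f, fprime, x0, tol=1e-12, maxit=50):
    """Newton's method: follow the tangent line to its x-intercept;
    quadratically convergent near a simple root."""
    x = x0
    for _ in range(maxit):
        dx = f(x) / fprime(x)
        x -= dx
        if abs(dx) < tol:
            break
    return x

print(newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0))  # ≈ 1.414213562373095
```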
Quasi-Newton methods [Sec. 4.4]:
- the constant slope method (linearly convergent);
- the secant method (rate of convergence = the golden mean ≈ 1.618);
- MATLAB code secant.m and the output from running it (looking for the square root of 2).
Analysis of fixed point methods [pages 93-95 of Sec. 4.5]:
- writing the equation f(x)=0 as a fixed-point equation φ(x)=x for φ(x)=f(x)+x;
- proof that, if the function φ is continuous and the sequence (x_k)_{k=0,1,2,...} defined iteratively by x_{k+1}=φ(x_k) converges to x*, then the limit x* is a fixed point of φ, i.e., φ(x*)=x*;
- iterating functions graphically;
- the condition |φ'(x)|<1 in an interval around the fixed point of φ as a sufficient condition for convergence of the fixed point iteration (Theorem 4.5.1).
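A quick illustration of fixed-point iteration (a Python sketch; the equation cos(x) = x is my own example, not one from the book): since |φ'(x*)| = |sin(x*)| ≈ 0.67 < 1 at the fixed point, the iteration converges linearly.

```python
import math

# Fixed-point iteration x_{k+1} = cos(x_k); |phi'| < 1 near the fixed
# point, so the iterates converge (linearly, with ratio ≈ 0.67).
x = 1.0
for _ in range(100):
    x = math.cos(x)
print(x)  # ≈ 0.7390851332 (the solution of cos(x) = x)
```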
Lecture 3 (Tue, Sep 3, in 4W135)
Analysis of fixed point methods [pages 96-98 of Sec. 4.5]:
- theorem on existence and uniqueness of a limit of a fixed-point iteration (with a sketch of the basic ideas of the proof, in a geometric form);
- practical issues in choosing the fixed-point iteration - an example showing that different choices of a fixed-point iteration corresponding to the same zero-finding problem lead to different rates of convergence, and in some cases to a divergent sequence [Example 4.5.2 on pages 96-97 of the book];
- definitions of order of convergence α and asymptotic error constant λ of a convergent sequence; linear (α=1) and quadratic (α=2) convergence;
- theorem on linearly convergent sequences: if g(x*)=x* and g'(x*)≠0, then the convergence is linear with λ=|g'(x*)|;
- theorem on sequences converging faster than linearly: if g(x*)=x* and g'(x*)=0, then the convergence is at least quadratic;
- derivation of Newton's method from the condition g'(x*)=0;
- linear convergence of Newton's method on a non-simple root;
- multiplicity of a zero, examples;
- finding the multiplicity of a zero p of a function f from the number of vanishing derivatives of f at p;
- examples illustrating the fact that non-simple roots are "fragile".
Lecture 4 (Tue, Sep 10, in 4216)
Analysis of fixed point methods (cont.):
- a pictorial derivation of the rate of convergence of a linearly converging fixed point iteration;
- accelerating convergence: Aitken's Δ²-method;
- accelerating convergence: Steffensen's method (quadratically convergent);
- illustration of the rate of convergence of Newton's method, the secant method, fixed point iteration, Aitken's extrapolation, and Steffensen's method on the example of the equation sin(x)+x=1: the Mathematica notebook rate_of_convergence1.nb (and the file rate_of_convergence1.txt in plain text).
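The same experiment can be sketched in Python (a hedged sketch, not the posted Mathematica notebook): write sin(x) + x = 1 in the fixed-point form x = 1 − sin(x), whose plain iteration converges only linearly (|φ'| ≈ 0.87 near the root), and accelerate it with Steffensen's method (one Aitken Δ² step per pass).

```python
import math

def phi(x):
    # fixed-point form of sin(x) + x = 1
    return 1.0 - math.sin(x)

def steffensen(x, n):
    """Each pass uses two plain iterates and one Aitken Delta^2 step,
    which upgrades the linear convergence of phi to quadratic."""
    for _ in range(n):
        x1, x2 = phi(x), phi(phi(x))
        denom = x2 - 2.0 * x1 + x
        if denom == 0.0:
            return x2
        x = x - (x1 - x) ** 2 / denom
    return x

x = steffensen(0.5, 6)
print(x, math.sin(x) + x - 1.0)  # root ≈ 0.510973, residual ≈ 0
```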
Interpolation of functions [pages 181-183, 185-192 of Sec. 8.2-8.4]:
- general set-up of the problem of interpolating functions;
- approximation by polynomials, Weierstrass Approximation Theorem;
- Lagrange interpolating polynomial - derivation and rigorous error bound, practical deficiencies;
- Newton's divided differences method;
- Newton's forward differences method.
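A compact Python sketch of Newton's divided differences (my own minimal version, not code from the course):

```python
def divided_differences(xs, ys):
    """Build the divided-difference coefficients f[x0], f[x0,x1], ...
    in place, column by column of the divided-difference table."""
    coef = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(xs, coef, x):
    """Evaluate the Newton form by nested multiplication (Horner-style)."""
    p = coef[-1]
    for c, xk in zip(coef[-2::-1], xs[-2::-1]):
        p = p * (x - xk) + c
    return p

xs, ys = [0.0, 1.0, 2.0], [0.0, 1.0, 4.0]   # samples of f(x) = x^2
coef = divided_differences(xs, ys)
print(newton_eval(xs, coef, 3.0))  # 9.0: the interpolant reproduces x^2
```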
Lecture 5 (Tue, Sep 17, in 4W135)
Piecewise polynomial interpolation: piecewise linear interpolation [pages 197-200 of Sec. 8.6]:
- advantages and disadvantages;
- "big O" notation; the error in piecewise linear interpolation is O(h^2);
- idea of choosing the location of the nodes adapted to the function that is being interpolated;
- attempt to use piecewise quadratic interpolation (unsuccessful).
Piecewise cubic Hermite interpolation [pages 200-201 of Sec. 8.6.1]:
- idea - constructing a cubic polynomial between each pair of adjacent data points (x_{i-1}, y_{i-1}), (x_i, y_i) that matches the value of the function f and the value of the derivative f' at the points x_{i-1} and x_i;
- disadvantage of piecewise cubic Hermite interpolation - the interpolant is only C^1 (i.e., it has only one continuous derivative), and the values of the derivatives f'(x_i) are usually unknown.
Cubic spline interpolation [pages 201-204 of Sec. 8.6.2]:
- idea - constructing a piecewise cubic interpolant that is C^2 (i.e., it has two continuous derivatives);
- implementation by matching the values of two adjacent pieces s_i(x) and their first and second derivatives at the node point where the two pieces "meet";
- a simple example.
Lecture 6 (Tue, Sep 24, in 4W135)
Numerical differentiation [pages 213-216 of Sec. 9.1]:
- sensitivity of "naive" numerical differentiation to roundoff errors, because of subtraction of close values and because of division by a small number;
- using Taylor's formula to derive a formula for f'(x) with truncation (discretization) error O(h);
- truncation (discretization) error and roundoff error, "balancing" the two kinds of errors;
- centered-difference formula for f'(x) with truncation (discretization) error O(h^2);
- formula for f''(x) with truncation (discretization) error O(h^2).
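The forward- versus centered-difference accuracy can be checked in a few lines (a Python sketch under my own choice of f, x, and h):

```python
import math

f, x, h = math.sin, 1.0, 1e-4
forward  = (f(x + h) - f(x)) / h               # truncation error O(h)
centered = (f(x + h) - f(x - h)) / (2.0 * h)   # truncation error O(h^2)
exact = math.cos(1.0)
print(abs(forward - exact), abs(centered - exact))  # centered is far more accurate
```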
Richardson extrapolation [Sec. 9.2]:
- idea of Richardson extrapolation;
- using Richardson extrapolation with the centered-difference formula to achieve accuracy of O(h^4);
- generalizing the idea of Richardson extrapolation to derive higher-accuracy values.
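A sketch of one Richardson step applied to the centered difference (Python; f, x, and h are my own choices): since D(h) = f'(x) + c2 h^2 + c4 h^4 + ..., the combination (4 D(h/2) − D(h))/3 cancels the h^2 term.

```python
import math

def D(f, x, h):
    # centered difference, error O(h^2)
    return (f(x + h) - f(x - h)) / (2.0 * h)

def richardson(f, x, h):
    # (4 D(h/2) - D(h)) / 3 cancels the h^2 error term, leaving O(h^4)
    return (4.0 * D(f, x, h / 2.0) - D(f, x, h)) / 3.0

exact = math.cos(1.0)
print(abs(D(math.sin, 1.0, 0.1) - exact))          # ≈ 9e-4
print(abs(richardson(math.sin, 1.0, 0.1) - exact)) # ≈ 1e-7
```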
Newton-Cotes formulas for numerical integration [Sec. 10.1]:
- idea - integrating the Lagrange interpolating polynomial (of a certain degree) instead of the original function;
- using the linear Lagrange interpolating polynomial to derive the trapezoidal rule, error O(h^3);
- derivation of Simpson's rule for numerical integration by the method of undetermined coefficients, imposing the condition that the method should use three points (the endpoints and the midpoint) and should be exact for constant, linear, and quadratic functions (the standard method, integrating the quadratic Lagrange interpolating polynomial, can also be used, but the calculations are longer); the error of Simpson's rule is O(h^5).
Formulas based on piecewise polynomial interpolation [Sec. 10.2]:
- composite trapezoidal rule, error O(h^2);
- composite Simpson's rule, error O(h^4).
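Both composite rules are short to write down (a Python sketch; the test integral of sin over [0, π], whose exact value is 2, is my own example):

```python
import math

def composite_trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals; error O(h^2)."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

def composite_simpson(f, a, b, n):
    """Composite Simpson's rule; n must be even; error O(h^4)."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2.0 * sum(f(a + i * h) for i in range(2, n, 2))
    return h * s / 3.0

print(abs(composite_trapezoid(math.sin, 0.0, math.pi, 100) - 2.0))  # ≈ 1.6e-4
print(abs(composite_simpson(math.sin, 0.0, math.pi, 100) - 2.0))    # ≈ 1.1e-8
```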
Lecture 7 (Tue, Oct 1, in 4216)
Exam 1 [on the material covered in Lectures 1-5]
Theoretical foundations of Gaussian quadrature [Sec. 1 of the handout "Theoretical foundations of Gaussian quadrature"]:
- vector (linear) space;
- normed linear space;
- inner product linear space;
- Cauchy-Schwarz inequality, angle between two vectors;
- linear spaces of polynomials.
Lecture 8 (Tue, Oct 8, in 4W135)
Theoretical foundations of Gaussian quadrature (cont.) [Sec. 2 and Sec. 3 of the handout "Theoretical foundations of Gaussian quadrature"]:
- inner product in a space of polynomials, weight function;
- examples: Legendre polynomials, Hermite polynomials;
- Theorem 1 on Gaussian quadrature;
- derivation of a quadrature formula of degree 2.
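For instance, the two-point Gauss-Legendre rule on [−1, 1] (nodes ±1/√3, both weights 1) integrates every polynomial of degree ≤ 3 exactly; a quick Python check (my own sketch, not from the handout):

```python
import math

def gauss2(f):
    """Two-point Gauss-Legendre quadrature on [-1, 1]:
    nodes +-1/sqrt(3), both weights equal to 1; exact for degree <= 3."""
    t = 1.0 / math.sqrt(3.0)
    return f(-t) + f(t)

# Exact: the integral of x^3 + x^2 over [-1, 1] is 0 + 2/3.
print(gauss2(lambda x: x**3 + x**2))  # ≈ 2/3
```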
Lecture 9 (Tue, Oct 15, in 4W135, 1:30-4:10)
Romberg integration [Sec. 10.5]:
- Romberg integration = composite trapezoidal rule + several Richardson extrapolations;
- proof that the composite trapezoidal rule followed by one step of Richardson extrapolation results in the composite Simpson's rule.
Singularities [Sec. 10.7]:
- types of improper integrals - integrand that becomes infinite, or infinite domain of integration (or both);
- example - dealing with an infinity in the integrand in the integral of x^{−1/2}e^{−x} from 0 to 1, by expanding the exponential in a Taylor series in a small interval [0,a] for some small a>0;
- example - dealing with an infinity in the integrand and an infinite domain of integration in the integral of x^{−1/2}e^{−x^2} from 0 to ∞: expand x^{−1/2}e^{−x^2} in a Taylor series in [0,0.1], integrate ordinarily in [0.1,10], and ignore the integral of the function over [10,∞), using the estimate that the neglected integral is no more than the integral of 10^{−1/2}e^{−10x} over [10,∞), which can be computed analytically (and whose value is 10^{−3/2}e^{−100});
- another way of dealing with integrals over infinite domains - changing variables to make the domain finite.
Digression: On norms and inner products:
- l^p-norm on R^n;
- inner product and the corresponding norm in a space of functions;
- Fourier series of a periodic function; Parseval's (Plancherel's) Theorem: the mapping between a periodic function and its Fourier series is an isometry.
Lecture 10 (Tue, Oct 22, in 4216, 10:30-11:45 and 2:00-3:15)
Existence and uniqueness of solutions of IVPs for ODEs [Sec. 11.1]:
- IVP for a 1st order ODE: y'(t)=f(t,y), a<t<b, y(a)=α;
- IVP for a system of 1st order ODEs: y'(t)=f(t,y), a<t<b, y(a)=α, where y(t)=(y_1(t),...,y_n(t)), f(t,y)=(f_1(t,y),...,f_n(t,y)), α=(α_1,...,α_n);
- converting an order-n ODE to a system of n 1st order ODEs;
- Lipschitz functions, examples;
- existence and uniqueness of solutions of IVPs;
- an ODE without solutions;
- an ODE with infinitely many solutions (describing formation and growth of droplets in oversaturated vapor);
- well-posedness of IVPs.
Euler's method:
- setting up the problem, step size h=(b−a)/N, mesh points t_i=a+ih;
- derivation of Euler's method by using the Taylor series and ignoring all terms of order h^2 and higher;
- geometric meaning of Euler's method;
- error analysis for Euler's method.
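A Python sketch of Euler's method (not the course's euler.m), applied to the IVP y' = y − t^2 + 1, y(0) = 1/2 on [0,2] discussed in Lecture 11, whose exact solution is (t+1)^2 − (1/2)e^t:

```python
import math

def euler(f, a, b, alpha, N):
    """Euler's method: w_{i+1} = w_i + h f(t_i, w_i); global error O(h)."""
    h = (b - a) / N
    t, w = a, alpha
    for _ in range(N):
        w += h * f(t, w)
        t += h
    return w

f = lambda t, y: y - t * t + 1.0                 # the IVP from Lecture 11
exact = (2.0 + 1.0) ** 2 - 0.5 * math.exp(2.0)   # exact y(2) for y(0) = 1/2
print(abs(euler(f, 0.0, 2.0, 0.5, 200) - exact))   # error shrinks like h:
print(abs(euler(f, 0.0, 2.0, 0.5, 2000) - exact))  # about 10x smaller with 10x the steps
```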
Lecture 11 (Tue, Oct 29, in 3106, 1:30-4:10)
Euler's method (cont.):
- illustration of the method and error analysis on the example y'(t)=y−t^2+1, 0<t<2, y(0)=1/2, with exact solution y_exact(t)=(t+1)^2−(1/2)e^t;
- empirical study of the dependence of the error on the stepsize by plotting log|y_N−w_N| versus log(h) (the slope is equal to the power of h).
Higher-order Taylor methods:
- derivation of the order-n Taylor method by using Taylor's Theorem;
- definition of the local truncation error of a difference method for solving ODEs;
- error in higher-order Taylor methods: the local truncation error of the order-n Taylor method is O(h^n) (in particular, for Euler's method the local truncation error is O(h), because Euler's method is the order-1 Taylor method);
- a numerical example solved by using the order-2 Taylor method - empirical study of the dependence of the error on h.
Lecture 12 (Tue, Nov 5, in 3106, 1:30-4:10)
Runge-Kutta methods:
- goal: to develop accurate methods (with error O(h^k) for large k) for which one does not need to compute derivatives of the right-hand side of the ODE;
- Taylor's Theorem for a scalar function of two variables; binomial coefficients, Pascal's triangle;
- derivation of several 2nd order Runge-Kutta methods: the midpoint method, the modified Euler method, Heun's method;
- geometric representations of the midpoint method and the modified Euler method;
- the classical 4th order Runge-Kutta method;
- symbolic way of writing Runge-Kutta methods;
- automatic error control - the Runge-Kutta-Fehlberg (1970) method: estimating the error of a 4th order Runge-Kutta method by a 5th order Runge-Kutta method at each step.
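One step of the classical 4th order Runge-Kutta method, as a Python sketch (the test problem y' = y, y(0) = 1 is my own choice):

```python
import math

def rk4_step(f, t, w, h):
    """Classical 4th order Runge-Kutta: four slope evaluations per step,
    combined with weights 1/6, 2/6, 2/6, 1/6; local error O(h^5)."""
    k1 = f(t, w)
    k2 = f(t + h / 2.0, w + h / 2.0 * k1)
    k3 = f(t + h / 2.0, w + h / 2.0 * k2)
    k4 = f(t + h, w + h * k3)
    return w + h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

# Solve y' = y, y(0) = 1 up to t = 1; the exact value is e.
t, w, h = 0.0, 1.0, 0.1
for _ in range(10):
    w = rk4_step(lambda t, y: y, t, w, h)
    t += h
print(abs(w - math.e))  # ≈ 2e-6 with only 10 steps
```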
Lecture 13 (Tue, Nov 12, in 4216, 10:30-11:45 and 2:00-3:15)
Error control and variable step size algorithms:
- idea: use two methods of different order - the more accurate one is used for controlling the local truncation error;
- implementation, rules for changing the step size.
Classification of the methods for numerical solution of IVPs for ODEs:
- general form of a linear m-step method;
- explicit vs. implicit m-step methods, examples.
Multistep methods:
- Adams-Bashforth methods - using the Lagrange interpolation polynomial to interpolate the right-hand side of the ODE, f(t,y(t)), by using the values f(t_{i−m+1},w_{i−m+1}), f(t_{i−m+2},w_{i−m+2}), ..., f(t_i,w_i), and using this polynomial in order to approximate the value of the integral of f(t,y(t)) from t=t_i to t=t_{i+1};
- derivation of the 2-step Adams-Bashforth method;
- derivation of the error of the 2-step Adams-Bashforth method by using the Mean Value Theorem for Integrals - the local truncation error of the 2-step Adams-Bashforth method is O(h^2); in general, the local truncation error of the k-step Adams-Bashforth method is O(h^k);
- Adams-Moulton methods - using the Lagrange interpolation polynomial to interpolate the right-hand side of the ODE, f(t,y(t)), by using the values f(t_{i−m+1},w_{i−m+1}), f(t_{i−m+2},w_{i−m+2}), ..., f(t_i,w_i), f(t_{i+1},w_{i+1}), and using this polynomial in order to approximate the value of the integral of f(t,y(t)) from t=t_i to t=t_{i+1}; at each step one has to solve an implicit equation in order to compute the value of w_{i+1}.
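The 2-step Adams-Bashforth method w_{i+1} = w_i + (h/2)(3 f(t_i,w_i) − f(t_{i−1},w_{i−1})) in a Python sketch (my own example y' = y; the second starting value is seeded from the exact solution purely for illustration):

```python
import math

f = lambda t, y: y             # test ODE y' = y, y(0) = 1, exact solution e^t
h, N = 0.01, 100
w_prev, w = 1.0, math.exp(h)   # w_0 is exact; w_1 seeded from the exact solution

for i in range(1, N):
    t, t_prev = i * h, (i - 1) * h
    # 2-step Adams-Bashforth: only one new f-evaluation per step
    w_prev, w = w, w + h / 2.0 * (3.0 * f(t, w) - f(t_prev, w_prev))

print(abs(w - math.e))  # global error O(h^2)
```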
Lecture 14 (Tue, Nov 19, in 3106, 1:30-4:10)
Exam 2 [on the material covered in Lectures 7-13]
Adaptive quadrature:
- estimating the error of the trapezoidal method for numerical integration from the numerical values of T(f,a,b) and T(f,a,m)+T(f,m,b) (where T(f,a,b) is the outcome of the trapezoidal method applied to the integral of f over the interval [a,b]) - relying on the fact that the error of the trapezoidal method is approximately K(b−a)^3, where K is an (unknown) constant;
- a MATLAB implementation of a recursive method for adaptive quadrature; run these codes as follows:
  tic, trap_adapt1(inline('1/(1+x)'),2.0,3.0,1e-3)-log(4/3), toc
  which will give you the error between the exact value and the value obtained by adaptive quadrature of the integral of 1/(1+x) from 2 to 3 (whose exact value is log(4/3)); run this several times, replacing the allowed tolerance 1e-3 with 1e-4, 1e-5, ..., to compare the desired tolerance with the true error; the tic and toc commands display the execution time.
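The same recursion can be sketched in Python (a hedged re-implementation of the idea, not the course's trap_adapt1.m): since the trapezoid error scales like K(b−a)^3, halving the interval cuts the error by about 4, so |T(f,a,b) − (T(f,a,m)+T(f,m,b))| is roughly 3 times the error of the two-panel value.

```python
import math

def trap(f, a, b):
    # single-panel trapezoidal rule
    return (b - a) * (f(a) + f(b)) / 2.0

def trap_adapt(f, a, b, tol):
    """Accept the two-panel value when the K(b-a)^3 error model says its
    error is below tol; otherwise recurse on the halves with tol/2 each."""
    m = (a + b) / 2.0
    whole = trap(f, a, b)
    halves = trap(f, a, m) + trap(f, m, b)
    if abs(whole - halves) < 3.0 * tol:
        return halves
    return trap_adapt(f, a, m, tol / 2.0) + trap_adapt(f, m, b, tol / 2.0)

val = trap_adapt(lambda x: 1.0 / (1.0 + x), 2.0, 3.0, 1e-6)
print(abs(val - math.log(4.0 / 3.0)))  # true error, below the 1e-6 tolerance
```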
Predictor-corrector methods:
- idea of predictor-corrector methods: use an explicit method to find a temporary value of w_{i+1}, and use this value in the right-hand side of an implicit method, to avoid the need for solving an implicit algebraic equation at each step;
- an example: combining the explicit 4-step 4th-order Adams-Bashforth method and the implicit 3-step 4th-order Adams-Moulton method into a combined predictor-corrector method.
Lecture 15 (Tue, Nov 26, in 3106, 1:30-4:10)
Discrete least squares approximation:
- set-up: given a set of data, (x_1,y_1), (x_2,y_2), ..., (x_m,y_m), find the straight line L(x)=ax+b that minimizes the l^2 error, i.e., the sum of [y_i−L(x_i)]^2;
- using the least squares method when the assumed dependence between x and y is y=be^{ax}, in which case log(y) is a linear function of x: log(y)=log(b)+ax; similarly, if y=bx^a, then log(y)=log(b)+a log(x) is a linear function of log(x).
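The least squares line can be computed directly from the normal equations (a Python sketch with made-up data):

```python
def least_squares_line(xs, ys):
    """Solve the 2x2 normal equations for the line a*x + b minimizing
    the sum of squared residuals [y_i - (a*x_i + b)]^2."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Data lying exactly on y = 2x + 1 is recovered exactly.
print(least_squares_line([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0]))  # (2.0, 1.0)
```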
Orthogonal polynomials and least squares approximation:
- idea: using a polynomial P_n of degree n instead of a linear function, and computing the values of the coefficients of the polynomial that minimize the "error", i.e., the L^2 norm of f−P_n;
- a problem with this idea: the matrix of the system for the coefficients is a Hilbert matrix, and the numerical solution is very inaccurate;
- vector spaces of polynomials of degree no more than n, inner product in spaces of polynomials;
- reformulating the least squares approximation problem in a geometric language: finding the orthogonal projection of f onto a subspace;
- definition and properties of Chebyshev polynomials.
Lecture 16 (Tue, Dec 3, in 4216, 10:30-11:45 and 2:00-3:15)
Attendance:
You are required to attend class on those days when an
examination is being given;
attendance during other class periods is also expected.
You are fully responsible for the
material covered in each class, whether or not you attend.
Make-ups for missed exams will be given only if
there is a compelling reason for the absence,
which I know about beforehand
and can document independently of your testimony
(for example, via a note or a phone call from a doctor).
Homework:
Homework will be assigned regularly and will be posted on this web-site.
The homework will be due at the start of class on the due date
(or, when the class is taught from Norman, should be given
to Ms. Wagenblatt no later than 4:00 p.m. on the due date).
Each homework will consist of several problems,
of which some pseudo-randomly chosen problems will be graded.
Your lowest homework grade will be dropped.
Your homework should have your name clearly written on it, and should
be stapled. Please write the problems in the same order in which they
are given in the assignment.
No late homework will be accepted!
You are allowed (and encouraged) to work in small groups.
However, each of you will need to prepare individual solutions
written in your own words - this is the only way to
achieve real understanding!
Exams:
There will be two in-class midterms (75 minutes each) and a
(comprehensive) 2-hour final.
Tentative dates for the midterms are October 1 and November 12.
All tests must be taken at the scheduled times,
except in extraordinary circumstances.
Please do not arrange travel plans that prevent you
from taking any of the exams at the scheduled time.
Grading:
Your grade will be determined by your performance
on the following coursework:
Coursework                         Weight
Homework (lowest grade dropped)    30%
Two midterm exams (20% each)       40%
Final Exam                         30%
Useful links:
- the academic calendar;
- the class schedules.
Policy on W/I Grades:
Until August 30, there is no record of a grade for dropped courses.
From September 3 through September 27, you may withdraw and receive a "W" grade,
no matter what scores you have so far achieved.
If you drop the course after September 27, you will receive either a "W" or an "F"
depending on your grade at time of your withdrawal.
A withdrawal on or after October 28 would require a petition to the Dean
(such petitions are not often granted).
Avoidance of a low grade is not sufficient reason to obtain permission
to withdraw after October 28.
The grade of "I" is a special-purpose grade given when a specific task needs to be completed to finish the coursework; it is not intended to serve as a benign substitute for the grade of "F".
Please check the dates in the
academic calendar!
Academic Misconduct: All cases of suspected academic
misconduct will
be referred to the Dean of the College of Arts and Sciences for
prosecution
under the University's Academic Misconduct Code. The penalties can be
quite
severe. Don't do it!
For details on the University's
policies concerning academic integrity see the
Student's Guide to Academic Integrity
at the
Academic Integrity web-site.
For information on your rights to appeal charges
of academic misconduct consult the
Academic Misconduct Code.
Students are also bound by the provisions of the
OU Student Code.
Students With Disabilities:
The University of Oklahoma is committed to providing reasonable
accommodation
for all students with disabilities. Students with disabilities who
require
accommodations in this course are requested to speak with the
instructor
as early in the semester as possible. Students with disabilities must
be
registered with the Office of Disability Services prior to receiving
accommodations in this course. The Office of Disability Services is
located
in Goddard Health Center, Suite 166: phone 405-325-3852 or TDD only
405-325-4173.
Good to know:
MATLAB tutorials: