
### Higher-Order Finite Difference Methods

Since the 5-point formula gives an $O(h^2)$ method, one wonders whether it is worth going to higher order: truncate the centered finite-difference approximation at the next order, using in each direction

$$u''(x_j) = \frac{-u_{j-2} + 16\,u_{j-1} - 30\,u_j + 16\,u_{j+1} - u_{j+2}}{12\,h^2} + O(h^4).$$

The resulting computational cell for the differential operator is depicted in Figure 35.

Although the error is $O(h^4)$, this is not a popular method: it renders too many points near-boundary ones (even on a square grid!), requiring laborious special treatment. More problematic, however, it yields linear systems that are considerably more expensive to solve!
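As a quick sanity check of the fourth-order stencil above (an illustrative sketch; the test function and step sizes are arbitrary choices, not from the notes), halving $h$ should cut the error by roughly $2^4 = 16$:

```python
import numpy as np

def d2_fourth_order(u, x, h):
    """Fourth-order centered approximation to u''(x) using 5 points."""
    return (-u(x - 2*h) + 16*u(x - h) - 30*u(x)
            + 16*u(x + h) - u(x + 2*h)) / (12*h**2)

u = np.sin            # test function, with exact u'' = -sin
x = 1.0
err = {h: abs(d2_fourth_order(u, x, h) - (-np.sin(x))) for h in (0.1, 0.05)}
ratio = err[0.1] / err[0.05]   # expect roughly 16 for an O(h^4) method
```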

ALTERNATIVELY: the "Nine-Point Formula":

The computational cell is given in Figure 36. In homework you will derive the truncation error to be

$$\nabla_9^2 u - \nabla^2 u = \frac{h^2}{12}\nabla^4 u + O(h^4) = O(h^2), \qquad(163)$$

i.e. the same order as the 5-point formula! So apparently nothing is gained by adding the 4 extra nearest-neighbor points.

In homework you will see that computation with the 5-point formula is consistent with an error decaying proportionally to $h^2$. You will find, however, that for the 9-point formula the error decays like $h^4$, in spite of the above error estimate.

Why this discrepancy? Because the truncation error estimate above was evaluated on the Laplace equation $\nabla^2 u = 0$, not the Poisson equation $\nabla^2 u = f$: when $\nabla^2 u = 0$ the leading term in (163) vanishes, since $\nabla^4 u = \nabla^2(\nabla^2 u) = 0$.

By (163), the 9-point formula is in fact an $O(h^4)$ approximation of the modified equation

$$\nabla^2 u + \frac{h^2}{12}\nabla^4 u = f, \qquad(164)$$

which we can write as $\left(1 + \frac{h^2}{12}\nabla^2\right)\nabla^2 u = f$.

Note that $\left(1 + \frac{h^2}{12}\nabla^2\right)^{-1}$ exists for sufficiently small $h$. Multiplying (164) by this inverse shows that the 9-point solution satisfies $\nabla^2 u = \left(1 + \frac{h^2}{12}\nabla^2\right)^{-1} f = f - \frac{h^2}{12}\nabla^2 f + O(h^4)$: an $O(h^2)$ perturbation of the Poisson problem we wanted, consistent with the error estimate.

We can exploit this fact as follows: multiply both sides of the original Poisson equation $\nabla^2 u = f$ by $\left(1 + \frac{h^2}{12}\nabla^2\right)$ (for $h$ sufficiently small),

$$\nabla^2 u + \frac{h^2}{12}\nabla^4 u = f + \frac{h^2}{12}\nabla^2 f. \qquad(165)$$

This is an equation of the form (164), with right-hand side $f + \frac{h^2}{12}\nabla^2 f$, for which the 9-point formula produces an $O(h^4)$ error. It is not the equation we set out to discretize, but it has the same solution as the original Poisson problem, since (165) differs from $\nabla^2 u = f$ only by the application of an invertible operator. Hence the 9-point formula with the modified right-hand side $f + \frac{h^2}{12}\nabla^2 f$ (or its discrete analogue $f + \frac{h^2}{12}\nabla_5^2 f$) is a good, $O(h^4)$, approximation to the original problem!

This is an example of "Preconditioning." It is a very powerful way to get either a better truncation error or a faster solution of linear systems, with minimal extra computation... See homework.
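A small numerical sketch of this idea (illustrative only; the grid size and test problem are my choices): solve $\nabla^2 u = f$ on the unit square with homogeneous Dirichlet data using the 5-point formula, the plain 9-point formula, and the 9-point formula with the modified right-hand side $f + \frac{h^2}{12}\nabla_5^2 f$. Only the last should show the $O(h^4)$ behavior.

```python
import numpy as np

n = 15                              # interior points per direction
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
X, Y = np.meshgrid(x, x, indexing="ij")

# exact solution u = sin(pi x) sin(pi y), so f = lap u = -2 pi^2 u
U = np.sin(np.pi*X) * np.sin(np.pi*Y)
F = -2 * np.pi**2 * U

D = np.diag(-2*np.ones(n)) + np.diag(np.ones(n-1), 1) + np.diag(np.ones(n-1), -1)
I = np.eye(n)
A5 = (np.kron(D, I) + np.kron(I, D)) / h**2                           # 5-point
A9 = (6*(np.kron(D, I) + np.kron(I, D)) + np.kron(D, D)) / (6*h**2)   # 9-point

b = F.ravel()
u5  = np.linalg.solve(A5, b)
u9  = np.linalg.solve(A9, b)                          # plain 9-point: O(h^2)
u9m = np.linalg.solve(A9, b + (h**2/12) * (A5 @ b))   # modified RHS: O(h^4)
# (f vanishes on the boundary here, so A5 @ b is a valid discrete Laplacian of f)

exact = U.ravel()
err5, err9, err9m = (np.max(np.abs(v - exact)) for v in (u5, u9, u9m))
```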

SOLVING THE RESULTING SYSTEM

Define the resulting system

$$A\mathbf{u} = \mathbf{b}. \qquad(166)$$

Let's look at the structure of $A$ and from there decide on some methods for solving the linear system. For both the 5-point and the 9-point formulas it is clear that the matrices are symmetric and diagonally dominant. For small problems, direct solution is fine. For large problems, the fastest algorithms have at their core a conjugate-gradient iterative solver.

Let's consider the 5-point formula in some detail. Schematically, the matrix can be expressed in terms of smaller matrices:

$$A = \frac{1}{h^2}\begin{pmatrix} T & I & & \\ I & T & I & \\ & \ddots & \ddots & \ddots \\ & & I & T \end{pmatrix},$$

where $I$ is the $n \times n$ identity matrix and $T$ is tridiagonal,

$$T = \begin{pmatrix} -4 & 1 & & \\ 1 & -4 & 1 & \\ & \ddots & \ddots & \ddots \\ & & 1 & -4 \end{pmatrix},$$

so the entries along each diagonal are all equal. $T$ is said to be TST (Toeplitz Symmetric Tridiagonal), and each block of $A$ is itself TST.
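TST matrices are convenient because their spectrum is known in closed form: an $n \times n$ TST matrix with diagonal $a$ and off-diagonal $b$ has eigenvalues $\lambda_k = a + 2b\cos\frac{k\pi}{n+1}$, $k = 1,\dots,n$. A quick numerical check of this fact (an illustrative sketch, with $a,b$ taken from the block $T$ above):

```python
import numpy as np

n, a, b = 8, -4.0, 1.0   # diagonal and off-diagonal of the TST block T
T = a*np.eye(n) + b*(np.diag(np.ones(n-1), 1) + np.diag(np.ones(n-1), -1))

k = np.arange(1, n + 1)
analytic = a + 2*b*np.cos(k*np.pi/(n + 1))    # closed-form TST eigenvalues
computed = np.linalg.eigvalsh(T)              # returned in ascending order

match = np.allclose(np.sort(analytic), computed)
```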

See class notes from 475A for a review of solving (166). Direct method: OK for small matrices. Use a Cholesky factorization $A = LL^T$ (applied to $-A$, which is positive definite). Exploiting the band structure, such a factorization takes time roughly proportional to the number of unknowns times the square of the bandwidth, and for sparse banded matrices storage is not too big a deal.

Iterative Methods

An incomplete splitting separates $A$ into its tridiagonal part and the rest; the iteration is carried out on the tridiagonal part. This is also called the "line relaxation method." The method is cheap and easy to implement, but not the fastest.

ILU FACTORIZATION (Incomplete LU)

Suppose you want to solve

$$A x = b. \qquad(167)$$

Recall: a linear one-step stationary scheme leads to an iteration of the form

$$x^{(k+1)} = H x^{(k)} + v, \qquad(168)$$

where $k$ is the iteration index, with $H$ and $v$ chosen such that the exact solution is a fixed point, $x = Hx + v$, and the iteration converges for every initial guess if and only if

$$\rho(H) < 1, \qquad(169)$$

where $\rho(H)$ is the spectral radius of $H$.

Let's go back to (167): suppose that $A$ is in the form $A = \tilde{A} + E$, where the underlying factorization of the nonsingular matrix $\tilde{A}$ can be done easily. For example, $\tilde{A}$ may be banded (or, in the context of elliptic problems, TST). Moreover, assume $E$ is small compared to $\tilde{A}$. Rewrite (167) as $\tilde{A} x = -E x + b$, which suggests the iterative scheme

$$\tilde{A}\, x^{(k+1)} = -E\, x^{(k)} + b. \qquad(170)$$

This is the ILU (incomplete LU) iteration. It requires a single LU (or Cholesky, if symmetric) factorization of $\tilde{A}$, which can be reused at each iteration.

We can write (170) in the form of (168): let $H = -\tilde{A}^{-1}E$ and $v = \tilde{A}^{-1}b$.

Of course, numerically we solve the form given by (170), not the form directly above.

The ILU iteration (170) is an example of a "regular splitting." More generally, write $A = S - T$, where $S$ is a nonsingular matrix, and build

$$S\, x^{(k+1)} = T\, x^{(k)} + b, \qquad(171)$$

where $S$ is simple to solve (using LU or other means). Note that, formally, $H = S^{-1}T$ and $v = S^{-1}b$.
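A sketch of the regular-splitting iteration (171) for the 5-point system, taking $S$ to be the tridiagonal part of $A$ (the line-relaxation choice mentioned above); the grid size and iteration count are arbitrary illustrative choices:

```python
import numpy as np

n = 10
h = 1.0/(n + 1)
D = np.diag(-2*np.ones(n)) + np.diag(np.ones(n-1), 1) + np.diag(np.ones(n-1), -1)
A = (np.kron(D, np.eye(n)) + np.kron(np.eye(n), D)) / h**2   # 5-point matrix

S = np.tril(np.triu(A, -1), 1)     # tridiagonal part of A
T = S - A                          # regular splitting: A = S - T

rng = np.random.default_rng(0)
b = rng.standard_normal(n*n)
x = np.zeros(n*n)
for _ in range(400):
    x = np.linalg.solve(S, T @ x + b)   # in practice: one cheap tridiagonal solve

rel_res = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
```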

Theorem: Suppose both $A$ and $S + S^T - A$ are symmetric and positive definite. Then (171) converges.

Proof: See a numerical linear algebra book.

Remarks: What other techniques can we use? Among the classical iterative techniques we could use Jacobi, Gauss-Seidel, SOR, or Conjugate Gradient. SOR is a good choice since we can tune the relaxation parameter and obtain speeds that exceed Jacobi and Gauss-Seidel; for the matrix problem that results from the 5-point scheme we can easily compute the spectral radius, and hence the optimal parameter. Conjugate Gradient is the other logical choice. Later on we'll introduce another fast solver technique, called "Multigrid." Are there faster ways? One of the most important developments in linear algebra in recent years has been not so much new solution methods (although plenty of new ideas and schemes have been developed) as significant strides in the algorithmic aspects of solver techniques: computer-science solutions that increase the speed, or the storage efficiency, of general linear-algebra solvers. See Demmel's linear algebra book for full details on the latest techniques.
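For the 5-point scheme on the unit square, the optimal SOR parameter is known in closed form, $\omega_{\rm opt} = 2/(1 + \sin \pi h)$. A sketch comparing SOR at $\omega_{\rm opt}$ with Gauss-Seidel ($\omega = 1$) on a model problem (the test problem and sweep count are my choices, not from the notes):

```python
import numpy as np

def sor(omega, n=31, sweeps=200):
    """Solve lap u = f, u = 0 on the unit-square boundary, by SOR sweeps."""
    h = 1.0/(n + 1)
    x = np.linspace(0, 1, n + 2)
    X, Y = np.meshgrid(x, x, indexing="ij")
    exact = np.sin(np.pi*X)*np.sin(np.pi*Y)     # exact solution
    f = -2*np.pi**2*exact                       # its Laplacian
    u = np.zeros((n + 2, n + 2))                # includes zero boundary
    for _ in range(sweeps):
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                gs = 0.25*(u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1]
                           - h*h*f[i, j])
                u[i, j] = (1 - omega)*u[i, j] + omega*gs
    return np.max(np.abs(u - exact))

h = 1.0/32
omega_opt = 2.0/(1 + np.sin(np.pi*h))
err_gs  = sor(1.0)          # Gauss-Seidel
err_sor = sor(omega_opt)    # optimally relaxed SOR
```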

FASTER ITERATIVE TECHNIQUES FOR POISSON PROBLEM

One method that clearly outperforms SOR is ADI (Alternating Direction Implicit): write $A = A_x + A_y$, where $A_x$ originates in the $x$-direction central differences and $A_y$ in the $y$-direction central differences (with the sign convention that makes both positive definite).

So generally, choose acceleration parameters $p_k > 0$ and iterate in two half-steps, each requiring only tridiagonal solves:

$$(p_k I + A_x)\, x^{(k+1/2)} = (p_k I - A_y)\, x^{(k)} + b,$$
$$(p_k I + A_y)\, x^{(k+1)} = (p_k I - A_x)\, x^{(k+1/2)} + b.$$

The choice of the parameter cycle $\{p_k\}$ that beats SOR is given in the ADI literature.
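A dense-matrix sketch of the ADI (Peaceman-Rachford) half-step iteration above, with a single fixed parameter $p$ rather than a cycle (in practice one uses tridiagonal solves and a tuned parameter cycle; the grid size, right-hand side, and $p$ below are my illustrative choices):

```python
import numpy as np

n = 15
h = 1.0/(n + 1)
Dpos = np.diag(2*np.ones(n)) - np.diag(np.ones(n-1), 1) - np.diag(np.ones(n-1), -1)
Ax = np.kron(Dpos, np.eye(n)) / h**2    # -d^2/dx^2 part (SPD)
Ay = np.kron(np.eye(n), Dpos) / h**2    # -d^2/dy^2 part (SPD)
A = Ax + Ay                             # discrete -Laplacian

rng = np.random.default_rng(1)
b = rng.standard_normal(n*n)

lam = np.linalg.eigvalsh(Dpos) / h**2
p = np.sqrt(lam[0]*lam[-1])     # good single-parameter choice: sqrt(lmin*lmax)

I = np.eye(n*n)
x = np.zeros(n*n)
res0 = np.linalg.norm(A @ x - b)
for _ in range(60):
    x_half = np.linalg.solve(p*I + Ax, (p*I - Ay) @ x + b)       # x-sweep
    x      = np.linalg.solve(p*I + Ay, (p*I - Ax) @ x_half + b)  # y-sweep
res = np.linalg.norm(A @ x - b)
```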

Another method (which is highly recommended) is the Conjugate Gradient method (see details in the Math 475A notes), which is applicable to symmetric positive definite $A$'s. It is very fast.

The conjugate gradient and the preconditioned conjugate gradient methods are widely available as mature code (see Netlib). These methods are part of a family called "Krylov Space Methods." Other Krylov-space methods worth knowing about are GMRES (Generalized Minimal Residual) and the bi-conjugate gradient methods.
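A minimal conjugate-gradient sketch (plain CG, no preconditioner, applied to the SPD matrix $-A$ from the 5-point formula; the sizes and tolerance are my choices):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, maxiter=500):
    """Plain CG for SPD A; returns an approximate solution of A x = b."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha*p
        r -= alpha*Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol*np.linalg.norm(b):
            break
        p = r + (rs_new/rs)*p
        rs = rs_new
    return x

n = 12
h = 1.0/(n + 1)
D = np.diag(2*np.ones(n)) - np.diag(np.ones(n-1), 1) - np.diag(np.ones(n-1), -1)
A = (np.kron(D, np.eye(n)) + np.kron(np.eye(n), D)) / h**2   # SPD: -Laplacian

rng = np.random.default_rng(2)
b = rng.standard_normal(n*n)
x = conjugate_gradient(A, b)
rel_res = np.linalg.norm(A @ x - b)/np.linalg.norm(b)
```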

There are other types of methods for the solution of the Poisson equation and some of its close cousins: there is "cyclic reduction and factorization," and there are the very fast "Multipole Techniques." We will feature two more methods here, both very powerful and widely applicable. First we'll consider "FFT-Based Poisson Solvers" and then briefly talk about "Multigrid Methods."

Remarks: As a general rule of thumb, if the problem you're solving is very large, you'll have to resort to high-performance solvers. But the first thing you should do is decide whether your problem is big enough to warrant looking at high-performance algorithms; this includes foreseeing that in the future you might look at big problems. Alternatively, if you have to solve the same problem millions of times (well, many times), whether it is small or large, you could save yourself time and storage by using a high-performance solver. In any event, it doesn't hurt to be aware that these sorts of things exist, most likely in the form of mature code that you can adapt to your problem.

Juan Restrepo 2003-05-02