Guide and Reference
This section provides some key points about using the sparse matrix
iterative solver subroutines.
- If you use the preconditioned algorithms to solve linear systems that have
different right-hand sides but the same matrix, you can reuse the incomplete
factorization computed during the first call to the subroutine. (A sketch of
this pattern follows the list.)
- The DSMCG and DSMGCG subroutines are provided for migration purposes from
earlier releases of ESSL. You get better performance and a wider choice of
algorithms if you use the DSRIS subroutine.
- To select the sparse matrix subroutine that provides the best performance,
you must consider the sparsity pattern of the matrix. From this, you can
determine the most efficient storage mode for your sparse matrix. ESSL
provides several versions of the sparse matrix iterative solver subroutines.
They operate on sparse matrices stored in row-wise, diagonal, and
compressed-matrix storage modes. These storage modes are described in
"Sparse Matrix". Storage-by-rows is generally applicable; you should use this
storage mode unless your matrices are already set up in one of the other
storage modes. If, however, your matrix has a regular sparsity pattern, that
is, the nonzero elements are concentrated along a few diagonals, you may want
to use compressed-diagonal storage mode, which can save some storage space.
(A sketch contrasting the two layouts follows this list.) Compressed-matrix
storage mode is provided for migration purposes from earlier releases of ESSL
and is not intended for use; you get better performance and a wider choice of
algorithms if you use the DSRIS subroutine, which uses storage-by-rows.
- The performance achieved in the sparse matrix iterative solver subroutines
depends on the value specified for the relative accuracy epsilon. For
details, see "Notes" for each subroutine.
- You can select the iterative algorithm you want to use to solve your
linear system. The methods include the conjugate gradient method (CG), the
conjugate gradient squared method (CGS), the generalized minimum residual
method (GMRES), a more smoothly converging variant of the CGS method
(Bi-CGSTAB), and the transpose-free quasi-minimal residual method (TFQMR).
(A sketch illustrating this choice follows this list.)
- For a general sparse or positive definite symmetric matrix, the iterative
algorithm may fail to converge for one of the following reasons:
- The value of epsilon is too small, asking for too much precision.
- The maximum number of iterations is too small, allowing too few iterations
for the algorithm to converge.
- The matrix is not positive real; that is, the symmetric part,
(A + A^T)/2, is not positive definite. (A sketch of this check follows the
list.)
- The matrix is ill-conditioned, which may cause overflows during the
computation.
- These algorithms have a tendency to generate underflows that may hurt
overall performance. The system default is to mask underflow, which improves
the performance of these subroutines.
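The reuse of an incomplete factorization mentioned in the first point can be
illustrated with the following Python/SciPy sketch. It is only a conceptual
analogue, not the ESSL calling sequence; consult each subroutine's
description for the parameters that actually control this. The factorization
is computed once and then reused as the preconditioner for every
right-hand side.

    # Conceptual sketch in Python/SciPy (not the ESSL interface): compute the
    # incomplete factorization once, then reuse it for several right-hand sides.
    import numpy as np
    from scipy.sparse import csc_matrix
    from scipy.sparse.linalg import spilu, LinearOperator, cg

    # Small symmetric positive definite test matrix.
    A = csc_matrix(np.array([[4.0, 1.0, 0.0],
                             [1.0, 3.0, 1.0],
                             [0.0, 1.0, 2.0]]))

    ilu = spilu(A)                              # factor once
    M = LinearOperator(A.shape, ilu.solve)      # reuse as the preconditioner

    for b in (np.array([1.0, 2.0, 3.0]),        # several right-hand sides,
              np.array([0.0, 1.0, 0.0])):       # same matrix, same factorization
        x, info = cg(A, b, M=M)
        print(x, info)                          # info == 0 means convergence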
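The storage-mode trade-off described above can be illustrated with a small
Python/SciPy sketch that stores the same tridiagonal matrix row-wise and by
diagonals. The exact ESSL array conventions (index base, ordering, padding of
short diagonals) are defined in "Sparse Matrix"; this sketch only shows why a
matrix whose nonzeros lie on a few diagonals can be stored more compactly by
diagonals.

    # Conceptual sketch in Python/SciPy: the same tridiagonal matrix stored
    # row-wise and by diagonals. See "Sparse Matrix" for the actual ESSL layouts.
    import numpy as np
    from scipy.sparse import csr_matrix, dia_matrix

    A = np.array([[4.0, 1.0, 0.0, 0.0],
                  [1.0, 4.0, 1.0, 0.0],
                  [0.0, 1.0, 4.0, 1.0],
                  [0.0, 0.0, 1.0, 4.0]])

    R = csr_matrix(A)           # row-wise: values, column indices, row pointers
    print(R.data, R.indices, R.indptr)

    D = dia_matrix(R)           # by diagonals: one row of values per diagonal
    print(D.data, D.offsets)    # three diagonals and their offsets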
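The choice of iterative algorithm can likewise be illustrated with a
Python/SciPy sketch that applies several Krylov methods to one small
nonsymmetric system. The routine names below are SciPy's, not ESSL's, and CG
is omitted because it applies only to the symmetric positive definite case;
in ESSL the method is selected through the subroutine's input parameters.

    # Conceptual sketch in Python/SciPy: the same system solved with different
    # Krylov methods. In ESSL the method is selected through the subroutine's
    # input parameters; see each subroutine's description.
    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.linalg import gmres, bicgstab, tfqmr  # tfqmr: SciPy >= 1.8

    A = csr_matrix(np.array([[4.0, 1.0, 0.0],
                             [2.0, 5.0, 1.0],
                             [0.0, 1.0, 3.0]]))
    b = np.array([1.0, 2.0, 3.0])

    for solver in (gmres, bicgstab, tfqmr):
        x, info = solver(A, b)
        print(solver.__name__, x, info)         # info == 0 means convergence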
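Finally, the condition that a matrix be positive real can be checked directly
on a small example: the following Python sketch verifies that the symmetric
part (A + A^T)/2 has only positive eigenvalues. This is an illustrative check
only, performed outside the solver.

    # Conceptual sketch in Python: test whether A is positive real, that is,
    # whether its symmetric part (A + A^T)/2 is positive definite.
    import numpy as np

    A = np.array([[4.0, 1.0, 0.0],
                  [2.0, 5.0, 1.0],
                  [0.0, 1.0, 3.0]])

    S = 0.5 * (A + A.T)                   # symmetric part of A
    eigenvalues = np.linalg.eigvalsh(S)   # real eigenvalues of the symmetric part
    print((eigenvalues > 0).all())        # True means A is positive real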