We describe the basics for building and applying AMG4PSBLAS one-level and multilevel (i.e., AMG) preconditioners with the Krylov solvers included in PSBLAS [16]. The following steps are required:

1. Declare the preconditioner data structure. It is a derived data type, amg_xprec_type, where x may be s, d, c or z, according to the basic data type of the sparse matrix (s = real single precision; d = real double precision; c = complex single precision; z = complex double precision).
2. Allocate and initialize the preconditioner data structure, according to a preconditioner type chosen by the user. This is performed by the routine init, which also sets the defaults associated with the selected type. The preconditioner types and their defaults are given in Table 1, together with the strings used by init to identify them.
3. Modify the selected preconditioner type by properly setting its parameters. This is performed by the routine set and is needed only if the user wants to change the default values.
4. Build the preconditioner for a given matrix. If the selected preconditioner is multilevel, this requires two steps: building the AMG hierarchy, performed by the routine hierarchy_build, and building the smoothers and coarsest-level solver, performed by the routine smoothers_build. If the selected preconditioner is one-level, it is built in a single step, performed by the routine bld.
5. Apply the preconditioner at each iteration of a Krylov solver. This is performed by the routine apply; it is usually invoked transparently by the PSBLAS routine implementing the Krylov solver.
6. Free the preconditioner data structure. This is performed by the routine free.
All the previous routines are available as methods of the preconditioner object. A detailed description of them is given in Section 6. Examples showing the basic use of AMG4PSBLAS are reported in Section 5.1.
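The whole sequence can be summarized in the following minimal sketch. It assumes a real double-precision problem whose matrix a, communication descriptor desc_a and vectors b, x have already been assembled (their setup is application-specific and omitted); the choice of the 'ML' type, the smoother_sweeps parameter and the CG solver are illustrative only, and error checking is left out for brevity.

```fortran
program amg_getting_started
  use psb_base_mod     ! sparse matrix and descriptor data types
  use amg_prec_mod     ! preconditioner data type and methods
  use psb_krylov_mod   ! interfaces to the PSBLAS Krylov solvers
  implicit none
  type(psb_dspmat_type) :: a
  type(psb_desc_type)   :: desc_a
  type(psb_d_vect_type) :: b, x
  type(amg_dprec_type)  :: prec     ! step 1: declare the preconditioner
  type(psb_ctxt_type)   :: ctxt
  integer(psb_ipk_)     :: info, iter

  call psb_init(ctxt)
  ! ... assemble a, desc_a, b and x here (see Section 5.1) ...

  call prec%init(ctxt, 'ML', info)            ! step 2: multilevel defaults (Table 1)
  call prec%set('smoother_sweeps', 2, info)   ! step 3: optional, override a default
  call prec%hierarchy_build(a, desc_a, info)  ! step 4: build the AMG hierarchy
  call prec%smoothers_build(a, desc_a, info)  !         and the smoothers/solvers
  call psb_krylov('CG', a, prec, b, x, 1.d-6, desc_a, info, &
       & itmax=500, iter=iter)                ! step 5: apply within a Krylov solver
  call prec%free(info)                        ! step 6: free the data structure
  call psb_exit(ctxt)
end program amg_getting_started
```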
Table 1: Preconditioner types, corresponding strings and default choices.

| Type | String | Default preconditioner |
|------|--------|------------------------|
| No preconditioner | 'NONE' | Allows the PSBLAS Krylov solvers to be used with no preconditioning. |
| Diagonal | 'DIAG', 'JACOBI', 'L1-JACOBI' | Diagonal preconditioner. For any zero diagonal entry of the matrix to be preconditioned, the corresponding entry of the preconditioner is set to 1. |
| Gauss-Seidel | 'GS', 'L1-GS' | Hybrid Gauss-Seidel (forward), that is, global block Jacobi with Gauss-Seidel as local solver. |
| Symmetrized Gauss-Seidel | 'FBGS', 'L1-FBGS' | Symmetrized hybrid Gauss-Seidel, that is, forward Gauss-Seidel followed by backward Gauss-Seidel. |
| Block Jacobi | 'BJAC', 'L1-BJAC' | Block Jacobi with ILU(0) on the local blocks. |
| Additive Schwarz | 'AS' | Additive Schwarz (AS) with overlap 1 and ILU(0) on the local blocks. |
| Multilevel | 'ML' | V-cycle with one hybrid forward Gauss-Seidel (GS) sweep as pre-smoother and one hybrid backward GS sweep as post-smoother, decoupled smoothed aggregation as coarsening algorithm, and LU factorization (plus triangular solve) as coarsest-level solver. See the default values in Tables 2-8 for further details. |
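Any of the strings in Table 1 can be passed to init to select the corresponding default preconditioner; for instance, reusing the prec, ctxt and info variables of the sketch above:

```fortran
call prec%init(ctxt, 'BJAC', info)  ! one-level block Jacobi, ILU(0) on the local blocks
```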
Note that the module amg_prec_mod, containing the definition of the preconditioner data type and the interfaces to the routines of AMG4PSBLAS, must be used in any program calling such routines. The modules psb_base_mod, for the sparse matrix and communication descriptor data types, and psb_krylov_mod, for interfacing with the Krylov solvers, must also be used (see Section 5.1).
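In practice, this amounts to the following use statements at the beginning of the calling program or module:

```fortran
use psb_base_mod    ! sparse matrix and communication descriptor data types
use psb_krylov_mod  ! interfaces to the PSBLAS Krylov solvers
use amg_prec_mod    ! AMG4PSBLAS preconditioner data type and routine interfaces
```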
Remark 1. Coarsest-level solvers based on the LU factorization, such as those implemented in UMFPACK, MUMPS, SuperLU, and SuperLU_Dist, usually lead to fewer preconditioned Krylov iterations than inexact coarsest-level solvers when the linear system comes from a standard discretization of a basic scalar elliptic PDE problem. However, this does not necessarily correspond to the shortest execution time on parallel computers.
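In light of this remark, it may be worth experimenting with the coarsest-level solver. As a sketch, assuming the set parameter names and values documented in Section 6 (Tables 2-8), an inexact coarsest-level solver could be selected as follows:

```fortran
! Replace the default LU coarsest-level solver with an inexact one,
! trading possibly more Krylov iterations for cheaper coarse solves;
! 'coarse_solve' and its admissible values are listed in Section 6.
call prec%set('coarse_solve', 'bjac', info)
```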