The code reported in Figure 1 shows how to set up and apply the default multilevel preconditioner available in the real double precision version of AMG4PSBLAS (see Table 1). This preconditioner is chosen by simply specifying 'ML' as the second argument of P%init (a call to P%set is not needed) and is applied with the flexible Conjugate Gradient (FCG) solver provided by PSBLAS (the matrix of the system to be solved is assumed to be positive definite). As previously observed, the modules psb_base_mod, amg_prec_mod and psb_krylov_mod must be used by the example program.
The part of the code dealing with reading and assembling the sparse matrix and the right-hand side vector, and with the deallocation of the relevant data structures, performed through the PSBLAS routines for sparse matrix and vector management, is not reported here for the sake of conciseness. The complete code can be found in the example program file amg_dexample_ml.f90, in the directory samples/simple/fileread of the AMG4PSBLAS implementation (see Section 3.5). A sample test problem along with the relevant input data is available in samples/simple/fileread/runs. For details on the use of the PSBLAS routines, see the PSBLAS User's Guide [21].
The setup and application of the default multilevel preconditioner for the real single precision and the complex, single and double precision, versions are obtained with straightforward modifications of the previous example (see Section 5 for details). If these versions are installed, the corresponding codes are available in samples/simple/fileread.
  use psb_base_mod
  use amg_prec_mod
  use psb_krylov_mod
... ...
!
! sparse matrix
  type(psb_dspmat_type) :: A
! sparse matrix descriptor
  type(psb_desc_type)   :: desc_A
! preconditioner
  type(amg_dprec_type)  :: P
! right-hand side and solution vectors
  type(psb_d_vect_type) :: b, x
... ...
!
! initialize the parallel environment
  call psb_init(ctxt)
  call psb_info(ctxt,iam,np)
... ...
!
! read and assemble the spd matrix A and the right-hand side b
! using PSBLAS routines for sparse matrix / vector management
... ...
!
! initialize the default multilevel preconditioner, i.e. V-cycle
! with basic smoothed aggregation, 1 hybrid forward/backward
! GS sweep as pre/post-smoother and UMFPACK as coarsest-level
! solver
  call P%init(ctxt,'ML',info)
!
! build the preconditioner
  call P%hierarchy_build(A,desc_A,info)
  call P%smoothers_build(A,desc_A,info)
!
! set the solver parameters and the initial guess
... ...
!
! solve Ax=b with preconditioned FCG
  call psb_krylov('FCG',A,P,b,x,tol,desc_A,info)
... ...
!
! deallocate the preconditioner
  call P%free(info)
!
! deallocate other data structures
... ...
!
! exit the parallel environment
  call psb_exit(ctxt)
  stop
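The step marked "set the solver parameters and the initial guess" above might, for instance, look like the following sketch. The tolerance and iteration values here are illustrative choices, not defaults taken from this guide; the optional arguments itmax, iter and err belong to the psb_krylov interface documented in the PSBLAS User's Guide.

  ! illustrative sketch (not part of the original example):
  ! choose a stopping tolerance and an iteration limit, and
  ! start the iteration from the zero vector
  tol   = 1.d-6            ! relative residual tolerance
  itmax = 500              ! maximum number of iterations
  call x%set(dzero)        ! zero initial guess (dzero from psb_base_mod)
  ! the optional arguments can then be passed to the Krylov driver:
  ! call psb_krylov('FCG',A,P,b,x,tol,desc_A,info,itmax=itmax,iter=iter,err=err)

On exit, iter would hold the number of iterations performed and err the estimated error of the computed solution.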
Different versions of the multilevel preconditioner can be obtained by changing the default values of the preconditioner parameters. The code reported in Figure 2 shows how to set a V-cycle preconditioner which applies 1 block-Jacobi sweep as pre- and post-smoother, and solves the coarsest-level system with 8 block-Jacobi sweeps. Note that the ILU(0) factorization (plus triangular solve) is used as local solver for the block-Jacobi sweeps, since this is the default associated with block-Jacobi and set by P%init. Furthermore, specifying block-Jacobi as coarsest-level solver implies that the coarsest-level matrix is distributed among the processes. Figure 3 shows how to set a W-cycle preconditioner using the coarsening based on compatible weighted matching, aggregates of size at most 8 and smoothed prolongators. It applies 2 hybrid Gauss-Seidel sweeps as pre- and post-smoother, and solves the coarsest-level system with the parallel flexible Conjugate Gradient method (KRM) coupled with the block-Jacobi preconditioner having ILU(0) on the blocks. Default parameters are used for the stopping criterion of the coarsest-level solver. Note that, also in this case, specifying KRM as coarsest-level solver implies that the coarsest-level matrix is distributed among the processes.
The code fragments shown in Figures 2 and 3 are also included in the example program file amg_dexample_ml.f90.
Finally, Figure 4 shows the setup of a one-level additive Schwarz preconditioner, i.e., RAS with overlap 2. Note that a Krylov method different from CG must be used to solve the preconditioned system, since the preconditioner is nonsymmetric. The corresponding example program is available in the file amg_dexample_1lev.f90.
For all the previous preconditioners, example programs where the sparse matrix and the right-hand side are generated by discretizing a PDE with Dirichlet boundary conditions are also available in the directory samples/simple/pdegen.
... ...
!
! build a V-cycle preconditioner with 1 block-Jacobi sweep (with
! ILU(0) on the blocks) as pre- and post-smoother, and 8 block-Jacobi
! sweeps (with ILU(0) on the blocks) as coarsest-level solver
  call P%init(ctxt,'ML',info)
  call P%set('SMOOTHER_TYPE','BJAC',info)
  call P%set('COARSE_SOLVE','BJAC',info)
  call P%set('COARSE_SWEEPS',8,info)
  call P%hierarchy_build(A,desc_A,info)
  call P%smoothers_build(A,desc_A,info)
... ...
... ...
!
! build a W-cycle preconditioner with 2 hybrid Gauss-Seidel sweeps
! as pre- and post-smoother, a distributed coarsest matrix, and the
! preconditioned flexible Conjugate Gradient method (KRM) as
! coarsest-level solver
  call P%init(ctxt,'ML',info)
  call P%set('PAR_AGGR_ALG','COUPLED',info)
  call P%set('AGGR_TYPE','MATCHBOXP',info)
  call P%set('AGGR_SIZE',8,info)
  call P%set('ML_CYCLE','WCYCLE',info)
  call P%set('SMOOTHER_TYPE','FBGS',info)
  call P%set('SMOOTHER_SWEEPS',2,info)
  call P%set('COARSE_SOLVE','KRM',info)
  call P%set('COARSE_MAT','DIST',info)
  call P%set('KRM_METHOD','FCG',info)
  call P%hierarchy_build(A,desc_A,info)
  call P%smoothers_build(A,desc_A,info)
... ...
... ...
!
! set RAS with overlap 2 and ILU(0) on the local blocks
  call P%init(ctxt,'AS',info)
  call P%set('SUB_OVR',2,info)
  call P%bld(A,desc_A,info)
... ...
!
! solve Ax=b with preconditioned BiCGSTAB
  call psb_krylov('BICGSTAB',A,P,b,x,tol,desc_A,info)