Merge branch 'development' into dev-openmp

dev-openmp
sfilippone 9 months ago
commit 8b907dba79

@@ -0,0 +1,12 @@
$Format:%d%n%n$
# Fall back version, probably last release:
3.9.0
# PSBLAS version file.
#
# Release archive created from commit:
# $Format:%H %d$
# $Format:Created on %ci by %cN, and$
# $Format:signed by %GS using %GK.$
# $Format:Signature status: %G?$
$Format:%GG$

.gitignore (vendored)

@@ -1,11 +1,13 @@
*.a
*.o
*.mod
*.smod
*~
# header files generated
/cbind/*.h
/util/psb_metis_int.h
/base/modules/psb_config.h
# Make.inc generated
/Make.inc

File diff suppressed because it is too large.

@@ -1,625 +0,0 @@
Changelog. A lot less detailed than usual, at least for past
history.
2022/05/20: Merge changes for REMOTE build. Bump v 3.8
2022/03/28: Introduce new non-blocking collectives.
2021/06/01: New CTXT object
2021/04/20: OpenMP integration
2021/04/10: Recognize MPICXX in configure
2021/02/10: Take out precset interface, only prec%set now.
2020/09/20: New getelem function to extract vector entries
2020/07/21: Fix configure for METIS sizes
2020/06/01: Reworked build internals for descriptors
2019/12/18: New internals and algorithms for FND_OWNER, faster and less
memory hungry.
2019/07/20: New SCAN collective; improve handling of SYMmetric
2019/07/08: Fix memory usage in conversions to/from coo
2019/07/01: Timer facility
2019/06/25: Fix memory footprint in spspmm and other borderline cases
2019/06/17: Make ADD default assembly action.
2019/05/30: new insertion routines in CSR mode
2019/05/27: Fix matrix generation.
2019/05/26: New timer facility (undocumented).
2019/05/20: Change checks on vectors
2019/05/05: Reworked linmap internals
2019/04/15: Improved sphalo and test programs printout
2019/04/09: bump version to 3.6.1
2019/04/09: new fnd_owner_halo method
2019/03/31: New RENUM_BLK descriptor method
2019/01/16: In bldext, implement ALL_TO_ALLV by hand for certain
compilers.
2018/10/10: New ICTXT argument in prec%init
2018/07/30: Improved implementations for TRIL/TRIU
2018/04/15: Fix internals to have tmp_ovrlap in local indexing. Change
default in CDALL with VL to no global checks.
2018/03/22: Fix defaults for matrix/vector internals
2018/02/03: Accept 'GMRES' as synonym for 'RGMRES'.
2018/11/23: Reimplement hash function.
2018/10/10: new ICTXT argument to prec%init method.
2018/09/04: Modified vector class get_vect method
2018/08/10: Optional arguments in GETROW method.
2018/07/30: Improved TRIL/TRIU implementations.
2018/06/14: New FCG code.
2018/04/24: Merged changes to error handling internals.
2018/04/23: Change default for CDALL with VL. New GLOBAL argument for
reductions.
2018/04/15: Fixed pargen benchmark programs. Made MOLD mandatory.
2018/01/10: Updated docs.
2017/12/15: Fixed preconditioner build.
2017/10/31: Updated target install directories.
2017/10/15: Fold genpde into examples for readability.
2017/10/02: Merged CBinding.
2017/09/30: Fixes for README, contributors, bug reporting address.
2017/08/09: New optional args to TRIL and TRIU to produce two output
matrices at once.
2017/07/20: Fixes to genpde_impl. Fixed bug in matdist with BLOCK.
2017/04/28: Further development of CBIND branch.
2017/04/25: Fix configure for MKL.
2017/04/10: Makefile fixes.
2017/04/09: Remove all remaining .F files, and remove configure checks for
F77. Define version 3.5.0. Fixed INTENT in preconditioners.
2017/04/08: All PREC methods now invoked through preconditioner object.
2017/04/05: make ISO_C_BINDING and MOVE_ALLOC mandatory prereq.
2017/03/20: Changes for GPU codes: added buffers to MAPs, reduce number of
malloc/free, added new maybe_free_buffers method on vector.
2017/02/12: New stopping criterion, simplified mat_dist
2017/02/06: Fixes for --enable-long-integers and for RPM build.
2016/07/14: Fix matrix print with renumbering.
2016/06/18: New thresholds for quicksort
2016/06/03: New source dir organization
2016/05/05: Fix fakempi
2016/02/24: Fix init of hash inside cdbldext.
2016/01/06: Default COND to 0.
2015/12/17: Added clean_zeros method to spmat.
2015/12/14: Fixed mat_dist
2015/12/08: Make LOCX allocatable
2015/12/02: Reworked scatter interfaces.
2015/10/17: Fix configure for ATLAS packaging.
2015/07/17: Fix cdbldext.
2015/07/10: Begin integration of multivectors.
2015/04/25: New optional args in V%set.
2015/04/14: New absval method for vectors.
2015/03/20: Optimizations of TO/FROM routines, use of is_by_rows in COO.
2015/01/05: Fix silly bug in format conversion csr_from_coo.
2014/12/21: Change error handling routines to make them more flexible for
C binding. More compact prologues/epilogues.
2014/11/12: Fix silly bug in MMIO: cycling through rank-2 dense read/write was
transposing!
2014/10/22: Implement norm-1 and norm-infinity at base_sparse_mat relying
on srwsum/aclsum.
2014/10/15: Merged changes into trunk. Declare version 3.3
2014/10/10: Fix reductions when root /= -1
2014/09/10: Use int32/64 kinds & friends from ISO_FORTRAN_ENV
2014/08/30: New multivector types. Vector insertion now takes other
vectors as well as arrays.
2014/07/10: CSPUT has now two specifics, with arrays or vector types.
2014/07/02: Matrices have host/device status.
2014/06/11: Check for bug on multiple generic names.
2014/05/05: Make sure CSPUT ignores (at most a warning) indices not
belonging.
2014/04/14: Sort status in COO. Changes in error handling for CSPUT.
2014/03/31: get_diag, rowsum & friends changed into functions.
2014/03/26: TRIL/TRIU interfaces. Fixed mm_array_read/write.
Multiple generic names for norms and io.
2014/03/18: New defaults for MOLD & friends.
2014/01/27: Fixed error conditions on mmio.
2013/12/12: New sparse-sparse matrix product, with CSR-CSR, CSC-CSC new
versions, while keeping the old version at hand.
2013/12/04: Fix computation of output space for getrow & friends.
2013/12/02: Reimplement coo%fix method: use more memory if available, but
faster. Fix small interfacing problems with getrow/getblk/getptn.
2013/11/25: Limit usage of coo%fix method.
2013/11/07: Merged integer vector changes.
2013/10/01: New desc_type methods calling into indxmap methods.
2013/09/30: Restructure index conversion methods, simplify.
2013/09/26: Simplify checks in matrix-vector products
2013/08/01: Defined new get_fmt method for vectors
2013/06/19: Fixed type match bug in daxpby/saxpby/zaxpby.
2013/06/05: Fix INTENT in APPEND methods.
2013/05/10: test/serial now contains an example of how to define a new
format.
2013/04/20: Fix scaling and norm methods for matrices with unit diagonal.
2013/04/20: Fix add-by-one for unit triangular matrices.
2013/03/31: Implement CLONE method for vectors, maps and preconditioners.
Make base_prec abstract.
2013/03/13: doxygen docs for base_vect. Fix afmt to be len=* in genpde.
2013/03/01: Changes in method naming scheme for matrices.
Change name of descriptor module.
2012/12/11: Various fixes for 8-bytes integers.
2012/12/04: Rename specifics for GLOB_TO_LOC/LOC_TO_GLOB. Parametrize type
identifiers for MPI calls.
2012/11/26: Infinite loop bug in sparse AXPBY.
2012/10/24: Split serial_mod to improve build time.
2012/07/12: Cleanup Krylov routines.
2012/06/26: Fixed bug in prec%apply with 'C', and usage of rotations in
GMRES.
2012/06/08: Fix silly bug in GPS renumbering.
2012/05/25: Fixed docs for release 3.0
2012/05/21: Fix configure script to work around configure failure on Cray.
2012/04/30: Change descriptor's move_alloc and free to work on
uninitialized input.
2012/04/15: New LOCAL argument to geins/spins. New LIDX argument to CDALL
with VL to allow for user-specified local numbering.
2012/04/05: Default implementation of MV_TO_COO and MV_FROM_COO based on
CP.
2012/03/01: Make ISO_C_BINDING a prerequisite.
2012/02/21: Added experimental support for 8-bytes integers.
Refactored the problem generation methods and the pargen
sample programs.
2012/02/15: Fixed major perf problem with genblock.
2012/01/30: Reworked norms 1 and infty, added sparse mat norm1.
2012/01/10: Bunch of fixes and configury improvements from Cray FTN
2012/01/03: Split preconditioners into interface/implementation.
2011/11/27: Merged many routines from preprocessing project psblas-testpre.
2011/11/21: Added test for ISO_C_BINDING and AMD renumbering.
2011/11/19: Added the scratch option to the vect ASB routine.
2011/11/11: Makefile fixes allowing for parallel make.
2011/10/25: Major upgrade defining the encapsulated vector types,
providing further support for GPU.
2011/10/05: Split preconditioner modules to alleviate memory pressure
on the compiler, esp. XLF.
Fixed bug in glist map.
2011/08/01: MOLD methods and various fixes for NAG configury.
2011/07/25: Bunch of fixes for problems uncovered by Cray FTN.
2011/06/15: Changed get_local_rows and friends into methods.
2011/03/25: Added version identification constants.
2011/03/10: Added support for sparse dot products. Changed intent of X in
preconditioner apply to allow for GPU extensions.
2011/02/27: Reworked PRINT methods, for vectors as well.
2011/02/11: Changes to accommodate Cray compiler.
2011/01/07: Silly bugs in spgather MPI data types and in z_numbmm calling
cnumbmm. Also, don't use allocate on assignment with GNU.
2011/01/02: Finish descriptor reform by eliminating matrix_data. Switch
all f03 extensions into f90.
2010/12/16: Merge new descriptors with CLASS(PSB_INDX_MAP) internal maps.
Updated various descriptor-related routines.
2010/11/29: opt subdir containing ELL and RSB. test/newfmt.
2010/11/22: PRECBLD now takes a MOLD argument for its matrices.
2010/10/26: Fixed configure to use MPI wrappers. Fixed example in CXX: the
MOLD actual argument should be polymorphic.
2010/09/02: Fixed inheritance hierarchy of MOLD method. Merged into trunk.
2010/09/01: Changed Makefiles to allow for multiple submakes.
2010/08/31: Defined the MOLD method. Put under IFDEF in opposition to
MOLD= in allocation of CLASS variables.
2010/07/29: Make the aux component of base_mat a static array; works
around a problem with gfortran. The library does not fully
work yet under gfortran.
2010/05/10: Fixed fakempi. Now works under XLF 13.1
2010/04/29: Restructure KRYLOV modules.
2010/04/29: Take out BLACS.
2010/04/27: Reworked inheritance chain for PREC.
2010/04/27: Fixed private attribute. Various fixes for compilation
with gfortran.
2010/04/21: Added shortcut for CSR in ILU_FACT
2010/03/23: Restructured the module structure.
2009/12/15: Tons of bug fixes, also from testing on IBM XLF.
2009/09/15: First working OO implementation for serial routines on sparse
matrix data structures. Only D for the time being.
2009/08/25: New configure flag
--enable-serial
for serial-only compilation.
2009/06/24: Changed order of arguments in sp_scal to make it uniform with
rest of library.
2009/05/15: Changed interface to matdist.
2009/05/12: Added support for NAG Fortran compiler
2009/03/16: Release 2.3.1
2009/01/27: Renamed psb_transfer into psb_move_alloc.
2009/01/08: Require GNU Fortran 4.3 or later.
2008/11/04: Repackaged and streamlined linear maps.
2008/10/16: Fixed internal structure of psb_inter_desc.
2008/09/23: Fix borderline cases where one process does not own any
indices from the global space.
2008/09/18: Defined psb_sizeof to be integer(8). Added support
into psb_sum, psb_amx and other reductions for long int
scalars.
2008/09/16: Implemented new scheme for index conversion.
Changed cdall with an option to suppress global checks.
2008/09/02: Improved psi_fnd_owner performance.
2008/09/01: Better timings in the pargen test cases.
2008/08/28: Changed CDALL in case of VL to handle overlapped indices.
2008/07/28: New sorting/reordering modules.
2008/07/24: Added HTML version of user's guide.
2008/07/22: Fixed I/O for Harwell-Boeing and Matrix Market examples
2008/05/27: Merged single precision branch.
2008/04/28: Fixed trimming space in sparse matrix conversion.
Fixed performance issue in cdins.
2008/03/25: Fix performance bug in psi_idx_ins_cnv. Changed names of
some internal components of preconditioner data structure.
2008/03/27: Merged the experimental branch for implementing the AVL tree
data structure in Fortran instead of relying on C and passing
functions around to perform comparisons. There seems to be
some performance advantage, although not very large.
2008/03/25: Merged in changes from the 2.2-maint branch re: error
messages, performance bug in psi_idx_ins_cnv.
2008/02/26: New psb_linmap_init, psb_linmap_ins, psb_linmap_asb for a
general linear operator mapping among index spaces.
2008/02/18: Branched off for Version 2.2
2008/02/08: Merged changes from intermesh branch: we now have an
inter_desc_type object. Currently we only implement the
version needed for aggregation algorithms in the algebraic
multigrid preconditioners, but we'll define more general
(linear) maps soon enough.
2008/01/25: Various changes to variables controlling conditional
compilation on the Fortran side: removed NETLIB_BLACS, now
HAVE_METIS HAVE_ESSL_BLACS HAVE_KSENDID.
Files impacted: Make.inc.XXX, base/modules/psb_penv_mod,
util/psb_metispart_mod
2008/01/18: Centralized convergence checks. Still partial for RGMRES.
2008/01/14: Merged changes for handling of transpose vs. overlap.
2008/01/10: Changed name of GMRESR into RGMRES for consistency.
2007/12/21: Merged in debug infrastructure, internal and html docs.
2007/11/14: Fix INTENT(IN) on X vector in preconditioner routines.
2007/10/15: Repackaged the sorting routines in a submodule of their
own, adding some heap management and heapsort utilities for the
benefit of the multilevel preconditioners.
2007/09/28: Moved gelp and csrp to serial. Changed interface to
sphalo: the new one makes more sense.
Updated documentation.
2007/09/14: Second round of serial changes: merged into trunk, fixed
JAD regeneration and srch_upd now works.
2007/09/10: First round of serial changes: implemented serial
psb_spcnv unifying multiple functionalities.
2007/09/04: Implemented RGMRES for complex data.
2007/06/04: Fixed implementation of fctint and coins: assumed-size
arrays caused trouble on some compilers. Documentation of
set_large_threshold.
2007/05/22: Defined psb_precinit.
2007/05/15: Defined psb_sizeof.
2007/05/15: Merged in various fixes coming from tests on SP5 and
HP-Itanium.
2007/04/08: Changed the implementation of psb_sp_getrow & friends.
2007/03/27: Merged in changes for enabling compilation on SUN.
2007/02/22: Fixed various misalignments between real and complex.
Defined new psb_sp_clip routines.
Fixed psb_rwextd.
Changed the USE statements, minimizing size of modules and
maximizing consistency checks.
2007/02/01: Merged serial version: we provide a minimal fake mpi to
allow compiling and running without mpi and blacs. Only
tested with gnu42 so far.
2007/01/23: Defined new field ext_index in desc_type, and
fixed long standing inconsistency in usage of overlap for
AS preconditioners. Modified halo to accept selector for
halo_index vs. ext_index.
2007/01/11: Migrated repository to SVN.
2007/01/11: MLD2P4 has been moved to the new org. Now tackling the
test dirs.
2007/01/09: First try at reorganizing directories. Subdir MLD2P4 still
to be fixed. Documentation still to be updated.
2006/12/11: Documented options in glob_to_loc.
2006/12/06: Fixed raw aggregation.
2006/12/05: Taken out extra interfaces; inserted use modules with ONLY
clauses where appropriate.
2006/11/30: Fixed a bug in raw aggregation. Note: raw aggregation
gives different results from smoothed with omega=0.0,
because in the latter we have explicitly stored zero
coefficients that would be absent in the first, thus
generating different ILU factorizations.
2006/11/28: Merged the mods for descriptors of large index spaces to
avoid having the GLOB_TO_LOC array. Took the chance to
reorganize the descriptor build routines and define some
access functions for descriptor features and entries, so
as not to use the descriptor components directly. Tested
with AS, 2- and 3- level Post smoothers.
2006/11/09: The allocatable version works, but under gcc42 there is a
compiler bug when using -fbounds-check.
2006/11/08: Merged the allocatable version; hope everything works!
2006/11/08: Branched version psblas2-2-0-maint, and defined tag
2.0.2.6
2006/11/02: Done in the allocatable branch: repackaging of cdasb and
friends, taking out AVL trees where they were not
absolutely needed, and new dcsrmv routine.
2006/11/01: Merged changes in the handling of data exchange.
2006/10/03: Merged in the multilevel preconditioner stuff. This is
still experimental, especially the interfaces are not
stable yet.
2006/10/03: Declared version 2.0.2.5 for reference purposes.
2006/10/03: Fixed a bunch of minor bugs, including the sorting routines
imsr and imsrx. Added a default call to blacs_exit inside
psb_exit; fixed a bad termination in test/pargen/ppde90.f90
2006/09/02: Declared version 2.0.2, after having fixed a lot of
details in the environment routines.
2006/07/25: Defined a new psb_wtime function. Modified precset to
have a non-optional INFO dummy argument.
2006/07/06: Fixed bug in swaptran. Added psb_krylov generic interface.
2006/07/04: Ooops, the GetRow mod in SMMP is a performance hit.
Need to investigate further.
2006/06/21: Bug fix in hb_read when dealing with symmetric matrices.
2006/06/20: Rewritten symbmm and numbmm from SMMP to be independent of
CSR storage by using GetRow. Still need to test for
performance.
2006/06/16: Defined GetRow. This way we may close the mat objects.
Next we will rewrite SMMP to only make use of GetRow,
not to rely on CSR storage format.
2006/05/29: Added BLACS-like routines for data communication,
broadcasts, reductions, send/receive.
2006/05/25: Added environment management routines.
2006/05/03: Bug fixes, plus some change in the internals for SPINS,
preparing hooks for insertion with local numbering.
2006/04/24: Minor changes to the interface of dense tools routines,
trying to achieve a uniform look & feel.
Rewritten documentation; it is now reasonable, though not
perfect, except for the preconditioner routines.
We can now declare RC3.
2006/04/21: A bunch of fixes related to various matrix initialization
problems that were revealed while testing on SP5.
2006/04/18: Changed interface to spasb and csdp: better handling of
regeneration. To be tested further for sophisticated uses.
2006/03/31: We declare RC2 now. Improved I/O routines in test/Fileread.
2006/03/24: We have a complex version now, working (not necessarily bug free).
2006/03/15: Started move to complex version.
2006/03/01: Complete restructure of PREC section.
2006/02/01: New naming scheme.
2006/01/01: New multilevel preconditioning with smoothed aggregation.
2005/09 : Now enabled UMFPACK complete factorization as basis for AS.
2005/05/04: Now enabled SuperLU complete factorization as basis for AS.
2005/04/29: First version with decoupled 2-level.
2005/04/06: Started work on decoupling the preconditioner aggregation
for 2-level from the main factorization.
2005/03/30: First version of new DSC/SP allocate/insert/assembly
routines.
2005/03/17: First version of RGMRES. To be refined.
2005/03/08: dSwapTran aligned with dSwapData. Taken out SwapOverlap.
also moved onto iSwapX.
2005/03/07: dSwapData rewritten to achieve: 1. better performance;
2. more flexible functionality. It is now possible to
avoid SwapOvrlap entirely, relying on just SwapData.
SwapTran is still alive, since it reads the descriptors in
"transpose" mode. Also, added work areas to preconditioner
routine, to avoid excessive allocation in the halo/overlap
exchange.
2005/03/04: Had to put in a workaround for a gfortran bug:
tolower/toupper cannot be functions.
2005/02/09: Explicit storage choice for the smoother. This seems
to be changing a little bit the actual preconditioner.
To be evaluated further.
2005/02/08: Renamed F90_PSPREC to PSB_PRCAPLY and Preconditioner to
PSB_PRCBLD. Changed the way PRCAPLY decides what to do.
Still needs a PSB_PRCSET to be called before PRCBLD.
2005/01/28: Started moving functionalities to a SERIAL F90 layer. Also
defined a new COMM layer, to enable implementing SPMM
directly in F90.
2005/01/20: Finally taken out a direct call to the F77 DCSDP from
SPASB.
2005/01/18: After much work, we now have 2-level Additive Schwarz
prototype implemented and working. We now start a major
code cleanup that will take some time. Mainly we want to
move a lot of the serial F77 functionality into a new F95
serial layer, to simplify the parallel F95 code.
2004/11/25: Following the introduction of Additive Schwarz and
variants, we have now renamed DECOMP_ and friends as
DESC_; this makes things more readable. Sooner or later
we're going to merge this into mainline, but this version
is still very much in a state of flux.
2004/07/18: For use with gfortran we need to declare the pointer
components with NULL() initialization. This rules out
VAST and PGI.
2004/07/15: First development version with gfortran from the current
snapshot of gcc 3.5.0.
It is now possible in PSI_dSwapData to opt for
SEND|RECEIVE|SYNC data exchange; plan is to extend to all
data exchange functions, plus making it available as an
option from the F90 level.
2004/07/06: Merged in a lot of stuff coming mainly from the ASM
development; full merge will have to wait a little more.
Among other things:
use of psimod
new choice parms for overlap
new data exchange for swapdata, to be extended.
multicolumn CSMM.
use psrealloc
new format for marking a matrix as suitable for update.
2003/12/09: Changed DSALLOC and DSASB to make sure whenever a dense
matrix is allocated it is also zeroed out.
2003/10/13: Added call to BLACS_SET in the solvers to ensure global
heterogeneous coherence in the combine operations.
2003/09/30: Added LOC_TO_GLOB and GLOB_TO_LOC support routines.
2003/09/30: Changed interface for smart update capabilities: choose
with optional parameters in ASB routines.
2003/09/16: IFC 7.0 had a strange behaviour in the test programs:
sometimes the declaration of PARTS dummy argument with an
INTERFACE would not work, requiring an EXTERNAL
declaration. The proper INTERFACE works now with 7.1.
2003/03/10: Halo data exchange in F90_PSHALO can now be applied to
integer data; create appropriate support routines.
2002/12/05: Initial version of Fileread sample programs.
2002/11/19: Fixes for JAD preconditioner.
2002/11/19: Methods for patterns: create a descriptor without a
matrix.
2001/11/16: Reviewed the interfaces: in the tools section we really
need the POINTER attribute for dense vectors, but not in
the computational routines; taking it out allows more
flexibility.
2001/09/16: Smart update capabilities.
2001/03/16: Renumbering routines.
2001/01/14: Added extensions to compute multiple DOTs and AMAXs at once;

@@ -1,5 +1,5 @@
Parallel Sparse BLAS version 3.8
(C) Copyright 2006-2022
Parallel Sparse BLAS version 3.9
(C) Copyright 2006-2025
Salvatore Filippone
Alfredo Buttari

@@ -14,20 +14,22 @@
FC=@FC@
CC=@CC@
CXX=@CXX@
FCOPT=@FCOPT@
CCOPT=@CCOPT@
CXXOPT=@CXXOPT@
FMFLAG=@FMFLAG@
FIFLAG=@FIFLAG@
EXTRA_OPT=@EXTRA_OPT@
FCOPT=@FCOPT@ $(EXTRA_OPT)
CCOPT=@CCOPT@ $(EXTRA_OPT)
CXXOPT=@CXXOPT@ $(EXTRA_OPT)
# These three should be always set!
MPFC=@MPIFC@
MPCC=@MPICC@
FLINK=@FLINK@
CLINK=@CLINK@
LIBS=@LIBS@
FLIBS=@FLIBS@
# BLAS, BLACS and METIS libraries.
BLAS=@BLAS_LIBS@
@@ -67,6 +69,37 @@ UTILMODNAME=@UTILMODNAME@
CBINDLIBNAME=libpsb_cbind.a
OACCD=@OACCD@
OACCLD=@OACCLD@
FCOPENACC=@FCOPENACC@
CCOPENACC=@CCOPENACC@
CXXOPENACC=@CXXOPENACC@
CUDAD=@CUDAD@
CUDALD=@CUDALD@
LCUDA=@LCUDA@
SPGPU_LIBS=@SPGPU_LIBS@
CUDA_DIR=@CUDA_DIR@
CUDA_INCLUDES=@CUDA_INCLUDES@
CUDA_LIBS=@CUDA_LIBS@
PSB_CUDA_VERSION=@PSB_CUDA_VERSION@
PSB_CUDA_SHORT_VERSION=@PSB_CUDA_SHORT_VERSION@
CUDA_DEFINES=@CUDA_DEFINES@
FCUDEFINES=@FCUDEFINES@
CCUDEFINES=@CCUDEFINES@
CXXCUDEFINES=@CXXCUDEFINES@
EXTRA_NVCC=@EXTRA_NVCC@
NVCC=@CUDA_NVCC@ $(EXTRA_NVCC)
CUDEFINES=@CUDEFINES@
.SUFFIXES: .cu
.cu.o:
$(NVCC) $(CINCLUDES) $(CDEFINES) $(CUDEFINES) -c $<
@PSBLASRULES@
PSBGPULDLIBS=$(LCUDA) $(SPGPU_LIBS) $(CUDA_LIBS) $(PSBLDLIBS) $(LIBS)

@ -1,6 +1,6 @@
include Make.inc
all: dirs based precd kryld utild cbindd libd
all: dirs based precd linslvd utild cbindd extd $(CUDAD) $(OACCD) libd
@echo "====================================="
@echo "PSBLAS libraries Compilation Successful."
@@ -11,27 +11,41 @@ dirs:
precd: based
utild: based
kryld: precd
linslvd: precd
extd: based
cudad: extd
oaccd: extd
cbindd: based precd linslvd utild
cbindd: based precd kryld utild
libd: based precd kryld utild cbindd
libd: based precd linslvd utild cbindd extd $(CUDALD) $(OACCLD)
$(MAKE) -C base lib
$(MAKE) -C prec lib
$(MAKE) -C krylov lib
$(MAKE) -C linsolve lib
$(MAKE) -C util lib
$(MAKE) -C cbind lib
$(MAKE) -C ext lib
cudald: cudad
$(MAKE) -C cuda lib
oaccld: oaccd
$(MAKE) -C openacc lib
based:
$(MAKE) -C base objs
precd:
$(MAKE) -C prec objs
kryld:
$(MAKE) -C krylov objs
linslvd:
$(MAKE) -C linsolve objs
utild:
$(MAKE) -C util objs
cbindd:
$(MAKE) -C cbind objs
extd:
$(MAKE) -C ext objs
cudad:
$(MAKE) -C cuda objs
oaccd:
$(MAKE) -C openacc objs
install: all
@@ -48,30 +62,31 @@ install: all
mkdir -p $(INSTALL_DOCSDIR) && \
$(INSTALL_DATA) README.md LICENSE $(INSTALL_DOCSDIR)
mkdir -p $(INSTALL_SAMPLESDIR) && \
/bin/cp -fr test/pargen test/fileread $(INSTALL_SAMPLESDIR) && \
mkdir -p $(INSTALL_SAMPLESDIR)/cbind && /bin/cp -fr cbind/test/pargen/* $(INSTALL_SAMPLESDIR)/cbind
clean:
$(MAKE) -C base clean
$(MAKE) -C prec clean
$(MAKE) -C krylov clean
$(MAKE) -C util clean
$(MAKE) -C cbind clean
check: all
make check -C test/serial
/bin/cp -fr test/pdegen test/fileread $(INSTALL_SAMPLESDIR) && \
mkdir -p $(INSTALL_SAMPLESDIR)/cbind && /bin/cp -fr cbind/test/pdegen/* $(INSTALL_SAMPLESDIR)/cbind
clean: cleanlib
$(MAKE) -C base veryclean
$(MAKE) -C prec veryclean
$(MAKE) -C linsolve veryclean
$(MAKE) -C util veryclean
$(MAKE) -C cbind veryclean
$(MAKE) -C ext veryclean
$(MAKE) -C cuda veryclean
$(MAKE) -C openacc veryclean
cleantest:
cd test/fileread && $(MAKE) clean
cd test/pdegen && $(MAKE) clean
cd test/util && $(MAKE) clean
cleanlib:
(cd lib; /bin/rm -f *.a *$(.mod) *$(.fh) *.h)
(cd include; /bin/rm -f *.a *$(.mod) *$(.fh) *.h)
(cd modules; /bin/rm -f *.a *$(.mod) *$(.fh) *.h)
distclean: clean
/bin/rm -f Make.inc util/psb_metis_int.h base/modules/psb_config.h
check: all
make check -C test/serial
veryclean: cleanlib
cd base && $(MAKE) veryclean
cd prec && $(MAKE) veryclean
cd krylov && $(MAKE) veryclean
cd util && $(MAKE) veryclean
cd cbind && $(MAKE) veryclean
cd test/fileread && $(MAKE) clean
cd test/pargen && $(MAKE) clean
cd test/util && $(MAKE) clean
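Taken together, the targets in this Makefile imply a build sequence along these lines (a sketch; the configure flags, prefix, and `-j` level are illustrative, not taken from this repository):

```shell
./configure --prefix=$HOME/psblas   # generates Make.inc
make -j8                            # builds base, prec, linsolve, util, cbind, ext, ...
make check                          # drives the checks in test/serial
make install                        # copies libraries, docs, and samples to the prefix
```

Note that `libd` depends on the object-building targets (`based`, `precd`, `linslvd`, …), so a plain `make` builds all objects before archiving the libraries, and the optional `$(CUDAD)`/`$(OACCD)` directories are picked up only when configure enabled them.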

@@ -1,48 +1,56 @@
PSBLAS library, version 3.8
===========================
# PSBLAS library, version 3.9
The architecture of the Fortran 2003 sparse BLAS is described in:
>S. Filippone, A. Buttari. Object-Oriented Techniques for Sparse Matrix
>Computations in Fortran 2003, ACM Trans. on Math. Software, vol. 38, No.
>4, 2012.
The ideas are explored further with the paper:
>V. Cardellini, S. Filippone and D. Rouson. Design Patterns for
>sparse-matrix computations on hybrid CPU/GPU platforms, Scientific
>Programming, 22(2014), pp.1-19.
The PSBLAS library, developed with the aim of facilitating the parallelization of computationally intensive scientific applications, is designed to address the parallel implementation of iterative solvers for sparse linear systems through the distributed-memory paradigm. It includes routines for multiplying sparse matrices by dense matrices, solving block-diagonal systems with triangular diagonal entries, and preprocessing sparse matrices, plus additional routines for dense matrix operations. The current implementation of PSBLAS addresses a distributed-memory execution model operating with message passing.
Version 1.0 of the library is described in:
>S. Filippone, M. Colajanni. PSBLAS: A library for parallel linear
>algebra computation on sparse matrices, ACM Trans. on Math. Software,
>26(4), Dec. 2000, pp. 527-550.
The PSBLAS library version 3 is implemented in the Fortran 2008 programming language, with reuse and/or adaptation of existing Fortran 77 and Fortran 95 software, plus a handful of C routines.
## References
UTILITIES
---------
The `test/util` directory contains some utilities to convert to/from
Harwell-Boeing and MatrixMarket file formats.
The architecture, philosophy and implementation details of the library are contained in the following papers:
DOCUMENTATION
-------------
See docs/psblas-3.8.pdf; an HTML version of the same document is
available in docs/html. Please consult the sample programs, especially
test/pargen/psb_[sd]_pde[23]d.f90
- The architecture of the Fortran 2003 sparse BLAS is described in:
>S. Filippone, A. Buttari. Object-Oriented Techniques for Sparse Matrix
>Computations in Fortran 2003, ACM Trans. on Math. Software, vol. 38, No.
>4, 2012.
- The software engineering ideas are further detailed in the paper:
>V. Cardellini, S. Filippone and D. Rouson. Design Patterns for
>sparse-matrix computations on hybrid CPU/GPU platforms, Scientific
>Programming, 22(2014), pp.1-19.
- The GPU support is explored in
> S. Filippone, V. Cardellini, D. Barbieri and A. Fanfarillo:
> Sparse Matrix-Vector Multiplication on GPGPUs, ACM Transactions on Mathematical Software (TOMS), Volume 43, Issue 4, December 2016.
- Version 1.0 of the library is described in:
>S. Filippone, M. Colajanni. PSBLAS: A library for parallel linear
>algebra computation on sparse matrices, ACM Trans. on Math. Software,
>26(4), Dec. 2000, pp. 527-550.
- The software infrastructure changes required to accommodate the implementation of the
Additive-Schwarz preconditioners available in [AMG4PSBLAS](https://github.com/sfilippone/amg4psblas/) are detailed in:
> A. Buttari, P. D'Ambra, D. di Serafino, S. Filippone, Extending PSBLAS to build parallel Schwarz preconditioners, Applied Parallel Computing. State of the Art in Scientific Computing: 7th International Workshop, PARA 2004, LNCS 3732, 2006, pp. 593-602.
> A. Buttari, P. D'Ambra, D. Di Serafino, S. Filippone, 2LEV-D2P4: A package of high-performance preconditioners for scientific and engineering applications, Applicable Algebra in Engineering, Communications and Computing, 2007, 18(3), pp. 223-239.
> P. D'Ambra, D. Di Serafino, S. Filippone, MLD2P4: A package of parallel algebraic multilevel domain decomposition preconditioners in Fortran 95, ACM Transactions on Mathematical Software, 2010, 37(3), 30.
PSBLAS is the backbone of the Parallel Sparse Computation Toolkit ([PSCToolkit](https://psctoolkit.github.io/)) suite of libraries. See the paper:
> D'Ambra, P., Durastante, F., & Filippone, S. (2023). Parallel Sparse Computation Toolkit. Software Impacts, 15, 100463.
### Other Software credits
OTHER SOFTWARE CREDITS
----------------------
We originally included a modified implementation of some of the Sparker
(serial sparse BLAS) material; this has been completely rewritten, way
beyond the intention(s) and responsibilities of the original developers.
The main reference for the serial sparse BLAS is:
>Duff, I., Marrone, M., Radicati, G., and Vittoli, C. Level 3 basic
>linear algebra subprograms for sparse matrices: a user level interface,
>ACM Trans. Math. Softw., 23(3), 379-401, 1997.
## Installing
INSTALLING
----------
To compile and run our software you will need the following
prerequisites (see also SERIAL below):
@@ -53,33 +61,45 @@ prerequisites (see also SERIAL below):
http://math-atlas.sourceforge.net/
3. We have had good results with the METIS library, from
http://www-users.cs.umn.edu/~karypis/metis/metis/main.html.
https://github.com/KarypisLab/METIS.
This is optional; it is used in the util and test/fileread
directories but only if you specify `--with-metis`.
4. If you have the AMD package of Davis, Duff and Amestoy, you can
5. If you have the AMD package of Davis, Duff and Amestoy, you can
specify `--with-amd` (see `./configure --help` for more details).
We use the C interface to AMD.
6. If you have CUDA available, use
   - `--enable-cuda` to compile CUDA-enabled methods
   - `--with-cudadir=<path>` to specify the CUDA toolkit location
   - `--with-cudacc=XX,YY,ZZ` to specify a list of target CCs (compute
     capabilities) to compile the CUDA code for.
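Putting the flags above together, the sketch below assembles a hypothetical configure line for a CUDA-enabled build; the install prefix, CUDA path, and compute-capability list are illustrative assumptions, not recommendations. The command is printed rather than executed, so it can be pasted at the top of the source tree:

```shell
# Assemble a hypothetical configure line for a CUDA-enabled build.
# PREFIX, CUDA_DIR and the CC list (70,80) are illustrative assumptions.
PREFIX="$HOME/psblas-install"
CUDA_DIR="/usr/local/cuda"
set -- ./configure "--prefix=$PREFIX" \
    --with-metis --enable-cuda \
    "--with-cudadir=$CUDA_DIR" --with-cudacc=70,80
# Print the command instead of running it.
echo "$@"
```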
The configure script will generate a Make.inc file suitable for building
the library. The script is capable of recognizing the needed libraries
with their default names; if they are in unusual places consider adding
the paths with `--with-libs`, or explicitly specifying the names in
`--with-blas`, etc.
> [!CAUTION]
> Please note that a common way for the configure script
> to fail is to specify inconsistent MPI vs. plain compilers, either
> directly or indirectly via environment variables; e.g. specifying the
> Intel compiler with `FC=ifort` while at the same time having an
> `MPIFC=mpif90` which points to GNU Fortran.
> [!TIP]
> The best way to avoid this
> situation is (in our opinion) to use the environment modules package
> (see [http://modules.sourceforge.net/](http://modules.sourceforge.net/)), and load the relevant
> variables with (e.g.)
> ```
> module load gcc/13.2.0 openmpi/4.1.6
> ```
> This will delegate to the modules setup to make sure that the version of
> openmpi in use is the one compiled with the matching compilers. After the
> configure script has completed you can always tweak the Make.inc file
> yourself.
After you have Make.inc fixed, run
```
make
```
to compile the library. If you specify `--prefix=/path` you can run `make
install` and the libraries will be installed under `/path/lib`, while the
module files will be installed under `/path/modules`. The regular and
experimental C interface header files are under `/path/include`.
### Packaging changes, CUDA and GPU support
This version of PSBLAS incorporates into a single package three
entities that were previously separated:
| Library | Description |
|---------|-------------|
| PSBLAS | the base library |
| PSBLAS-EXT | a library providing additional storage formats for matrices and vectors |
| SPGPU | a package of kernels for NVIDIA GPUs originally written by Davide Barbieri and Salvatore Filippone; see the license file [cuda/License-spgpu.md](cuda/License-spgpu.md) |
Moreover, the module and library previously called psb_krylov are now called
psb_linsolve, but their usage is otherwise unchanged.
### OpenACC
There is a highly experimental version of an OpenACC interface;
you can access it by specifying
```bash
--enable-openacc --with-extraopenacc="-foffload=nvptx-none=-march=sm_70"
```
where the argument to the extraopenacc option depends on the compiler
you are using (the example shown here is relevant for the GNU
compiler).
### Serial
Configuring with `--enable-serial` will provide a fake MPI stub library
that enables running in pure serial mode; no MPI installation is needed
in this case. Note, however, that the fake MPI stubs are only guaranteed
to cover what we use internally; they are not a complete MPI replacement.
### Integers
We have two kinds of integers: IPK for local indices, and LPK for
global indices. They can be specified independently at configure time,
e.g.
```bash
--with-ipk=4 --with-lpk=8
```
which is asking for 4-bytes local indices, and 8-bytes global indices
(this is the default).
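As a sketch, the index kinds (here together with the serial mode from the previous section) are chosen at configure time; the prefix below is a placeholder, and the command is only printed:

```shell
# Hypothetical configure line asking for 4-byte local (IPK) and 8-byte
# global (LPK) indices -- the default -- in a serial-only build.
set -- ./configure "--prefix=$HOME/psblas" \
    --enable-serial --with-ipk=4 --with-lpk=8
# Print the command instead of running it.
echo "$@"
```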
## CMake
There is initial support for building with CMake; as of this time, it does not compile the CUDA part.
## LLVM
The library has been successfully compiled and tested with LLVM version 20.1.0-rc2.
## Documentation
Further information on installation and configuration can be found in the documentation.
See [docs/psblas-3.9.pdf](docs/psblas-3.9.pdf); an HTML version of the same document is
available in docs/html. Please consult the sample programs, especially
- [test/pargen/psb_s_pde2d.F90](test/pargen/psb_s_pde2d.F90) and [test/pargen/psb_d_pde2d.F90](test/pargen/psb_d_pde2d.F90)
- [test/pargen/psb_s_pde3d.F90](test/pargen/psb_s_pde3d.F90) and [test/pargen/psb_d_pde3d.F90](test/pargen/psb_d_pde3d.F90)
which contain examples for the solution of linear systems obtained by the discretization of a generic second-order differential equation in two:
```math
- a_1 \frac{\partial^2 u}{\partial x^2}
- a_2 \frac{\partial^2 u}{\partial y^2}
+ b_1 \frac{\partial u}{\partial x}
+ b_2 \frac{\partial u}{\partial y}
+ c u = f
```
or three
```math
- a_1 \frac{\partial^2 u}{\partial x^2}
- a_2 \frac{\partial^2 u}{\partial y^2}
- a_3 \frac{\partial^2 u}{\partial z^2}
+ b_1 \frac{\partial u}{\partial x}
+ b_2 \frac{\partial u}{\partial y}
+ b_3 \frac{\partial u}{\partial z}
+ c u = f
```
dimensions on the unit square/cube with Dirichlet boundary conditions.
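As a worked step, a standard second-order centered finite-difference discretization of the 2D problem on a uniform mesh with spacing $h$ (a sketch, assuming constant coefficients) leads, at each interior node $(i,j)$, to the linear equation

```math
\left(\frac{2a_1 + 2a_2}{h^2} + c\right) u_{i,j}
- \left(\frac{a_1}{h^2} + \frac{b_1}{2h}\right) u_{i-1,j}
- \left(\frac{a_1}{h^2} - \frac{b_1}{2h}\right) u_{i+1,j}
- \left(\frac{a_2}{h^2} + \frac{b_2}{2h}\right) u_{i,j-1}
- \left(\frac{a_2}{h^2} - \frac{b_2}{2h}\right) u_{i,j+1}
= f_{i,j}
```

so each row of the assembled sparse matrix has at most five nonzeros (seven in 3D), which is the pattern the pargen test programs build up through the PSBLAS insertion routines.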
### Utilities
The [test/util](test/util) directory contains some utilities to convert to/from
Harwell-Boeing and MatrixMarket file formats.
## TODO and bugs
- [ ] Improving OpenACC support
- [ ] Improving OpenMP support
- [X] Fix all remaining bugs. Bugs? We don't have any! 🤓
> [!NOTE]
> To report bugs 🐛 or issues ❓ please use the [GitHub issue system](https://github.com/sfilippone/psblas3/issues).
## The PSBLAS team
**Project lead:**
Salvatore Filippone
**Contributors** (_roughly reverse chronological order_):
- Luca Pepè Sciarria
- Theophane Loloum
- Fabio Durastante
- Dimitri Walther
- Andea Di Iorio
- Stefano Petrilli
- Soren Rasmussen
- Zaak Beekman
- Ambra Abdullahi Hassan
- Pasqua D'Ambra
- Alfredo Buttari
- Daniela di Serafino
- Michele Martone
- Michele Colajanni
- Fabio Cerioni
- Stefano Maiolatesi
- Dario Pascucci
## RELATED SOFTWARE
If you are looking for more sophisticated preconditioners, you may be
interested in the package AMG4PSBLAS from
<http://github.com/sfilippone/amg4psblas> and the whole [PSCToolkit suite](https://psctoolkit.github.io/).
Contact: <https://github.com/sfilippone/psblas3>

@ -1,5 +1,12 @@
WHAT'S NEW
Version 3.9
1. PSBLAS3-EXT has been folded into the main library
2. Renamed GPU into CUDA.
3. Highly experimental OpenACC support.
4. The iterative solvers are now defined in psb_linsolve_mod
and implemented in libpsb_linsolve.a; existing code using
Krylov methods will work with no changes.
Version 3.8.0-2
1. CTXT is now an opaque object.
2. OpenMP is now better integrated.

@ -0,0 +1,643 @@
set(PSB_base_source_files
comm/psb_dovrl_a.f90
comm/psb_dovrl.f90
# comm/psb_i2halo_a.f90
comm/internals/psi_zswaptran.F90
# comm/internals/psi_i2ovrl_upd_a.f90
comm/internals/psi_lovrl_save.f90
comm/internals/psi_movrl_save_a.f90
comm/internals/psi_sovrl_restr_a.f90
comm/internals/psi_sovrl_upd_a.f90
comm/internals/psi_zswaptran_a.F90
comm/internals/psi_lovrl_restr.f90
comm/internals/psi_iswapdata.F90
comm/internals/psi_covrl_upd_a.f90
comm/internals/psi_dswaptran_a.F90
comm/internals/psi_lovrl_upd.f90
comm/internals/psi_dswapdata_a.F90
comm/internals/psi_movrl_upd_a.f90
# comm/internals/psi_i2swaptran_a.F90
comm/internals/psi_dswaptran.F90
comm/internals/psi_covrl_save_a.f90
comm/internals/psi_eovrl_restr_a.f90
comm/internals/psi_sswaptran_a.F90
comm/internals/psi_dovrl_save_a.f90
comm/internals/psi_lswapdata.F90
comm/internals/psi_cswapdata.F90
comm/internals/psi_dswapdata.F90
comm/internals/psi_sovrl_save.f90
comm/internals/psi_iswaptran.F90
comm/internals/psi_sswapdata_a.F90
comm/internals/psi_sswaptran.F90
comm/internals/psi_lswaptran.F90
comm/internals/psi_mswaptran_a.F90
# comm/internals/psi_i2ovrl_restr_a.f90
comm/internals/psi_covrl_restr.f90
comm/internals/psi_mswapdata_a.F90
comm/internals/psi_zovrl_restr_a.f90
comm/internals/psi_dovrl_restr_a.f90
comm/internals/psi_covrl_restr_a.f90
comm/internals/psi_sswapdata.F90
comm/internals/psi_sovrl_save_a.f90
comm/internals/psi_iovrl_upd.f90
comm/internals/psi_eswaptran_a.F90
comm/internals/psi_iovrl_save.f90
comm/internals/psi_zovrl_restr.f90
comm/internals/psi_zovrl_upd.f90
comm/internals/psi_dovrl_upd_a.f90
comm/internals/psi_dovrl_restr.f90
comm/internals/psi_zswapdata_a.F90
comm/internals/psi_dovrl_save.f90
comm/internals/psi_covrl_save.f90
# comm/internals/psi_i2swapdata_a.F90
comm/internals/psi_dovrl_upd.f90
comm/internals/psi_eovrl_save_a.f90
comm/internals/psi_zovrl_upd_a.f90
comm/internals/psi_zswapdata.F90
comm/internals/psi_covrl_upd.f90
comm/internals/psi_cswaptran.F90
# comm/internals/psi_i2ovrl_save_a.f90
comm/internals/psi_sovrl_upd.f90
comm/internals/psi_eswapdata_a.F90
comm/internals/psi_movrl_restr_a.f90
comm/internals/psi_iovrl_restr.f90
comm/internals/psi_cswapdata_a.F90
comm/internals/psi_zovrl_save.f90
comm/internals/psi_eovrl_upd_a.f90
comm/internals/psi_zovrl_save_a.f90
comm/internals/psi_cswaptran_a.F90
comm/internals/psi_sovrl_restr.f90
comm/psb_dhalo.f90
comm/psb_zgather_a.f90
comm/psb_zovrl.f90
comm/psb_mhalo_a.f90
comm/psb_zscatter_a.F90
comm/psb_chalo.f90
comm/psb_zscatter.F90
comm/psb_cscatter_a.F90
comm/psb_cspgather.F90
comm/psb_cscatter.F90
comm/psb_shalo_a.f90
comm/psb_cgather.f90
comm/psb_zhalo.f90
comm/psb_movrl_a.f90
comm/psb_chalo_a.f90
# comm/psb_i2scatter_a.F90
comm/psb_sgather_a.f90
# comm/psb_i2ovrl_a.f90
comm/psb_zovrl_a.f90
comm/psb_covrl.f90
comm/psb_shalo.f90
comm/psb_dscatter_a.F90
comm/psb_lgather.f90
comm/psb_iscatter.F90
comm/psb_sovrl_a.f90
comm/psb_dscatter.F90
comm/psb_eovrl_a.f90
comm/psb_lovrl.f90
## comm/psb_lspgather.F90
## comm/psb_ispgather.F90
comm/psb_zhalo_a.f90
comm/psb_sscatter_a.F90
comm/psb_lscatter.F90
# comm/psb_i2gather_a.f90
comm/psb_ihalo.f90
comm/psb_iovrl.f90
comm/psb_zspgather.F90
comm/psb_escatter_a.F90
comm/psb_mscatter_a.F90
comm/psb_egather_a.f90
comm/psb_covrl_a.f90
comm/psb_sgather.f90
comm/psb_dhalo_a.f90
comm/psb_zgather.f90
comm/psb_igather.f90
comm/psb_sovrl.f90
comm/psb_sspgather.F90
comm/psb_cgather_a.f90
comm/psb_ehalo_a.f90
comm/psb_dgather_a.f90
comm/psb_dspgather.F90
comm/psb_sscatter.F90
comm/psb_mgather_a.f90
comm/psb_dgather.f90
comm/psb_lhalo.f90
internals/psi_bld_glb_dep_list.F90
internals/psi_graph_fnd_owner.F90
internals/psi_sort_dl.f90
internals/psi_indx_map_fnd_owner.F90
internals/psi_fnd_owner.F90
internals/psi_bld_tmpovrl.f90
internals/psi_symm_dep_list.F90
internals/psi_desc_impl.f90
### internals/psi_compute_size.f90
internals/psi_hash_impl.f90
internals/psi_crea_ovr_elem.f90
internals/psi_a2a_fnd_owner.F90
internals/psi_bld_tmphalo.f90
internals/psi_crea_bnd_elem.f90
internals/psi_desc_index.F90
internals/psi_xtr_loc_dl.F90
internals/psi_crea_index.f90
internals/psi_srtlist.f90
internals/psi_adjcncy_fnd_owner.F90
tools/psb_sins.f90
tools/psb_zspasb.f90
tools/psb_zspalloc.f90
# tools/psb_i2_remote_vect.F90
tools/psb_sfree_a.f90
tools/psb_cdprt.f90
tools/psb_c_glob_transpose.F90
tools/psb_ssphalo.F90
tools/psb_sallc.f90
tools/psb_sspasb.f90
tools/psb_zasb.f90
tools/psb_z_par_csr_spspmm.f90
tools/psb_iasb.f90
tools/psb_cdalv.f90
tools/psb_sspfree.f90
tools/psb_icdasb.F90
tools/psb_zallc_a.f90
tools/psb_d_map.f90
tools/psb_lfree.f90
# tools/psb_i2ins_a.f90
tools/psb_s_remap.F90
tools/psb_cspalloc.f90
tools/psb_glob_to_loc.f90
tools/psb_cdrep.f90
tools/psb_mins_a.f90
tools/psb_dallc_a.f90
tools/psb_d_remote_vect.F90
tools/psb_cfree.f90
tools/psb_scdbldext.F90
tools/psb_cspins.F90
tools/psb_z_remote_vect.F90
tools/psb_ssprn.f90
tools/psb_cdals.f90
tools/psb_sgetelem.f90
tools/psb_cspfree.f90
tools/psb_cins.f90
# tools/psb_i2free_a.f90
tools/psb_dspins.F90
# tools/psb_i2asb_a.f90
tools/psb_dsphalo.F90
tools/psb_d_glob_transpose.F90
tools/psb_c_par_csr_spspmm.f90
tools/psb_callc_a.f90
tools/psb_masb_a.f90
tools/psb_ccdbldext.F90
tools/psb_dfree_a.f90
tools/psb_dspasb.f90
tools/psb_sasb_a.f90
tools/psb_z_remote_mat.F90
tools/psb_c_remote_vect.F90
tools/psb_cd_switch_ovl_indxmap.f90
tools/psb_dfree.f90
tools/psb_dasb.f90
tools/psb_cd_inloc.f90
tools/psb_mfree_a.f90
tools/psb_zspfree.f90
tools/psb_s_glob_transpose.F90
tools/psb_sfree.f90
tools/psb_dcdbldext.F90
tools/psb_eins_a.f90
tools/psb_s_map.f90
tools/psb_dsprn.f90
tools/psb_d_remap.F90
tools/psb_iins.f90
tools/psb_sasb.f90
tools/psb_zgetelem.f90
tools/psb_z_map.f90
tools/psb_dins_a.f90
tools/psb_loc_to_glob.f90
tools/psb_cgetelem.f90
tools/psb_zcdbldext.F90
tools/psb_d_remote_mat.F90
tools/psb_cd_set_bld.f90
tools/psb_zfree.f90
tools/psb_zallc.f90
tools/psb_lallc.f90
tools/psb_cd_reinit.f90
tools/psb_csphalo.F90
tools/psb_cfree_a.f90
tools/psb_cd_lstext.f90
tools/psb_zfree_a.f90
tools/psb_s_par_csr_spspmm.f90
tools/psb_dgetelem.f90
tools/psb_callc.f90
tools/psb_d_par_csr_spspmm.f90
tools/psb_sspins.F90
tools/psb_sallc_a.f90
tools/psb_c_remote_mat.F90
tools/psb_zins.f90
tools/psb_e_remote_vect.F90
tools/psb_zsphalo.F90
tools/psb_cdren.f90
tools/psb_casb_a.f90
tools/psb_dins.f90
tools/psb_ifree.f90
tools/psb_mallc_a.f90
tools/psb_s_remote_vect.F90
tools/psb_c_remap.F90
tools/psb_efree_a.f90
tools/psb_sins_a.f90
tools/psb_cdins.F90
tools/psb_cdall.f90
tools/psb_lasb.f90
tools/psb_csprn.f90
tools/psb_casb.f90
tools/psb_c_map.f90
tools/psb_lins.f90
tools/psb_cspasb.f90
tools/psb_dspfree.f90
tools/psb_sspalloc.f90
tools/psb_z_remap.F90
tools/psb_z_glob_transpose.F90
tools/psb_easb_a.f90
tools/psb_cins_a.f90
tools/psb_iallc.f90
tools/psb_m_remote_vect.F90
tools/psb_eallc_a.f90
tools/psb_dspalloc.f90
tools/psb_zasb_a.f90
tools/psb_s_remote_mat.F90
tools/psb_cd_remap.F90
tools/psb_zspins.F90
tools/psb_zins_a.f90
tools/psb_cdcpy.F90
# tools/psb_i2allc_a.f90
tools/psb_dallc.f90
tools/psb_cd_renum_block.F90
tools/psb_dasb_a.f90
tools/psb_zsprn.f90
tools/psb_get_overlap.f90
serial/psb_crwextd.f90
serial/psb_zspspmm.f90
serial/psb_drwextd.f90
serial/psb_dnumbmm.f90
serial/psb_damax_s.f90
serial/psb_zgeprt.f90
serial/impl/psb_c_coo_impl.F90
serial/impl/psb_d_coo_impl.F90
serial/impl/psb_d_csc_impl.F90
serial/impl/psb_s_coo_impl.F90
serial/impl/psb_c_csc_impl.F90
serial/impl/psb_c_rb_idx_tree_impl.F90
serial/impl/psb_z_csc_impl.F90
serial/impl/psb_d_mat_impl.F90
serial/impl/psb_s_csr_impl.F90
serial/impl/psb_c_mat_impl.F90
serial/impl/psb_c_csr_impl.F90
serial/impl/psb_z_mat_impl.F90
serial/impl/psb_s_rb_idx_tree_impl.F90
serial/impl/psb_d_csr_impl.F90
serial/impl/psb_s_mat_impl.F90
serial/impl/psb_s_base_mat_impl.F90
serial/impl/psb_base_mat_impl.f90
serial/impl/psb_d_rb_idx_tree_impl.F90
serial/impl/psb_z_rb_idx_tree_impl.F90
serial/impl/psb_z_csr_impl.F90
serial/impl/psb_z_coo_impl.F90
serial/impl/psb_c_base_mat_impl.F90
serial/impl/psb_z_base_mat_impl.F90
serial/impl/psb_d_base_mat_impl.F90
serial/impl/psb_s_csc_impl.F90
serial/smmp.f90
serial/psi_m_serial_impl.F90
serial/psb_spdot_srtd.f90
serial/psb_sasum_s.f90
serial/psb_snumbmm.f90
serial/psb_camax_s.f90
serial/lsmmp.f90
serial/psb_csymbmm.f90
serial/psb_dgeprt.f90
serial/psb_zrwextd.f90
serial/psb_srwextd.f90
serial/psb_znumbmm.f90
serial/sort/psb_c_msort_impl.f90
serial/sort/psb_c_hsort_impl.f90
serial/sort/psb_m_isort_impl.f90
serial/sort/psb_m_msort_impl.f90
serial/sort/psb_s_hsort_impl.f90
serial/sort/psb_e_isort_impl.f90
serial/sort/psb_m_qsort_impl.f90
serial/sort/psb_z_hsort_impl.f90
serial/sort/psb_s_qsort_impl.f90
serial/sort/psb_z_qsort_impl.f90
serial/sort/psb_c_isort_impl.f90
serial/sort/psb_e_msort_impl.f90
serial/sort/psb_d_msort_impl.f90
serial/sort/psb_d_qsort_impl.f90
serial/sort/psb_s_isort_impl.f90
serial/sort/psb_z_isort_impl.f90
serial/sort/psb_e_hsort_impl.f90
serial/sort/psb_z_msort_impl.f90
serial/sort/psb_s_msort_impl.f90
serial/sort/psb_m_hsort_impl.f90
serial/sort/psb_d_hsort_impl.f90
serial/sort/psb_e_qsort_impl.f90
serial/sort/psb_d_isort_impl.f90
serial/sort/psb_c_qsort_impl.f90
serial/psb_dasum_s.f90
serial/psi_z_serial_impl.F90
serial/psb_dsymbmm.f90
serial/psb_samax_s.f90
serial/psb_lsame.f90
serial/psb_dspspmm.f90
serial/psb_ssymbmm.f90
serial/psb_cgeprt.f90
serial/psb_sgeprt.f90
# serial/psi_i2_serial_impl.F90
serial/psi_e_serial_impl.F90
serial/psb_zsymbmm.f90
serial/psb_cspspmm.f90
serial/psb_aspxpby.f90
serial/psi_s_serial_impl.F90
serial/psb_zamax_s.f90
serial/psb_spge_dot.f90
serial/psb_zasum_s.f90
serial/psb_casum_s.f90
serial/psi_d_serial_impl.F90
serial/psi_c_serial_impl.F90
serial/psb_sspspmm.f90
serial/psb_cnumbmm.f90
psblas/psb_damax.f90
psblas/psb_dspmm.f90
psblas/psb_dasum.f90
psblas/psb_sgetmatinfo.F90
psblas/psb_dspnrm1.f90
### psblas/psb_zvmlt.f90
psblas/psb_daxpby.f90
psblas/psb_smlt_vect.f90
psblas/psb_dspsm.f90
psblas/psb_zabs_vect.f90
psblas/psb_zspmm.f90
psblas/psb_sinv_vect.f90
psblas/psb_zinv_vect.f90
psblas/psb_dmlt_vect.f90
psblas/psb_sabs_vect.f90
psblas/psb_ddot.f90
psblas/psb_camax.f90
psblas/psb_cdiv_vect.f90
psblas/psb_ddiv_vect.f90
psblas/psb_dabs_vect.f90
psblas/psb_zmlt_vect.f90
psblas/psb_caxpby.f90
psblas/psb_zaxpby.f90
psblas/psb_cspsm.f90
psblas/psb_sspnrm1.f90
psblas/psb_cabs_vect.f90
### psblas/psb_dvmlt.f90
psblas/psb_zdot.f90
psblas/psb_zgetmatinfo.F90
psblas/psb_znrm2.f90
psblas/psb_sspmm.f90
psblas/psb_cspmm.f90
psblas/psb_cnrmi.f90
psblas/psb_ccmp_vect.f90
psblas/psb_casum.f90
psblas/psb_scmp_vect.f90
### psblas/psb_svmlt.f90
psblas/psb_sdot.f90
psblas/psb_cmlt_vect.f90
psblas/psb_dnrmi.f90
psblas/psb_dcmp_vect.f90
psblas/psb_cnrm2.f90
psblas/psb_cgetmatinfo.F90
### psblas/psb_cvmlt.f90
psblas/psb_zamax.f90
psblas/psb_dinv_vect.f90
psblas/psb_dnrm2.f90
psblas/psb_zspsm.f90
psblas/psb_snrm2.f90
psblas/psb_sdiv_vect.f90
psblas/psb_zdiv_vect.f90
psblas/psb_znrmi.f90
psblas/psb_saxpby.f90
psblas/psb_zspnrm1.f90
psblas/psb_dgetmatinfo.F90
psblas/psb_sasum.f90
psblas/psb_zcmp_vect.f90
psblas/psb_samax.f90
psblas/psb_snrmi.f90
psblas/psb_cdot.f90
psblas/psb_cspnrm1.f90
psblas/psb_sspsm.f90
psblas/psb_cinv_vect.f90
psblas/psb_zasum.f90
modules/comm/psi_z_comm_v_mod.f90
# modules/comm/psb_i2_comm_a_mod.f90
modules/comm/psb_m_comm_a_mod.f90
modules/comm/psb_z_linmap_mod.f90
modules/comm/psi_s_comm_a_mod.f90
# modules/comm/psi_i2_comm_a_mod.f90
modules/comm/psi_m_comm_a_mod.f90
modules/comm/psi_l_comm_v_mod.f90
modules/comm/psb_comm_mod.f90
modules/comm/psb_l_comm_mod.f90
modules/comm/psb_d_linmap_mod.f90
modules/comm/psi_d_comm_v_mod.f90
modules/comm/psb_c_linmap_mod.f90
modules/comm/psb_s_comm_mod.f90
modules/comm/psb_base_linmap_mod.f90
modules/comm/psi_d_comm_a_mod.f90
modules/comm/psb_s_linmap_mod.f90
modules/comm/psi_s_comm_v_mod.f90
modules/comm/psb_s_comm_a_mod.f90
modules/comm/psb_c_comm_mod.f90
modules/comm/psb_i_comm_mod.f90
modules/comm/psi_c_comm_v_mod.f90
modules/comm/psb_d_comm_a_mod.f90
modules/comm/psi_z_comm_a_mod.f90
modules/comm/psb_z_comm_mod.f90
modules/comm/psi_i_comm_v_mod.f90
modules/comm/psb_e_comm_a_mod.f90
modules/comm/psb_d_comm_mod.f90
modules/comm/psi_e_comm_a_mod.f90
modules/comm/psb_c_comm_a_mod.f90
modules/comm/psb_linmap_mod.f90
modules/comm/psb_z_comm_a_mod.f90
modules/comm/psi_c_comm_a_mod.f90
# modules/auxil/psb_i2_isort_mod.f90
modules/auxil/psb_z_ip_reord_mod.F90
modules/auxil/psi_s_serial_mod.f90
modules/auxil/psb_s_hsort_x_mod.f90
modules/auxil/psb_s_qsort_mod.f90
modules/auxil/psb_d_hsort_mod.f90
modules/auxil/psi_alcx_mod.f90
modules/auxil/psb_e_ip_reord_mod.F90
# modules/auxil/psb_i2_msort_mod.f90
modules/auxil/psb_rb_idx_tree_mod.f90
modules/auxil/psb_m_isort_mod.f90
modules/auxil/psb_e_msort_mod.f90
modules/auxil/psb_c_msort_mod.f90
modules/auxil/psb_e_isort_mod.f90
modules/auxil/psb_c_rb_idx_tree_mod.f90
modules/auxil/psb_c_realloc_mod.F90
modules/auxil/psb_ip_reord_mod.F90
modules/auxil/psb_e_qsort_mod.f90
modules/auxil/psi_e_serial_mod.f90
modules/auxil/psi_serial_mod.f90
modules/auxil/psb_l_hsort_x_mod.f90
modules/auxil/psi_lcx_mod.f90
modules/auxil/psb_d_rb_idx_tree_mod.f90
modules/auxil/psb_m_realloc_mod.F90
modules/auxil/psb_z_isort_mod.f90
modules/auxil/psb_e_hsort_mod.f90
modules/auxil/psi_m_serial_mod.f90
# modules/auxil/psi_i2_serial_mod.f90
modules/auxil/psb_s_isort_mod.f90
modules/auxil/psb_e_realloc_mod.F90
modules/auxil/psb_c_hsort_mod.f90
modules/auxil/psb_z_msort_mod.f90
modules/auxil/psi_d_serial_mod.f90
modules/auxil/psb_z_qsort_mod.f90
# modules/auxil/psb_i2_hsort_mod.f90
modules/auxil/psb_m_msort_mod.f90
modules/auxil/psb_m_ip_reord_mod.F90
modules/auxil/psb_string_mod.f90
modules/auxil/psb_c_isort_mod.f90
modules/auxil/psb_d_hsort_x_mod.f90
modules/auxil/psb_s_hsort_mod.f90
modules/auxil/psb_i_hsort_x_mod.f90
modules/auxil/psb_d_qsort_mod.f90
modules/auxil/psb_s_realloc_mod.F90
modules/auxil/psb_m_hsort_mod.f90
modules/auxil/psb_z_realloc_mod.F90
modules/auxil/psb_z_rb_idx_tree_mod.f90
# modules/auxil/psb_i2_ip_reord_mod.F90
# modules/auxil/psb_i2_realloc_mod.F90
modules/auxil/psb_s_rb_idx_tree_mod.f90
modules/auxil/psb_c_hsort_x_mod.f90
modules/auxil/psb_s_ip_reord_mod.F90
modules/auxil/psb_d_isort_mod.f90
modules/auxil/psi_z_serial_mod.f90
# modules/auxil/psb_i2_qsort_mod.f90
modules/auxil/psb_d_msort_mod.f90
modules/auxil/psb_c_qsort_mod.f90
modules/auxil/psb_z_hsort_x_mod.f90
modules/auxil/psb_c_ip_reord_mod.F90
modules/auxil/psb_sort_mod.f90
modules/auxil/psi_acx_mod.f90
modules/auxil/psb_d_realloc_mod.F90
modules/auxil/psb_m_qsort_mod.f90
modules/auxil/psb_s_msort_mod.f90
modules/auxil/psi_c_serial_mod.f90
modules/auxil/psb_d_ip_reord_mod.F90
modules/auxil/psb_z_hsort_mod.f90
modules/psi_d_mod.F90
modules/psi_l_mod.F90
modules/penv/psi_d_collective_mod.F90
modules/penv/psi_m_p2p_mod.F90
# modules/penv/psi_i2_collective_mod.F90
modules/penv/psi_s_p2p_mod.F90
modules/penv/psi_e_p2p_mod.F90
modules/penv/psi_m_collective_mod.F90
modules/penv/psi_d_p2p_mod.F90
modules/penv/psi_p2p_mod.F90
modules/penv/psi_penv_mod.F90
modules/penv/psi_z_p2p_mod.F90
modules/penv/psi_c_collective_mod.F90
modules/penv/psi_collective_mod.F90
# modules/penv/psi_i2_p2p_mod.F90
modules/penv/psi_c_p2p_mod.F90
modules/penv/psi_e_collective_mod.F90
modules/penv/psi_z_collective_mod.F90
modules/penv/psi_s_collective_mod.F90
modules/psb_cbind_const_mod.F90
modules/psi_s_mod.F90
modules/psi_c_mod.F90
modules/tools/psb_s_tools_a_mod.f90
modules/tools/psb_d_tools_a_mod.f90
modules/tools/psb_z_tools_a_mod.f90
modules/tools/psb_i_tools_mod.F90
modules/tools/psb_s_tools_mod.F90
modules/tools/psb_tools_mod.f90
modules/tools/psb_m_tools_a_mod.f90
modules/tools/psb_cd_tools_mod.F90
modules/tools/psb_d_tools_mod.F90
modules/tools/psb_c_tools_mod.F90
modules/tools/psb_e_tools_a_mod.f90
# modules/tools/psb_i2_tools_a_mod.f90
modules/tools/psb_c_tools_a_mod.f90
modules/tools/psb_z_tools_mod.F90
modules/tools/psb_l_tools_mod.F90
modules/psb_realloc_mod.F90
modules/psb_check_mod.f90
modules/serial/psb_mat_mod.f90
modules/serial/psb_s_csr_mat_mod.f90
modules/serial/psb_z_mat_mod.F90
modules/serial/psb_z_vect_mod.F90
modules/serial/psb_l_base_vect_mod.F90
modules/serial/psb_c_serial_mod.f90
modules/serial/psb_z_csc_mat_mod.f90
modules/serial/psb_d_csc_mat_mod.f90
modules/serial/psb_z_serial_mod.f90
modules/serial/psb_c_base_mat_mod.F90
modules/serial/psb_z_base_mat_mod.F90
modules/serial/psb_z_csr_mat_mod.f90
modules/serial/psb_c_csc_mat_mod.f90
modules/serial/psb_z_base_vect_mod.F90
modules/serial/psb_l_vect_mod.F90
modules/serial/psb_d_csr_mat_mod.f90
modules/serial/psb_c_csr_mat_mod.f90
modules/serial/psb_s_base_mat_mod.F90
modules/serial/psb_base_mat_mod.F90
modules/serial/psb_i_base_vect_mod.F90
modules/serial/psb_s_vect_mod.F90
modules/serial/psb_s_base_vect_mod.F90
modules/serial/psb_d_base_vect_mod.F90
modules/serial/psb_c_mat_mod.F90
modules/serial/psb_d_base_mat_mod.F90
modules/serial/psb_c_vect_mod.F90
modules/serial/psb_d_mat_mod.F90
modules/serial/psb_s_mat_mod.F90
modules/serial/psb_i_vect_mod.F90
modules/serial/psb_d_vect_mod.F90
modules/serial/psb_c_base_vect_mod.F90
modules/serial/psb_vect_mod.f90
modules/serial/psb_d_serial_mod.f90
modules/serial/psb_s_csc_mat_mod.f90
modules/serial/psb_s_serial_mod.f90
modules/serial/psb_serial_mod.f90
modules/psi_mod.f90
modules/error.f90
modules/psb_const_mod.F90
modules/psblas/psb_c_psblas_mod.F90
modules/psblas/psb_s_psblas_mod.F90
modules/psblas/psb_d_psblas_mod.F90
modules/psblas/psb_z_psblas_mod.F90
modules/psblas/psb_psblas_mod.f90
modules/psb_error_impl.F90
modules/psb_penv_mod.F90
modules/psb_error_mod.F90
modules/psb_timers_mod.f90
modules/psi_i_mod.F90
modules/psi_z_mod.F90
modules/desc/psb_desc_const_mod.f90
modules/desc/psb_indx_map_mod.F90
modules/desc/psb_hash_mod.F90
modules/desc/psb_desc_mod.F90
modules/desc/psb_gen_block_map_mod.F90
modules/desc/psb_list_map_mod.F90
modules/desc/psb_repl_map_mod.F90
modules/desc/psb_hash_map_mod.F90
modules/desc/psb_glist_map_mod.F90
modules/psb_base_mod.f90
)
foreach(file IN LISTS PSB_base_source_files)
list(APPEND base_source_files ${CMAKE_CURRENT_LIST_DIR}/${file})
endforeach()
list(APPEND PSB_base_source_C_files modules/cutil.c)
list(APPEND PSB_base_source_C_files modules/desc/psb_hashval.c)
if (PSB_SERIAL_MPI)
list(APPEND PSB_base_source_C_files modules/psb_fakempi.c)
list(APPEND base_header_C_files ${CMAKE_CURRENT_LIST_DIR}/modules/psb_fakempi.h)
endif()
list(APPEND base_header_C_files ${CMAKE_CURRENT_LIST_DIR}/modules/psb_types.h)
foreach(file IN LISTS PSB_base_source_C_files)
list(APPEND base_source_C_files ${CMAKE_CURRENT_LIST_DIR}/${file})
endforeach()

@ -48,7 +48,8 @@ subroutine psi_covrl_restr_vect(x,xs,desc_a,info)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_ipk_) :: np, me, err_act, i, idx, isz
integer(psb_mpk_) :: np, me, isz
integer(psb_ipk_) :: err_act, i, idx
character(len=20) :: name, ch_err
name='psi_covrl_restr_vect'
@ -91,7 +92,8 @@ subroutine psi_covrl_restr_multivect(x,xs,desc_a,info)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_ipk_) :: np, me, err_act, i, idx, isz, nc
integer(psb_mpk_) :: np, me, isz,nc
integer(psb_ipk_) :: err_act, i, idx
character(len=20) :: name, ch_err
name='psi_covrl_restr_mv'

@ -48,7 +48,8 @@ subroutine psi_covrl_save_vect(x,xs,desc_a,info)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_ipk_) :: np, me, err_act, i, idx, isz
integer(psb_mpk_) :: np, me, isz
integer(psb_ipk_) :: err_act, i, idx
character(len=20) :: name, ch_err
name='psi_dovrl_saver1'
@ -97,7 +98,8 @@ subroutine psi_covrl_save_multivect(x,xs,desc_a,info)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_ipk_) :: np, me, err_act, i, idx, isz, nc
integer(psb_mpk_) :: np, me, isz, nc
integer(psb_ipk_) :: err_act, i, idx
character(len=20) :: name, ch_err
name='psi_dovrl_saver1'

@ -51,7 +51,8 @@ subroutine psi_covrl_upd_vect(x,desc_a,update,info)
! locals
complex(psb_spk_), allocatable :: xs(:)
type(psb_ctxt_type) :: ctxt
integer(psb_ipk_) :: np, me, err_act, i, idx, ndm, nx
integer(psb_mpk_) :: np, me, isz, nx, ndm
integer(psb_ipk_) :: err_act, i, idx
integer(psb_ipk_) :: ierr(5)
character(len=20) :: name, ch_err
@ -133,7 +134,8 @@ subroutine psi_covrl_upd_multivect(x,desc_a,update,info)
! locals
complex(psb_spk_), allocatable :: xs(:,:)
type(psb_ctxt_type) :: ctxt
integer(psb_ipk_) :: np, me, err_act, i, idx, ndm, nx, nc
integer(psb_mpk_) :: np, me, isz, ndm, nx, nc
integer(psb_ipk_) :: err_act, i, idx
integer(psb_ipk_) :: ierr(5)
character(len=20) :: name, ch_err

@ -96,11 +96,11 @@ subroutine psi_cswapdata_vect(flag,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -184,11 +184,11 @@ subroutine psi_cswap_vidx_vect(ctxt,icomm,flag,beta,y,idx, &
use psb_desc_mod
use psb_penv_mod
use psb_c_base_vect_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -203,11 +203,11 @@ subroutine psi_cswap_vidx_vect(ctxt,icomm,flag,beta,y,idx, &
integer(psb_ipk_), intent(in) :: totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_) :: np, me
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size),&
& iret, nesd, nerv
integer(psb_mpk_), allocatable :: prcid(:)
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti, n
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
@ -434,11 +434,11 @@ subroutine psi_cswapdata_multivect(flag,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -522,11 +522,11 @@ subroutine psi_cswap_vidx_multivect(ctxt,icomm,flag,beta,y,idx, &
use psb_desc_mod
use psb_penv_mod
use psb_c_base_multivect_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -541,12 +541,11 @@ subroutine psi_cswap_vidx_multivect(ctxt,icomm,flag,beta,y,idx, &
integer(psb_ipk_), intent(in) :: totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv, n
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable :: prcid(:)
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti, n
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false., debug=.false.

@ -90,15 +90,16 @@ subroutine psi_cswapdatam(flag,n,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
integer(psb_ipk_), intent(in) :: flag, n
integer(psb_mpk_), intent(in) :: n
integer(psb_ipk_), intent(in) :: flag
integer(psb_ipk_), intent(out) :: info
complex(psb_spk_) :: y(:,:), beta
complex(psb_spk_), target :: work(:)
@ -108,7 +109,8 @@ subroutine psi_cswapdatam(flag,n,beta,y,desc_a,work,info,data)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: icomm
integer(psb_ipk_) :: np, me, idxs, idxr, totxch, data_, err_act
integer(psb_mpk_) :: np, me
integer(psb_ipk_) :: idxs, idxr, totxch, data_, err_act
integer(psb_ipk_), pointer :: d_idx(:)
character(len=20) :: name
@ -161,17 +163,18 @@ subroutine psi_cswapidxm(ctxt,icomm,flag,n,beta,y,idx, &
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
type(psb_ctxt_type), intent(in) :: ctxt
integer(psb_mpk_), intent(in) :: icomm
integer(psb_ipk_), intent(in) :: flag,n
integer(psb_mpk_), intent(in) :: n
integer(psb_ipk_), intent(in) :: flag
integer(psb_ipk_), intent(out) :: info
complex(psb_spk_) :: y(:,:), beta
complex(psb_spk_), target :: work(:)
@ -179,19 +182,20 @@ subroutine psi_cswapidxm(ctxt,icomm,flag,n,beta,y,idx, &
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable, dimension(:) :: bsdidx, brvidx,&
& sdsz, rvsz, prcid, rvhd, sdhd
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false.
complex(psb_spk_), pointer, dimension(:) :: sndbuf, rcvbuf
#if !defined(FLANG)
volatile :: sndbuf, rcvbuf
#endif
character(len=20) :: name
info=psb_success_
@ -565,11 +569,11 @@ subroutine psi_cswapdatav(flag,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -583,7 +587,8 @@ subroutine psi_cswapdatav(flag,beta,y,desc_a,work,info,data)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: icomm
integer(psb_ipk_) :: np, me, idxs, idxr, totxch, data_, err_act
integer(psb_mpk_) :: np, me
integer(psb_ipk_) :: idxs, idxr, totxch, data_, err_act
integer(psb_ipk_), pointer :: d_idx(:)
character(len=20) :: name
@ -647,11 +652,11 @@ subroutine psi_cswapidxv(ctxt,icomm,flag,beta,y,idx, &
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -664,19 +669,20 @@ subroutine psi_cswapidxv(ctxt,icomm,flag,beta,y,idx, &
integer(psb_ipk_), intent(in) :: idx(:),totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv, n
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable, dimension(:) :: bsdidx, brvidx,&
& sdsz, rvsz, prcid, rvhd, sdhd
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti, n
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false.
complex(psb_spk_), pointer, dimension(:) :: sndbuf, rcvbuf
#if !defined(FLANG)
volatile :: sndbuf, rcvbuf
#endif
character(len=20) :: name
info=psb_success_
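Note on the recurring guard rename above: `MPI_MOD`/`MPI_H` become `PSB_MPI_MOD`/`PSB_MPI_H` in every file touched here. Prefixing the configure-time macros with `PSB_` keeps them from colliding with macros defined by other packages or by the MPI installation itself. A minimal sketch of the idiom (the subroutine name and arguments are illustrative, not from the library; exactly one of the two macros is assumed to be defined by configure):

```fortran
subroutine sketch_mpi_query(icomm, np, info)
  ! Either the Fortran module interface or the legacy header include
  ! is selected at preprocessing time, never both.
#ifdef PSB_MPI_MOD
  use mpi
#endif
  implicit none
#ifdef PSB_MPI_H
  include 'mpif.h'
#endif
  integer, intent(in)  :: icomm
  integer, intent(out) :: np, info
  ! np must be an integer of the kind the MPI bindings expect.
  call mpi_comm_size(icomm, np, info)
end subroutine sketch_mpi_query
```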

@ -98,11 +98,11 @@ subroutine psi_cswaptran_vect(flag,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -185,11 +185,11 @@ subroutine psi_ctran_vidx_vect(ctxt,icomm,flag,beta,y,idx,&
use psb_desc_mod
use psb_penv_mod
use psb_c_base_vect_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -204,12 +204,11 @@ subroutine psi_ctran_vidx_vect(ctxt,icomm,flag,beta,y,idx,&
integer(psb_ipk_), intent(in) :: totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv, n
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable :: prcid(:)
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti, n
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false., debug=.false.
@ -445,11 +444,11 @@ subroutine psi_cswaptran_multivect(flag,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -533,11 +532,11 @@ subroutine psi_ctran_vidx_multivect(ctxt,icomm,flag,beta,y,idx,&
use psb_desc_mod
use psb_penv_mod
use psb_c_base_multivect_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -552,12 +551,11 @@ subroutine psi_ctran_vidx_multivect(ctxt,icomm,flag,beta,y,idx,&
integer(psb_ipk_), intent(in) :: totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv, n
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable :: prcid(:)
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti, n
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false., debug=.false.

@ -94,15 +94,16 @@ subroutine psi_cswaptranm(flag,n,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
integer(psb_ipk_), intent(in) :: flag, n
integer(psb_mpk_), intent(in) :: n
integer(psb_ipk_), intent(in) :: flag
integer(psb_ipk_), intent(out) :: info
complex(psb_spk_) :: y(:,:), beta
complex(psb_spk_), target :: work(:)
@ -112,7 +113,8 @@ subroutine psi_cswaptranm(flag,n,beta,y,desc_a,work,info,data)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: icomm
integer(psb_ipk_) :: np, me, idxs, idxr, err_act, totxch, data_
integer(psb_mpk_) :: np, me
integer(psb_ipk_) :: idxs, idxr, err_act, totxch, data_
integer(psb_ipk_), pointer :: d_idx(:)
character(len=20) :: name
@ -166,36 +168,38 @@ subroutine psi_ctranidxm(ctxt,icomm,flag,n,beta,y,idx,&
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
type(psb_ctxt_type), intent(in) :: ctxt
integer(psb_mpk_), intent(in) :: icomm
integer(psb_ipk_), intent(in) :: flag,n
integer(psb_mpk_), intent(in) :: n
integer(psb_ipk_), intent(in) :: flag
integer(psb_ipk_), intent(out) :: info
complex(psb_spk_) :: y(:,:), beta
complex(psb_spk_), target :: work(:)
integer(psb_ipk_), intent(in) :: idx(:),totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable, dimension(:) :: bsdidx, brvidx,&
& sdsz, rvsz, prcid, rvhd, sdhd
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false.
complex(psb_spk_), pointer, dimension(:) :: sndbuf, rcvbuf
#if !defined(FLANG)
volatile :: sndbuf, rcvbuf
#endif
character(len=20) :: name
info=psb_success_
@ -577,11 +581,11 @@ subroutine psi_cswaptranv(flag,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -659,11 +663,11 @@ subroutine psi_ctranidxv(ctxt,icomm,flag,beta,y,idx,&
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -676,19 +680,20 @@ subroutine psi_ctranidxv(ctxt,icomm,flag,beta,y,idx,&
integer(psb_ipk_), intent(in) :: idx(:),totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv, n
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable, dimension(:) :: bsdidx, brvidx,&
& sdsz, rvsz, prcid, rvhd, sdhd
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti, n
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false.
complex(psb_spk_), pointer, dimension(:) :: sndbuf, rcvbuf
#if !defined(FLANG)
volatile :: sndbuf, rcvbuf
#endif
character(len=20) :: name
info=psb_success_

@ -48,7 +48,8 @@ subroutine psi_dovrl_restr_vect(x,xs,desc_a,info)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_ipk_) :: np, me, err_act, i, idx, isz
integer(psb_mpk_) :: np, me, isz
integer(psb_ipk_) :: err_act, i, idx
character(len=20) :: name, ch_err
name='psi_dovrl_restr_vect'
@ -91,7 +92,8 @@ subroutine psi_dovrl_restr_multivect(x,xs,desc_a,info)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_ipk_) :: np, me, err_act, i, idx, isz, nc
integer(psb_mpk_) :: np, me, isz,nc
integer(psb_ipk_) :: err_act, i, idx
character(len=20) :: name, ch_err
name='psi_dovrl_restr_mv'

@ -48,7 +48,8 @@ subroutine psi_dovrl_save_vect(x,xs,desc_a,info)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_ipk_) :: np, me, err_act, i, idx, isz
integer(psb_mpk_) :: np, me, isz
integer(psb_ipk_) :: err_act, i, idx
character(len=20) :: name, ch_err
name='psi_dovrl_saver1'
@ -97,7 +98,8 @@ subroutine psi_dovrl_save_multivect(x,xs,desc_a,info)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_ipk_) :: np, me, err_act, i, idx, isz, nc
integer(psb_mpk_) :: np, me, isz, nc
integer(psb_ipk_) :: err_act, i, idx
character(len=20) :: name, ch_err
name='psi_dovrl_saver1'

@ -51,7 +51,8 @@ subroutine psi_dovrl_upd_vect(x,desc_a,update,info)
! locals
real(psb_dpk_), allocatable :: xs(:)
type(psb_ctxt_type) :: ctxt
integer(psb_ipk_) :: np, me, err_act, i, idx, ndm, nx
integer(psb_mpk_) :: np, me, isz, nx, ndm
integer(psb_ipk_) :: err_act, i, idx
integer(psb_ipk_) :: ierr(5)
character(len=20) :: name, ch_err
@ -133,7 +134,8 @@ subroutine psi_dovrl_upd_multivect(x,desc_a,update,info)
! locals
real(psb_dpk_), allocatable :: xs(:,:)
type(psb_ctxt_type) :: ctxt
integer(psb_ipk_) :: np, me, err_act, i, idx, ndm, nx, nc
integer(psb_mpk_) :: np, me, isz, ndm, nx, nc
integer(psb_ipk_) :: err_act, i, idx
integer(psb_ipk_) :: ierr(5)
character(len=20) :: name, ch_err
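The other recurring change in these hunks is a split of the local declarations: quantities produced by or handed to MPI (process count `np`, rank `me`, sizes such as `isz`, `nesd`, `nerv`) move to the MPI-compatible kind `psb_mpk_`, while purely local bookkeeping (`err_act`, loop index `i`, `idx`) stays at the PSBLAS index kind `psb_ipk_`, which configure may set to 8 bytes. The pattern, with explanatory comments added here:

```fortran
! MPI-facing quantities: must match the INTEGER kind the MPI
! Fortran bindings expect, hence psb_mpk_.
integer(psb_mpk_) :: np, me, isz
! Local bookkeeping: uses the library-wide index kind psb_ipk_,
! which is independent of the MPI integer size.
integer(psb_ipk_) :: err_act, i, idx
```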

@ -96,11 +96,11 @@ subroutine psi_dswapdata_vect(flag,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -184,11 +184,11 @@ subroutine psi_dswap_vidx_vect(ctxt,icomm,flag,beta,y,idx, &
use psb_desc_mod
use psb_penv_mod
use psb_d_base_vect_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -203,11 +203,11 @@ subroutine psi_dswap_vidx_vect(ctxt,icomm,flag,beta,y,idx, &
integer(psb_ipk_), intent(in) :: totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_) :: np, me
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size),&
& iret, nesd, nerv
integer(psb_mpk_), allocatable :: prcid(:)
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti, n
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
@ -434,11 +434,11 @@ subroutine psi_dswapdata_multivect(flag,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -522,11 +522,11 @@ subroutine psi_dswap_vidx_multivect(ctxt,icomm,flag,beta,y,idx, &
use psb_desc_mod
use psb_penv_mod
use psb_d_base_multivect_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -541,12 +541,11 @@ subroutine psi_dswap_vidx_multivect(ctxt,icomm,flag,beta,y,idx, &
integer(psb_ipk_), intent(in) :: totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv, n
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable :: prcid(:)
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti, n
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false., debug=.false.

@ -90,15 +90,16 @@ subroutine psi_dswapdatam(flag,n,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
integer(psb_ipk_), intent(in) :: flag, n
integer(psb_mpk_), intent(in) :: n
integer(psb_ipk_), intent(in) :: flag
integer(psb_ipk_), intent(out) :: info
real(psb_dpk_) :: y(:,:), beta
real(psb_dpk_), target :: work(:)
@ -108,7 +109,8 @@ subroutine psi_dswapdatam(flag,n,beta,y,desc_a,work,info,data)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: icomm
integer(psb_ipk_) :: np, me, idxs, idxr, totxch, data_, err_act
integer(psb_mpk_) :: np, me
integer(psb_ipk_) :: idxs, idxr, totxch, data_, err_act
integer(psb_ipk_), pointer :: d_idx(:)
character(len=20) :: name
@ -161,17 +163,18 @@ subroutine psi_dswapidxm(ctxt,icomm,flag,n,beta,y,idx, &
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
type(psb_ctxt_type), intent(in) :: ctxt
integer(psb_mpk_), intent(in) :: icomm
integer(psb_ipk_), intent(in) :: flag,n
integer(psb_mpk_), intent(in) :: n
integer(psb_ipk_), intent(in) :: flag
integer(psb_ipk_), intent(out) :: info
real(psb_dpk_) :: y(:,:), beta
real(psb_dpk_), target :: work(:)
@ -179,19 +182,20 @@ subroutine psi_dswapidxm(ctxt,icomm,flag,n,beta,y,idx, &
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable, dimension(:) :: bsdidx, brvidx,&
& sdsz, rvsz, prcid, rvhd, sdhd
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false.
real(psb_dpk_), pointer, dimension(:) :: sndbuf, rcvbuf
#if !defined(FLANG)
volatile :: sndbuf, rcvbuf
#endif
character(len=20) :: name
info=psb_success_
@ -565,11 +569,11 @@ subroutine psi_dswapdatav(flag,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -583,7 +587,8 @@ subroutine psi_dswapdatav(flag,beta,y,desc_a,work,info,data)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: icomm
integer(psb_ipk_) :: np, me, idxs, idxr, totxch, data_, err_act
integer(psb_mpk_) :: np, me
integer(psb_ipk_) :: idxs, idxr, totxch, data_, err_act
integer(psb_ipk_), pointer :: d_idx(:)
character(len=20) :: name
@ -647,11 +652,11 @@ subroutine psi_dswapidxv(ctxt,icomm,flag,beta,y,idx, &
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -664,19 +669,20 @@ subroutine psi_dswapidxv(ctxt,icomm,flag,beta,y,idx, &
integer(psb_ipk_), intent(in) :: idx(:),totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv, n
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable, dimension(:) :: bsdidx, brvidx,&
& sdsz, rvsz, prcid, rvhd, sdhd
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti, n
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false.
real(psb_dpk_), pointer, dimension(:) :: sndbuf, rcvbuf
#if !defined(FLANG)
volatile :: sndbuf, rcvbuf
#endif
character(len=20) :: name
info=psb_success_
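The send/receive buffers in the index-driven exchanges carry the VOLATILE attribute, presumably so the compiler does not cache their contents across nonblocking MPI calls that fill them asynchronously; the new `#if !defined(FLANG)` guard skips the attribute when building with Flang, which (an assumption, inferred from the guard itself) does not accept VOLATILE on these pointer declarations. The guarded pattern, as it appears once per precision:

```fortran
real(psb_dpk_), pointer, dimension(:) :: sndbuf, rcvbuf
#if !defined(FLANG)
! Keep buffer contents out of registers across asynchronous MPI
! completion; omitted under Flang.
volatile :: sndbuf, rcvbuf
#endif
```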

@ -98,11 +98,11 @@ subroutine psi_dswaptran_vect(flag,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -185,11 +185,11 @@ subroutine psi_dtran_vidx_vect(ctxt,icomm,flag,beta,y,idx,&
use psb_desc_mod
use psb_penv_mod
use psb_d_base_vect_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -204,12 +204,11 @@ subroutine psi_dtran_vidx_vect(ctxt,icomm,flag,beta,y,idx,&
integer(psb_ipk_), intent(in) :: totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv, n
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable :: prcid(:)
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti, n
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false., debug=.false.
@ -445,11 +444,11 @@ subroutine psi_dswaptran_multivect(flag,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -533,11 +532,11 @@ subroutine psi_dtran_vidx_multivect(ctxt,icomm,flag,beta,y,idx,&
use psb_desc_mod
use psb_penv_mod
use psb_d_base_multivect_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -552,12 +551,11 @@ subroutine psi_dtran_vidx_multivect(ctxt,icomm,flag,beta,y,idx,&
integer(psb_ipk_), intent(in) :: totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv, n
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable :: prcid(:)
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti, n
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false., debug=.false.

@ -94,15 +94,16 @@ subroutine psi_dswaptranm(flag,n,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
integer(psb_ipk_), intent(in) :: flag, n
integer(psb_mpk_), intent(in) :: n
integer(psb_ipk_), intent(in) :: flag
integer(psb_ipk_), intent(out) :: info
real(psb_dpk_) :: y(:,:), beta
real(psb_dpk_), target :: work(:)
@ -112,7 +113,8 @@ subroutine psi_dswaptranm(flag,n,beta,y,desc_a,work,info,data)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: icomm
integer(psb_ipk_) :: np, me, idxs, idxr, err_act, totxch, data_
integer(psb_mpk_) :: np, me
integer(psb_ipk_) :: idxs, idxr, err_act, totxch, data_
integer(psb_ipk_), pointer :: d_idx(:)
character(len=20) :: name
@ -166,36 +168,38 @@ subroutine psi_dtranidxm(ctxt,icomm,flag,n,beta,y,idx,&
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
type(psb_ctxt_type), intent(in) :: ctxt
integer(psb_mpk_), intent(in) :: icomm
integer(psb_ipk_), intent(in) :: flag,n
integer(psb_mpk_), intent(in) :: n
integer(psb_ipk_), intent(in) :: flag
integer(psb_ipk_), intent(out) :: info
real(psb_dpk_) :: y(:,:), beta
real(psb_dpk_), target :: work(:)
integer(psb_ipk_), intent(in) :: idx(:),totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable, dimension(:) :: bsdidx, brvidx,&
& sdsz, rvsz, prcid, rvhd, sdhd
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false.
real(psb_dpk_), pointer, dimension(:) :: sndbuf, rcvbuf
#if !defined(FLANG)
volatile :: sndbuf, rcvbuf
#endif
character(len=20) :: name
info=psb_success_
@ -577,11 +581,11 @@ subroutine psi_dswaptranv(flag,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -659,11 +663,11 @@ subroutine psi_dtranidxv(ctxt,icomm,flag,beta,y,idx,&
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -676,19 +680,20 @@ subroutine psi_dtranidxv(ctxt,icomm,flag,beta,y,idx,&
integer(psb_ipk_), intent(in) :: idx(:),totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv, n
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable, dimension(:) :: bsdidx, brvidx,&
& sdsz, rvsz, prcid, rvhd, sdhd
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti, n
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false.
real(psb_dpk_), pointer, dimension(:) :: sndbuf, rcvbuf
#if !defined(FLANG)
volatile :: sndbuf, rcvbuf
#endif
character(len=20) :: name
info=psb_success_

@ -90,15 +90,16 @@ subroutine psi_eswapdatam(flag,n,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
integer(psb_ipk_), intent(in) :: flag, n
integer(psb_mpk_), intent(in) :: n
integer(psb_ipk_), intent(in) :: flag
integer(psb_ipk_), intent(out) :: info
integer(psb_epk_) :: y(:,:), beta
integer(psb_epk_), target :: work(:)
@ -108,7 +109,8 @@ subroutine psi_eswapdatam(flag,n,beta,y,desc_a,work,info,data)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: icomm
integer(psb_ipk_) :: np, me, idxs, idxr, totxch, data_, err_act
integer(psb_mpk_) :: np, me
integer(psb_ipk_) :: idxs, idxr, totxch, data_, err_act
integer(psb_ipk_), pointer :: d_idx(:)
character(len=20) :: name
@ -161,17 +163,18 @@ subroutine psi_eswapidxm(ctxt,icomm,flag,n,beta,y,idx, &
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
type(psb_ctxt_type), intent(in) :: ctxt
integer(psb_mpk_), intent(in) :: icomm
integer(psb_ipk_), intent(in) :: flag,n
integer(psb_mpk_), intent(in) :: n
integer(psb_ipk_), intent(in) :: flag
integer(psb_ipk_), intent(out) :: info
integer(psb_epk_) :: y(:,:), beta
integer(psb_epk_), target :: work(:)
@ -179,19 +182,20 @@ subroutine psi_eswapidxm(ctxt,icomm,flag,n,beta,y,idx, &
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable, dimension(:) :: bsdidx, brvidx,&
& sdsz, rvsz, prcid, rvhd, sdhd
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false.
integer(psb_epk_), pointer, dimension(:) :: sndbuf, rcvbuf
#if !defined(FLANG)
volatile :: sndbuf, rcvbuf
#endif
character(len=20) :: name
info=psb_success_
@ -565,11 +569,11 @@ subroutine psi_eswapdatav(flag,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -583,7 +587,8 @@ subroutine psi_eswapdatav(flag,beta,y,desc_a,work,info,data)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: icomm
integer(psb_ipk_) :: np, me, idxs, idxr, totxch, data_, err_act
integer(psb_mpk_) :: np, me
integer(psb_ipk_) :: idxs, idxr, totxch, data_, err_act
integer(psb_ipk_), pointer :: d_idx(:)
character(len=20) :: name
@ -647,11 +652,11 @@ subroutine psi_eswapidxv(ctxt,icomm,flag,beta,y,idx, &
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -664,19 +669,20 @@ subroutine psi_eswapidxv(ctxt,icomm,flag,beta,y,idx, &
integer(psb_ipk_), intent(in) :: idx(:),totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv, n
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable, dimension(:) :: bsdidx, brvidx,&
& sdsz, rvsz, prcid, rvhd, sdhd
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti, n
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false.
integer(psb_epk_), pointer, dimension(:) :: sndbuf, rcvbuf
#if !defined(FLANG)
volatile :: sndbuf, rcvbuf
#endif
character(len=20) :: name
info=psb_success_
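In the same spirit, the combined dummy-argument declaration `integer(psb_ipk_), intent(in) :: flag, n` is split throughout: `n`, the column count forwarded to MPI message-size arguments, takes the MPI kind `psb_mpk_`, while `flag`, a purely local control code, keeps `psb_ipk_` (comments added here for illustration):

```fortran
integer(psb_mpk_), intent(in) :: n     ! forwarded to MPI size arguments
integer(psb_ipk_), intent(in) :: flag  ! local control flag, index kind
```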

@ -94,15 +94,16 @@ subroutine psi_eswaptranm(flag,n,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
integer(psb_ipk_), intent(in) :: flag, n
integer(psb_mpk_), intent(in) :: n
integer(psb_ipk_), intent(in) :: flag
integer(psb_ipk_), intent(out) :: info
integer(psb_epk_) :: y(:,:), beta
integer(psb_epk_), target :: work(:)
@ -112,7 +113,8 @@ subroutine psi_eswaptranm(flag,n,beta,y,desc_a,work,info,data)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: icomm
integer(psb_ipk_) :: np, me, idxs, idxr, err_act, totxch, data_
integer(psb_mpk_) :: np, me
integer(psb_ipk_) :: idxs, idxr, err_act, totxch, data_
integer(psb_ipk_), pointer :: d_idx(:)
character(len=20) :: name
@ -166,36 +168,38 @@ subroutine psi_etranidxm(ctxt,icomm,flag,n,beta,y,idx,&
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
type(psb_ctxt_type), intent(in) :: ctxt
integer(psb_mpk_), intent(in) :: icomm
integer(psb_ipk_), intent(in) :: flag,n
integer(psb_mpk_), intent(in) :: n
integer(psb_ipk_), intent(in) :: flag
integer(psb_ipk_), intent(out) :: info
integer(psb_epk_) :: y(:,:), beta
integer(psb_epk_), target :: work(:)
integer(psb_ipk_), intent(in) :: idx(:),totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable, dimension(:) :: bsdidx, brvidx,&
& sdsz, rvsz, prcid, rvhd, sdhd
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false.
integer(psb_epk_), pointer, dimension(:) :: sndbuf, rcvbuf
#if !defined(FLANG)
volatile :: sndbuf, rcvbuf
#endif
character(len=20) :: name
info=psb_success_
@ -577,11 +581,11 @@ subroutine psi_eswaptranv(flag,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -659,11 +663,11 @@ subroutine psi_etranidxv(ctxt,icomm,flag,beta,y,idx,&
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -676,19 +680,20 @@ subroutine psi_etranidxv(ctxt,icomm,flag,beta,y,idx,&
integer(psb_ipk_), intent(in) :: idx(:),totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv, n
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable, dimension(:) :: bsdidx, brvidx,&
& sdsz, rvsz, prcid, rvhd, sdhd
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti, n
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false.
integer(psb_epk_), pointer, dimension(:) :: sndbuf, rcvbuf
#if !defined(FLANG)
volatile :: sndbuf, rcvbuf
#endif
character(len=20) :: name
info=psb_success_

@ -90,15 +90,16 @@ subroutine psi_i2swapdatam(flag,n,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
integer(psb_ipk_), intent(in) :: flag, n
integer(psb_mpk_), intent(in) :: n
integer(psb_ipk_), intent(in) :: flag
integer(psb_ipk_), intent(out) :: info
integer(psb_i2pk_) :: y(:,:), beta
integer(psb_i2pk_), target :: work(:)
@ -108,7 +109,8 @@ subroutine psi_i2swapdatam(flag,n,beta,y,desc_a,work,info,data)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: icomm
integer(psb_ipk_) :: np, me, idxs, idxr, totxch, data_, err_act
integer(psb_mpk_) :: np, me
integer(psb_ipk_) :: idxs, idxr, totxch, data_, err_act
integer(psb_ipk_), pointer :: d_idx(:)
character(len=20) :: name
@ -161,17 +163,18 @@ subroutine psi_i2swapidxm(ctxt,icomm,flag,n,beta,y,idx, &
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
type(psb_ctxt_type), intent(in) :: ctxt
integer(psb_mpk_), intent(in) :: icomm
integer(psb_ipk_), intent(in) :: flag,n
integer(psb_mpk_), intent(in) :: n
integer(psb_ipk_), intent(in) :: flag
integer(psb_ipk_), intent(out) :: info
integer(psb_i2pk_) :: y(:,:), beta
integer(psb_i2pk_), target :: work(:)
@ -179,19 +182,20 @@ subroutine psi_i2swapidxm(ctxt,icomm,flag,n,beta,y,idx, &
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable, dimension(:) :: bsdidx, brvidx,&
& sdsz, rvsz, prcid, rvhd, sdhd
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false.
integer(psb_i2pk_), pointer, dimension(:) :: sndbuf, rcvbuf
#if !defined(FLANG)
volatile :: sndbuf, rcvbuf
#endif
character(len=20) :: name
info=psb_success_
@ -565,11 +569,11 @@ subroutine psi_i2swapdatav(flag,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -583,7 +587,8 @@ subroutine psi_i2swapdatav(flag,beta,y,desc_a,work,info,data)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: icomm
integer(psb_ipk_) :: np, me, idxs, idxr, totxch, data_, err_act
integer(psb_mpk_) :: np, me
integer(psb_ipk_) :: idxs, idxr, totxch, data_, err_act
integer(psb_ipk_), pointer :: d_idx(:)
character(len=20) :: name
@ -647,11 +652,11 @@ subroutine psi_i2swapidxv(ctxt,icomm,flag,beta,y,idx, &
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -664,19 +669,20 @@ subroutine psi_i2swapidxv(ctxt,icomm,flag,beta,y,idx, &
integer(psb_ipk_), intent(in) :: idx(:),totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv, n
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable, dimension(:) :: bsdidx, brvidx,&
& sdsz, rvsz, prcid, rvhd, sdhd
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti, n
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false.
integer(psb_i2pk_), pointer, dimension(:) :: sndbuf, rcvbuf
#if !defined(FLANG)
volatile :: sndbuf, rcvbuf
#endif
character(len=20) :: name
info=psb_success_
@ -94,15 +94,16 @@ subroutine psi_i2swaptranm(flag,n,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
integer(psb_ipk_), intent(in) :: flag, n
integer(psb_mpk_), intent(in) :: n
integer(psb_ipk_), intent(in) :: flag
integer(psb_ipk_), intent(out) :: info
integer(psb_i2pk_) :: y(:,:), beta
integer(psb_i2pk_), target :: work(:)
@ -112,7 +113,8 @@ subroutine psi_i2swaptranm(flag,n,beta,y,desc_a,work,info,data)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: icomm
integer(psb_ipk_) :: np, me, idxs, idxr, err_act, totxch, data_
integer(psb_mpk_) :: np, me
integer(psb_ipk_) :: idxs, idxr, err_act, totxch, data_
integer(psb_ipk_), pointer :: d_idx(:)
character(len=20) :: name
@ -166,36 +168,38 @@ subroutine psi_i2tranidxm(ctxt,icomm,flag,n,beta,y,idx,&
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
type(psb_ctxt_type), intent(in) :: ctxt
integer(psb_mpk_), intent(in) :: icomm
integer(psb_ipk_), intent(in) :: flag,n
integer(psb_mpk_), intent(in) :: n
integer(psb_ipk_), intent(in) :: flag
integer(psb_ipk_), intent(out) :: info
integer(psb_i2pk_) :: y(:,:), beta
integer(psb_i2pk_), target :: work(:)
integer(psb_ipk_), intent(in) :: idx(:),totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable, dimension(:) :: bsdidx, brvidx,&
& sdsz, rvsz, prcid, rvhd, sdhd
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false.
integer(psb_i2pk_), pointer, dimension(:) :: sndbuf, rcvbuf
#if !defined(FLANG)
volatile :: sndbuf, rcvbuf
#endif
character(len=20) :: name
info=psb_success_
@ -577,11 +581,11 @@ subroutine psi_i2swaptranv(flag,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -659,11 +663,11 @@ subroutine psi_i2tranidxv(ctxt,icomm,flag,beta,y,idx,&
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -676,19 +680,20 @@ subroutine psi_i2tranidxv(ctxt,icomm,flag,beta,y,idx,&
integer(psb_ipk_), intent(in) :: idx(:),totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv, n
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable, dimension(:) :: bsdidx, brvidx,&
& sdsz, rvsz, prcid, rvhd, sdhd
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti, n
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false.
integer(psb_i2pk_), pointer, dimension(:) :: sndbuf, rcvbuf
#if !defined(FLANG)
volatile :: sndbuf, rcvbuf
#endif
character(len=20) :: name
info=psb_success_
@ -48,7 +48,8 @@ subroutine psi_iovrl_restr_vect(x,xs,desc_a,info)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_ipk_) :: np, me, err_act, i, idx, isz
integer(psb_mpk_) :: np, me, isz
integer(psb_ipk_) :: err_act, i, idx
character(len=20) :: name, ch_err
name='psi_iovrl_restr_vect'
@ -91,7 +92,8 @@ subroutine psi_iovrl_restr_multivect(x,xs,desc_a,info)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_ipk_) :: np, me, err_act, i, idx, isz, nc
integer(psb_mpk_) :: np, me, isz,nc
integer(psb_ipk_) :: err_act, i, idx
character(len=20) :: name, ch_err
name='psi_iovrl_restr_mv'
@ -48,7 +48,8 @@ subroutine psi_iovrl_save_vect(x,xs,desc_a,info)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_ipk_) :: np, me, err_act, i, idx, isz
integer(psb_mpk_) :: np, me, isz
integer(psb_ipk_) :: err_act, i, idx
character(len=20) :: name, ch_err
name='psi_dovrl_saver1'
@ -97,7 +98,8 @@ subroutine psi_iovrl_save_multivect(x,xs,desc_a,info)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_ipk_) :: np, me, err_act, i, idx, isz, nc
integer(psb_mpk_) :: np, me, isz, nc
integer(psb_ipk_) :: err_act, i, idx
character(len=20) :: name, ch_err
name='psi_dovrl_saver1'
@ -51,7 +51,8 @@ subroutine psi_iovrl_upd_vect(x,desc_a,update,info)
! locals
integer(psb_ipk_), allocatable :: xs(:)
type(psb_ctxt_type) :: ctxt
integer(psb_ipk_) :: np, me, err_act, i, idx, ndm, nx
integer(psb_mpk_) :: np, me, isz, nx, ndm
integer(psb_ipk_) :: err_act, i, idx
integer(psb_ipk_) :: ierr(5)
character(len=20) :: name, ch_err
@ -133,7 +134,8 @@ subroutine psi_iovrl_upd_multivect(x,desc_a,update,info)
! locals
integer(psb_ipk_), allocatable :: xs(:,:)
type(psb_ctxt_type) :: ctxt
integer(psb_ipk_) :: np, me, err_act, i, idx, ndm, nx, nc
integer(psb_mpk_) :: np, me, isz, ndm, nx, nc
integer(psb_ipk_) :: err_act, i, idx
integer(psb_ipk_) :: ierr(5)
character(len=20) :: name, ch_err
@ -96,11 +96,11 @@ subroutine psi_iswapdata_vect(flag,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -184,11 +184,11 @@ subroutine psi_iswap_vidx_vect(ctxt,icomm,flag,beta,y,idx, &
use psb_desc_mod
use psb_penv_mod
use psb_i_base_vect_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -203,11 +203,11 @@ subroutine psi_iswap_vidx_vect(ctxt,icomm,flag,beta,y,idx, &
integer(psb_ipk_), intent(in) :: totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_) :: np, me
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size),&
& iret, nesd, nerv
integer(psb_mpk_), allocatable :: prcid(:)
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti, n
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
@ -434,11 +434,11 @@ subroutine psi_iswapdata_multivect(flag,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -522,11 +522,11 @@ subroutine psi_iswap_vidx_multivect(ctxt,icomm,flag,beta,y,idx, &
use psb_desc_mod
use psb_penv_mod
use psb_i_base_multivect_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -541,12 +541,11 @@ subroutine psi_iswap_vidx_multivect(ctxt,icomm,flag,beta,y,idx, &
integer(psb_ipk_), intent(in) :: totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv, n
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable :: prcid(:)
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti, n
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false., debug=.false.
@ -98,11 +98,11 @@ subroutine psi_iswaptran_vect(flag,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -185,11 +185,11 @@ subroutine psi_itran_vidx_vect(ctxt,icomm,flag,beta,y,idx,&
use psb_desc_mod
use psb_penv_mod
use psb_i_base_vect_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -204,12 +204,11 @@ subroutine psi_itran_vidx_vect(ctxt,icomm,flag,beta,y,idx,&
integer(psb_ipk_), intent(in) :: totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv, n
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable :: prcid(:)
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti, n
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false., debug=.false.
@ -445,11 +444,11 @@ subroutine psi_iswaptran_multivect(flag,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -533,11 +532,11 @@ subroutine psi_itran_vidx_multivect(ctxt,icomm,flag,beta,y,idx,&
use psb_desc_mod
use psb_penv_mod
use psb_i_base_multivect_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -552,12 +551,11 @@ subroutine psi_itran_vidx_multivect(ctxt,icomm,flag,beta,y,idx,&
integer(psb_ipk_), intent(in) :: totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv, n
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable :: prcid(:)
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti, n
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false., debug=.false.
@ -48,7 +48,8 @@ subroutine psi_lovrl_restr_vect(x,xs,desc_a,info)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_ipk_) :: np, me, err_act, i, idx, isz
integer(psb_mpk_) :: np, me, isz
integer(psb_ipk_) :: err_act, i, idx
character(len=20) :: name, ch_err
name='psi_lovrl_restr_vect'
@ -91,7 +92,8 @@ subroutine psi_lovrl_restr_multivect(x,xs,desc_a,info)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_ipk_) :: np, me, err_act, i, idx, isz, nc
integer(psb_mpk_) :: np, me, isz,nc
integer(psb_ipk_) :: err_act, i, idx
character(len=20) :: name, ch_err
name='psi_lovrl_restr_mv'
@ -48,7 +48,8 @@ subroutine psi_lovrl_save_vect(x,xs,desc_a,info)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_ipk_) :: np, me, err_act, i, idx, isz
integer(psb_mpk_) :: np, me, isz
integer(psb_ipk_) :: err_act, i, idx
character(len=20) :: name, ch_err
name='psi_dovrl_saver1'
@ -97,7 +98,8 @@ subroutine psi_lovrl_save_multivect(x,xs,desc_a,info)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_ipk_) :: np, me, err_act, i, idx, isz, nc
integer(psb_mpk_) :: np, me, isz, nc
integer(psb_ipk_) :: err_act, i, idx
character(len=20) :: name, ch_err
name='psi_dovrl_saver1'
@ -51,7 +51,8 @@ subroutine psi_lovrl_upd_vect(x,desc_a,update,info)
! locals
integer(psb_lpk_), allocatable :: xs(:)
type(psb_ctxt_type) :: ctxt
integer(psb_ipk_) :: np, me, err_act, i, idx, ndm, nx
integer(psb_mpk_) :: np, me, isz, nx, ndm
integer(psb_ipk_) :: err_act, i, idx
integer(psb_ipk_) :: ierr(5)
character(len=20) :: name, ch_err
@ -133,7 +134,8 @@ subroutine psi_lovrl_upd_multivect(x,desc_a,update,info)
! locals
integer(psb_lpk_), allocatable :: xs(:,:)
type(psb_ctxt_type) :: ctxt
integer(psb_ipk_) :: np, me, err_act, i, idx, ndm, nx, nc
integer(psb_mpk_) :: np, me, isz, ndm, nx, nc
integer(psb_ipk_) :: err_act, i, idx
integer(psb_ipk_) :: ierr(5)
character(len=20) :: name, ch_err
@ -96,11 +96,11 @@ subroutine psi_lswapdata_vect(flag,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -184,11 +184,11 @@ subroutine psi_lswap_vidx_vect(ctxt,icomm,flag,beta,y,idx, &
use psb_desc_mod
use psb_penv_mod
use psb_l_base_vect_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -203,11 +203,11 @@ subroutine psi_lswap_vidx_vect(ctxt,icomm,flag,beta,y,idx, &
integer(psb_ipk_), intent(in) :: totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_) :: np, me
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size),&
& iret, nesd, nerv
integer(psb_mpk_), allocatable :: prcid(:)
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti, n
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
@ -434,11 +434,11 @@ subroutine psi_lswapdata_multivect(flag,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -522,11 +522,11 @@ subroutine psi_lswap_vidx_multivect(ctxt,icomm,flag,beta,y,idx, &
use psb_desc_mod
use psb_penv_mod
use psb_l_base_multivect_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -541,12 +541,11 @@ subroutine psi_lswap_vidx_multivect(ctxt,icomm,flag,beta,y,idx, &
integer(psb_ipk_), intent(in) :: totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv, n
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable :: prcid(:)
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti, n
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false., debug=.false.
@ -98,11 +98,11 @@ subroutine psi_lswaptran_vect(flag,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -185,11 +185,11 @@ subroutine psi_ltran_vidx_vect(ctxt,icomm,flag,beta,y,idx,&
use psb_desc_mod
use psb_penv_mod
use psb_l_base_vect_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -204,12 +204,11 @@ subroutine psi_ltran_vidx_vect(ctxt,icomm,flag,beta,y,idx,&
integer(psb_ipk_), intent(in) :: totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv, n
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable :: prcid(:)
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti, n
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false., debug=.false.
@ -445,11 +444,11 @@ subroutine psi_lswaptran_multivect(flag,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -533,11 +532,11 @@ subroutine psi_ltran_vidx_multivect(ctxt,icomm,flag,beta,y,idx,&
use psb_desc_mod
use psb_penv_mod
use psb_l_base_multivect_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -552,12 +551,11 @@ subroutine psi_ltran_vidx_multivect(ctxt,icomm,flag,beta,y,idx,&
integer(psb_ipk_), intent(in) :: totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv, n
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable :: prcid(:)
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti, n
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false., debug=.false.
@ -90,15 +90,16 @@ subroutine psi_mswapdatam(flag,n,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
integer(psb_ipk_), intent(in) :: flag, n
integer(psb_mpk_), intent(in) :: n
integer(psb_ipk_), intent(in) :: flag
integer(psb_ipk_), intent(out) :: info
integer(psb_mpk_) :: y(:,:), beta
integer(psb_mpk_), target :: work(:)
@ -108,7 +109,8 @@ subroutine psi_mswapdatam(flag,n,beta,y,desc_a,work,info,data)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: icomm
integer(psb_ipk_) :: np, me, idxs, idxr, totxch, data_, err_act
integer(psb_mpk_) :: np, me
integer(psb_ipk_) :: idxs, idxr, totxch, data_, err_act
integer(psb_ipk_), pointer :: d_idx(:)
character(len=20) :: name
@ -161,17 +163,18 @@ subroutine psi_mswapidxm(ctxt,icomm,flag,n,beta,y,idx, &
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
type(psb_ctxt_type), intent(in) :: ctxt
integer(psb_mpk_), intent(in) :: icomm
integer(psb_ipk_), intent(in) :: flag,n
integer(psb_mpk_), intent(in) :: n
integer(psb_ipk_), intent(in) :: flag
integer(psb_ipk_), intent(out) :: info
integer(psb_mpk_) :: y(:,:), beta
integer(psb_mpk_), target :: work(:)
@ -179,19 +182,20 @@ subroutine psi_mswapidxm(ctxt,icomm,flag,n,beta,y,idx, &
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable, dimension(:) :: bsdidx, brvidx,&
& sdsz, rvsz, prcid, rvhd, sdhd
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false.
integer(psb_mpk_), pointer, dimension(:) :: sndbuf, rcvbuf
#if !defined(FLANG)
volatile :: sndbuf, rcvbuf
#endif
character(len=20) :: name
info=psb_success_
@ -565,11 +569,11 @@ subroutine psi_mswapdatav(flag,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -583,7 +587,8 @@ subroutine psi_mswapdatav(flag,beta,y,desc_a,work,info,data)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: icomm
integer(psb_ipk_) :: np, me, idxs, idxr, totxch, data_, err_act
integer(psb_mpk_) :: np, me
integer(psb_ipk_) :: idxs, idxr, totxch, data_, err_act
integer(psb_ipk_), pointer :: d_idx(:)
character(len=20) :: name
@ -647,11 +652,11 @@ subroutine psi_mswapidxv(ctxt,icomm,flag,beta,y,idx, &
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -664,19 +669,20 @@ subroutine psi_mswapidxv(ctxt,icomm,flag,beta,y,idx, &
integer(psb_ipk_), intent(in) :: idx(:),totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv, n
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable, dimension(:) :: bsdidx, brvidx,&
& sdsz, rvsz, prcid, rvhd, sdhd
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti, n
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false.
integer(psb_mpk_), pointer, dimension(:) :: sndbuf, rcvbuf
#if !defined(FLANG)
volatile :: sndbuf, rcvbuf
#endif
character(len=20) :: name
info=psb_success_
@ -94,15 +94,16 @@ subroutine psi_mswaptranm(flag,n,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
integer(psb_ipk_), intent(in) :: flag, n
integer(psb_mpk_), intent(in) :: n
integer(psb_ipk_), intent(in) :: flag
integer(psb_ipk_), intent(out) :: info
integer(psb_mpk_) :: y(:,:), beta
integer(psb_mpk_), target :: work(:)
@ -112,7 +113,8 @@ subroutine psi_mswaptranm(flag,n,beta,y,desc_a,work,info,data)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: icomm
integer(psb_ipk_) :: np, me, idxs, idxr, err_act, totxch, data_
integer(psb_mpk_) :: np, me
integer(psb_ipk_) :: idxs, idxr, err_act, totxch, data_
integer(psb_ipk_), pointer :: d_idx(:)
character(len=20) :: name
@ -166,36 +168,38 @@ subroutine psi_mtranidxm(ctxt,icomm,flag,n,beta,y,idx,&
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
type(psb_ctxt_type), intent(in) :: ctxt
integer(psb_mpk_), intent(in) :: icomm
integer(psb_ipk_), intent(in) :: flag,n
integer(psb_mpk_), intent(in) :: n
integer(psb_ipk_), intent(in) :: flag
integer(psb_ipk_), intent(out) :: info
integer(psb_mpk_) :: y(:,:), beta
integer(psb_mpk_), target :: work(:)
integer(psb_ipk_), intent(in) :: idx(:),totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable, dimension(:) :: bsdidx, brvidx,&
& sdsz, rvsz, prcid, rvhd, sdhd
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false.
integer(psb_mpk_), pointer, dimension(:) :: sndbuf, rcvbuf
#if !defined(FLANG)
volatile :: sndbuf, rcvbuf
#endif
character(len=20) :: name
info=psb_success_
@ -577,11 +581,11 @@ subroutine psi_mswaptranv(flag,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -659,11 +663,11 @@ subroutine psi_mtranidxv(ctxt,icomm,flag,beta,y,idx,&
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -676,19 +680,20 @@ subroutine psi_mtranidxv(ctxt,icomm,flag,beta,y,idx,&
integer(psb_ipk_), intent(in) :: idx(:),totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv, n
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable, dimension(:) :: bsdidx, brvidx,&
& sdsz, rvsz, prcid, rvhd, sdhd
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti, n
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false.
integer(psb_mpk_), pointer, dimension(:) :: sndbuf, rcvbuf
#if !defined(FLANG)
volatile :: sndbuf, rcvbuf
#endif
character(len=20) :: name
info=psb_success_
@ -48,7 +48,8 @@ subroutine psi_sovrl_restr_vect(x,xs,desc_a,info)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_ipk_) :: np, me, err_act, i, idx, isz
integer(psb_mpk_) :: np, me, isz
integer(psb_ipk_) :: err_act, i, idx
character(len=20) :: name, ch_err
name='psi_sovrl_restr_vect'
@ -91,7 +92,8 @@ subroutine psi_sovrl_restr_multivect(x,xs,desc_a,info)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_ipk_) :: np, me, err_act, i, idx, isz, nc
integer(psb_mpk_) :: np, me, isz,nc
integer(psb_ipk_) :: err_act, i, idx
character(len=20) :: name, ch_err
name='psi_sovrl_restr_mv'
@ -48,7 +48,8 @@ subroutine psi_sovrl_save_vect(x,xs,desc_a,info)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_ipk_) :: np, me, err_act, i, idx, isz
integer(psb_mpk_) :: np, me, isz
integer(psb_ipk_) :: err_act, i, idx
character(len=20) :: name, ch_err
name='psi_dovrl_saver1'
@ -97,7 +98,8 @@ subroutine psi_sovrl_save_multivect(x,xs,desc_a,info)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_ipk_) :: np, me, err_act, i, idx, isz, nc
integer(psb_mpk_) :: np, me, isz, nc
integer(psb_ipk_) :: err_act, i, idx
character(len=20) :: name, ch_err
name='psi_dovrl_saver1'
@ -51,7 +51,8 @@ subroutine psi_sovrl_upd_vect(x,desc_a,update,info)
! locals
real(psb_spk_), allocatable :: xs(:)
type(psb_ctxt_type) :: ctxt
integer(psb_ipk_) :: np, me, err_act, i, idx, ndm, nx
integer(psb_mpk_) :: np, me, isz, nx, ndm
integer(psb_ipk_) :: err_act, i, idx
integer(psb_ipk_) :: ierr(5)
character(len=20) :: name, ch_err
@ -133,7 +134,8 @@ subroutine psi_sovrl_upd_multivect(x,desc_a,update,info)
! locals
real(psb_spk_), allocatable :: xs(:,:)
type(psb_ctxt_type) :: ctxt
integer(psb_ipk_) :: np, me, err_act, i, idx, ndm, nx, nc
integer(psb_mpk_) :: np, me, isz, ndm, nx, nc
integer(psb_ipk_) :: err_act, i, idx
integer(psb_ipk_) :: ierr(5)
character(len=20) :: name, ch_err
@ -96,11 +96,11 @@ subroutine psi_sswapdata_vect(flag,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -184,11 +184,11 @@ subroutine psi_sswap_vidx_vect(ctxt,icomm,flag,beta,y,idx, &
use psb_desc_mod
use psb_penv_mod
use psb_s_base_vect_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -203,11 +203,11 @@ subroutine psi_sswap_vidx_vect(ctxt,icomm,flag,beta,y,idx, &
integer(psb_ipk_), intent(in) :: totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_) :: np, me
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size),&
& iret, nesd, nerv
integer(psb_mpk_), allocatable :: prcid(:)
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti, n
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
@ -434,11 +434,11 @@ subroutine psi_sswapdata_multivect(flag,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -522,11 +522,11 @@ subroutine psi_sswap_vidx_multivect(ctxt,icomm,flag,beta,y,idx, &
use psb_desc_mod
use psb_penv_mod
use psb_s_base_multivect_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -541,12 +541,11 @@ subroutine psi_sswap_vidx_multivect(ctxt,icomm,flag,beta,y,idx, &
integer(psb_ipk_), intent(in) :: totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv, n
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable :: prcid(:)
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti, n
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false., debug=.false.
@ -90,15 +90,16 @@ subroutine psi_sswapdatam(flag,n,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
integer(psb_ipk_), intent(in) :: flag, n
integer(psb_mpk_), intent(in) :: n
integer(psb_ipk_), intent(in) :: flag
integer(psb_ipk_), intent(out) :: info
real(psb_spk_) :: y(:,:), beta
real(psb_spk_), target :: work(:)
@ -108,7 +109,8 @@ subroutine psi_sswapdatam(flag,n,beta,y,desc_a,work,info,data)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: icomm
integer(psb_ipk_) :: np, me, idxs, idxr, totxch, data_, err_act
integer(psb_mpk_) :: np, me
integer(psb_ipk_) :: idxs, idxr, totxch, data_, err_act
integer(psb_ipk_), pointer :: d_idx(:)
character(len=20) :: name
@ -161,17 +163,18 @@ subroutine psi_sswapidxm(ctxt,icomm,flag,n,beta,y,idx, &
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
type(psb_ctxt_type), intent(in) :: ctxt
integer(psb_mpk_), intent(in) :: icomm
integer(psb_ipk_), intent(in) :: flag,n
integer(psb_mpk_), intent(in) :: n
integer(psb_ipk_), intent(in) :: flag
integer(psb_ipk_), intent(out) :: info
real(psb_spk_) :: y(:,:), beta
real(psb_spk_), target :: work(:)
@ -179,19 +182,20 @@ subroutine psi_sswapidxm(ctxt,icomm,flag,n,beta,y,idx, &
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable, dimension(:) :: bsdidx, brvidx,&
& sdsz, rvsz, prcid, rvhd, sdhd
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false.
real(psb_spk_), pointer, dimension(:) :: sndbuf, rcvbuf
#if !defined(FLANG)
volatile :: sndbuf, rcvbuf
#endif
character(len=20) :: name
info=psb_success_
@ -565,11 +569,11 @@ subroutine psi_sswapdatav(flag,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -583,7 +587,8 @@ subroutine psi_sswapdatav(flag,beta,y,desc_a,work,info,data)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: icomm
integer(psb_ipk_) :: np, me, idxs, idxr, totxch, data_, err_act
integer(psb_mpk_) :: np, me
integer(psb_ipk_) :: idxs, idxr, totxch, data_, err_act
integer(psb_ipk_), pointer :: d_idx(:)
character(len=20) :: name
@ -647,11 +652,11 @@ subroutine psi_sswapidxv(ctxt,icomm,flag,beta,y,idx, &
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -664,19 +669,20 @@ subroutine psi_sswapidxv(ctxt,icomm,flag,beta,y,idx, &
integer(psb_ipk_), intent(in) :: idx(:),totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv, n
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable, dimension(:) :: bsdidx, brvidx,&
& sdsz, rvsz, prcid, rvhd, sdhd
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti, n
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false.
real(psb_spk_), pointer, dimension(:) :: sndbuf, rcvbuf
#if !defined(FLANG)
volatile :: sndbuf, rcvbuf
#endif
character(len=20) :: name
info=psb_success_

@ -98,11 +98,11 @@ subroutine psi_sswaptran_vect(flag,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -185,11 +185,11 @@ subroutine psi_stran_vidx_vect(ctxt,icomm,flag,beta,y,idx,&
use psb_desc_mod
use psb_penv_mod
use psb_s_base_vect_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -204,12 +204,11 @@ subroutine psi_stran_vidx_vect(ctxt,icomm,flag,beta,y,idx,&
integer(psb_ipk_), intent(in) :: totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv, n
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable :: prcid(:)
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti, n
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false., debug=.false.
@ -445,11 +444,11 @@ subroutine psi_sswaptran_multivect(flag,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -533,11 +532,11 @@ subroutine psi_stran_vidx_multivect(ctxt,icomm,flag,beta,y,idx,&
use psb_desc_mod
use psb_penv_mod
use psb_s_base_multivect_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -552,12 +551,11 @@ subroutine psi_stran_vidx_multivect(ctxt,icomm,flag,beta,y,idx,&
integer(psb_ipk_), intent(in) :: totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv, n
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable :: prcid(:)
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti, n
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false., debug=.false.

@ -94,15 +94,16 @@ subroutine psi_sswaptranm(flag,n,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
integer(psb_ipk_), intent(in) :: flag, n
integer(psb_mpk_), intent(in) :: n
integer(psb_ipk_), intent(in) :: flag
integer(psb_ipk_), intent(out) :: info
real(psb_spk_) :: y(:,:), beta
real(psb_spk_), target :: work(:)
@ -112,7 +113,8 @@ subroutine psi_sswaptranm(flag,n,beta,y,desc_a,work,info,data)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: icomm
integer(psb_ipk_) :: np, me, idxs, idxr, err_act, totxch, data_
integer(psb_mpk_) :: np, me
integer(psb_ipk_) :: idxs, idxr, err_act, totxch, data_
integer(psb_ipk_), pointer :: d_idx(:)
character(len=20) :: name
@ -166,36 +168,38 @@ subroutine psi_stranidxm(ctxt,icomm,flag,n,beta,y,idx,&
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
type(psb_ctxt_type), intent(in) :: ctxt
integer(psb_mpk_), intent(in) :: icomm
integer(psb_ipk_), intent(in) :: flag,n
integer(psb_mpk_), intent(in) :: n
integer(psb_ipk_), intent(in) :: flag
integer(psb_ipk_), intent(out) :: info
real(psb_spk_) :: y(:,:), beta
real(psb_spk_), target :: work(:)
integer(psb_ipk_), intent(in) :: idx(:),totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable, dimension(:) :: bsdidx, brvidx,&
& sdsz, rvsz, prcid, rvhd, sdhd
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false.
real(psb_spk_), pointer, dimension(:) :: sndbuf, rcvbuf
#if !defined(FLANG)
volatile :: sndbuf, rcvbuf
#endif
character(len=20) :: name
info=psb_success_
@ -577,11 +581,11 @@ subroutine psi_sswaptranv(flag,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -659,11 +663,11 @@ subroutine psi_stranidxv(ctxt,icomm,flag,beta,y,idx,&
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -676,19 +680,20 @@ subroutine psi_stranidxv(ctxt,icomm,flag,beta,y,idx,&
integer(psb_ipk_), intent(in) :: idx(:),totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv, n
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable, dimension(:) :: bsdidx, brvidx,&
& sdsz, rvsz, prcid, rvhd, sdhd
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti, n
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false.
real(psb_spk_), pointer, dimension(:) :: sndbuf, rcvbuf
#if !defined(FLANG)
volatile :: sndbuf, rcvbuf
#endif
character(len=20) :: name
info=psb_success_

@ -48,7 +48,8 @@ subroutine psi_zovrl_restr_vect(x,xs,desc_a,info)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_ipk_) :: np, me, err_act, i, idx, isz
integer(psb_mpk_) :: np, me, isz
integer(psb_ipk_) :: err_act, i, idx
character(len=20) :: name, ch_err
name='psi_zovrl_restr_vect'
@ -91,7 +92,8 @@ subroutine psi_zovrl_restr_multivect(x,xs,desc_a,info)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_ipk_) :: np, me, err_act, i, idx, isz, nc
integer(psb_mpk_) :: np, me, isz,nc
integer(psb_ipk_) :: err_act, i, idx
character(len=20) :: name, ch_err
name='psi_zovrl_restr_mv'

@ -48,7 +48,8 @@ subroutine psi_zovrl_save_vect(x,xs,desc_a,info)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_ipk_) :: np, me, err_act, i, idx, isz
integer(psb_mpk_) :: np, me, isz
integer(psb_ipk_) :: err_act, i, idx
character(len=20) :: name, ch_err
name='psi_dovrl_saver1'
@ -97,7 +98,8 @@ subroutine psi_zovrl_save_multivect(x,xs,desc_a,info)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_ipk_) :: np, me, err_act, i, idx, isz, nc
integer(psb_mpk_) :: np, me, isz, nc
integer(psb_ipk_) :: err_act, i, idx
character(len=20) :: name, ch_err
name='psi_dovrl_saver1'

@ -51,7 +51,8 @@ subroutine psi_zovrl_upd_vect(x,desc_a,update,info)
! locals
complex(psb_dpk_), allocatable :: xs(:)
type(psb_ctxt_type) :: ctxt
integer(psb_ipk_) :: np, me, err_act, i, idx, ndm, nx
integer(psb_mpk_) :: np, me, isz, nx, ndm
integer(psb_ipk_) :: err_act, i, idx
integer(psb_ipk_) :: ierr(5)
character(len=20) :: name, ch_err
@ -133,7 +134,8 @@ subroutine psi_zovrl_upd_multivect(x,desc_a,update,info)
! locals
complex(psb_dpk_), allocatable :: xs(:,:)
type(psb_ctxt_type) :: ctxt
integer(psb_ipk_) :: np, me, err_act, i, idx, ndm, nx, nc
integer(psb_mpk_) :: np, me, isz, ndm, nx, nc
integer(psb_ipk_) :: err_act, i, idx
integer(psb_ipk_) :: ierr(5)
character(len=20) :: name, ch_err

@ -96,11 +96,11 @@ subroutine psi_zswapdata_vect(flag,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -184,11 +184,11 @@ subroutine psi_zswap_vidx_vect(ctxt,icomm,flag,beta,y,idx, &
use psb_desc_mod
use psb_penv_mod
use psb_z_base_vect_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -203,11 +203,11 @@ subroutine psi_zswap_vidx_vect(ctxt,icomm,flag,beta,y,idx, &
integer(psb_ipk_), intent(in) :: totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_) :: np, me
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size),&
& iret, nesd, nerv
integer(psb_mpk_), allocatable :: prcid(:)
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti, n
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
@ -434,11 +434,11 @@ subroutine psi_zswapdata_multivect(flag,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -522,11 +522,11 @@ subroutine psi_zswap_vidx_multivect(ctxt,icomm,flag,beta,y,idx, &
use psb_desc_mod
use psb_penv_mod
use psb_z_base_multivect_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -541,12 +541,11 @@ subroutine psi_zswap_vidx_multivect(ctxt,icomm,flag,beta,y,idx, &
integer(psb_ipk_), intent(in) :: totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv, n
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable :: prcid(:)
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti, n
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false., debug=.false.

@ -90,15 +90,16 @@ subroutine psi_zswapdatam(flag,n,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
integer(psb_ipk_), intent(in) :: flag, n
integer(psb_mpk_), intent(in) :: n
integer(psb_ipk_), intent(in) :: flag
integer(psb_ipk_), intent(out) :: info
complex(psb_dpk_) :: y(:,:), beta
complex(psb_dpk_), target :: work(:)
@ -108,7 +109,8 @@ subroutine psi_zswapdatam(flag,n,beta,y,desc_a,work,info,data)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: icomm
integer(psb_ipk_) :: np, me, idxs, idxr, totxch, data_, err_act
integer(psb_mpk_) :: np, me
integer(psb_ipk_) :: idxs, idxr, totxch, data_, err_act
integer(psb_ipk_), pointer :: d_idx(:)
character(len=20) :: name
@ -161,17 +163,18 @@ subroutine psi_zswapidxm(ctxt,icomm,flag,n,beta,y,idx, &
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
type(psb_ctxt_type), intent(in) :: ctxt
integer(psb_mpk_), intent(in) :: icomm
integer(psb_ipk_), intent(in) :: flag,n
integer(psb_mpk_), intent(in) :: n
integer(psb_ipk_), intent(in) :: flag
integer(psb_ipk_), intent(out) :: info
complex(psb_dpk_) :: y(:,:), beta
complex(psb_dpk_), target :: work(:)
@ -179,19 +182,20 @@ subroutine psi_zswapidxm(ctxt,icomm,flag,n,beta,y,idx, &
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable, dimension(:) :: bsdidx, brvidx,&
& sdsz, rvsz, prcid, rvhd, sdhd
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false.
complex(psb_dpk_), pointer, dimension(:) :: sndbuf, rcvbuf
#if !defined(FLANG)
volatile :: sndbuf, rcvbuf
#endif
character(len=20) :: name
info=psb_success_
@ -565,11 +569,11 @@ subroutine psi_zswapdatav(flag,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -583,7 +587,8 @@ subroutine psi_zswapdatav(flag,beta,y,desc_a,work,info,data)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: icomm
integer(psb_ipk_) :: np, me, idxs, idxr, totxch, data_, err_act
integer(psb_mpk_) :: np, me
integer(psb_ipk_) :: idxs, idxr, totxch, data_, err_act
integer(psb_ipk_), pointer :: d_idx(:)
character(len=20) :: name
@ -647,11 +652,11 @@ subroutine psi_zswapidxv(ctxt,icomm,flag,beta,y,idx, &
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -664,19 +669,20 @@ subroutine psi_zswapidxv(ctxt,icomm,flag,beta,y,idx, &
integer(psb_ipk_), intent(in) :: idx(:),totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv, n
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable, dimension(:) :: bsdidx, brvidx,&
& sdsz, rvsz, prcid, rvhd, sdhd
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti, n
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false.
complex(psb_dpk_), pointer, dimension(:) :: sndbuf, rcvbuf
#if !defined(FLANG)
volatile :: sndbuf, rcvbuf
#endif
character(len=20) :: name
info=psb_success_

@ -98,11 +98,11 @@ subroutine psi_zswaptran_vect(flag,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -185,11 +185,11 @@ subroutine psi_ztran_vidx_vect(ctxt,icomm,flag,beta,y,idx,&
use psb_desc_mod
use psb_penv_mod
use psb_z_base_vect_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -204,12 +204,11 @@ subroutine psi_ztran_vidx_vect(ctxt,icomm,flag,beta,y,idx,&
integer(psb_ipk_), intent(in) :: totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv, n
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable :: prcid(:)
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti, n
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false., debug=.false.
@ -445,11 +444,11 @@ subroutine psi_zswaptran_multivect(flag,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -533,11 +532,11 @@ subroutine psi_ztran_vidx_multivect(ctxt,icomm,flag,beta,y,idx,&
use psb_desc_mod
use psb_penv_mod
use psb_z_base_multivect_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -552,12 +551,11 @@ subroutine psi_ztran_vidx_multivect(ctxt,icomm,flag,beta,y,idx,&
integer(psb_ipk_), intent(in) :: totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv, n
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable :: prcid(:)
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti, n
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false., debug=.false.

@ -94,15 +94,16 @@ subroutine psi_zswaptranm(flag,n,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
integer(psb_ipk_), intent(in) :: flag, n
integer(psb_mpk_), intent(in) :: n
integer(psb_ipk_), intent(in) :: flag
integer(psb_ipk_), intent(out) :: info
complex(psb_dpk_) :: y(:,:), beta
complex(psb_dpk_), target :: work(:)
@ -112,7 +113,8 @@ subroutine psi_zswaptranm(flag,n,beta,y,desc_a,work,info,data)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: icomm
integer(psb_ipk_) :: np, me, idxs, idxr, err_act, totxch, data_
integer(psb_mpk_) :: np, me
integer(psb_ipk_) :: idxs, idxr, err_act, totxch, data_
integer(psb_ipk_), pointer :: d_idx(:)
character(len=20) :: name
@ -166,36 +168,38 @@ subroutine psi_ztranidxm(ctxt,icomm,flag,n,beta,y,idx,&
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
type(psb_ctxt_type), intent(in) :: ctxt
integer(psb_mpk_), intent(in) :: icomm
integer(psb_ipk_), intent(in) :: flag,n
integer(psb_mpk_), intent(in) :: n
integer(psb_ipk_), intent(in) :: flag
integer(psb_ipk_), intent(out) :: info
complex(psb_dpk_) :: y(:,:), beta
complex(psb_dpk_), target :: work(:)
integer(psb_ipk_), intent(in) :: idx(:),totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable, dimension(:) :: bsdidx, brvidx,&
& sdsz, rvsz, prcid, rvhd, sdhd
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false.
complex(psb_dpk_), pointer, dimension(:) :: sndbuf, rcvbuf
#if !defined(FLANG)
volatile :: sndbuf, rcvbuf
#endif
character(len=20) :: name
info=psb_success_
@ -577,11 +581,11 @@ subroutine psi_zswaptranv(flag,beta,y,desc_a,work,info,data)
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -659,11 +663,11 @@ subroutine psi_ztranidxv(ctxt,icomm,flag,beta,y,idx,&
use psb_error_mod
use psb_desc_mod
use psb_penv_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -676,19 +680,20 @@ subroutine psi_ztranidxv(ctxt,icomm,flag,beta,y,idx,&
integer(psb_ipk_), intent(in) :: idx(:),totxch,totsnd, totrcv
! locals
integer(psb_ipk_) :: np, me
integer(psb_mpk_) :: np, me, nesd, nerv, n
integer(psb_mpk_) :: proc_to_comm, p2ptag, p2pstat(mpi_status_size), iret
integer(psb_mpk_), allocatable, dimension(:) :: bsdidx, brvidx,&
& sdsz, rvsz, prcid, rvhd, sdhd
integer(psb_ipk_) :: nesd, nerv,&
& err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti, n
integer(psb_ipk_) :: err_act, i, idx_pt, totsnd_, totrcv_,&
& snd_pt, rcv_pt, pnti
logical :: swap_mpi, swap_sync, swap_send, swap_recv,&
& albf,do_send,do_recv
logical, parameter :: usersend=.false.
complex(psb_dpk_), pointer, dimension(:) :: sndbuf, rcvbuf
#if !defined(FLANG)
volatile :: sndbuf, rcvbuf
#endif
character(len=20) :: name
info=psb_success_

@ -58,10 +58,11 @@ subroutine psb_cgather_vect(globx, locx, desc_a, info, iroot)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: np, me, root, iiroot, icomm, myrank, rootrank
integer(psb_mpk_) :: np, me, root, iiroot, icomm, myrank, rootrank, loc_rows
integer(psb_ipk_) :: ierr(5), err_act, jlx, ilx, lda_locx, lda_globx, i
integer(psb_lpk_) :: m, n, k, ilocx, jlocx, idx, iglobx, jglobx
complex(psb_spk_), allocatable :: llocx(:)
integer(psb_mpk_), allocatable :: szs(:)
character(len=20) :: name, ch_err
name='psb_cgatherv'
@ -125,32 +126,34 @@ subroutine psb_cgather_vect(globx, locx, desc_a, info, iroot)
goto 9999
end if
call psb_realloc(m,globx,info)
if (info /= psb_success_) then
info=psb_err_alloc_dealloc_
call psb_errpush(info,name)
goto 9999
end if
globx(:) = czero
llocx = locx%get_vect()
do i=1,desc_a%get_local_rows()
call psb_loc_to_glob(i,idx,desc_a,info)
globx(idx) = llocx(i)
end do
llocx = locx%get_vect()
! adjust overlapped elements
do i=1, size(desc_a%ovrlap_elem,1)
if (me /= desc_a%ovrlap_elem(i,3)) then
idx = desc_a%ovrlap_elem(i,1)
call psb_loc_to_glob(idx,desc_a,info)
globx(idx) = czero
llocx(idx) = czero
end if
end do
call psb_sum(ctxt,globx(1:m),root=root)
allocate(szs(np))
loc_rows = desc_a%get_local_rows()
call psb_gather(ctxt,loc_rows,szs,root=root)
if ((me == root).or.(root == -1)) then
if (sum(szs) /= m) then
info=psb_err_internal_error_
call psb_errpush(info,name)
goto 9999
end if
call psb_realloc(m,globx,info)
if (info /= psb_success_) then
info=psb_err_alloc_dealloc_
call psb_errpush(info,name)
goto 9999
end if
end if
call psb_gatherv(ctxt,llocx(1:loc_rows),globx,szs,root=root)
call psb_erractionrestore(err_act)
return

@ -66,8 +66,8 @@ subroutine psb_chalom(x,desc_a,info,jx,ik,work,tran,mode,data)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: np, me
integer(psb_ipk_) :: err_act, iix, jjx, k, maxk, nrow, imode, i,&
integer(psb_mpk_) :: np, me, k
integer(psb_ipk_) :: err_act, iix, jjx, maxk, nrow, imode, i,&
& liwork,data_, ldx
integer(psb_lpk_) :: m, n, ix, ijx
complex(psb_spk_),pointer :: iwork(:), xp(:,:)

@ -77,8 +77,8 @@ subroutine psb_covrlm(x,desc_a,info,jx,ik,work,update,mode)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: np, me
integer(psb_ipk_) :: err_act, iix, jjx, nrow, ncol, k, maxk, update_,&
integer(psb_mpk_) :: np, me, k
integer(psb_ipk_) :: err_act, iix, jjx, nrow, ncol, maxk, update_,&
& mode_, liwork, ldx
integer(psb_lpk_) :: m, n, ix, ijx
complex(psb_spk_),pointer :: iwork(:), xp(:,:)

@ -46,11 +46,11 @@
subroutine psb_cscatterm(globx, locx, desc_a, info, root)
use psb_base_mod, psb_protect_name => psb_cscatterm
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -63,7 +63,8 @@ subroutine psb_cscatterm(globx, locx, desc_a, info, root)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: np, me, iroot, icomm, myrank, rootrank, iam, nlr
integer(psb_mpk_) :: np, me, iroot, icomm, myrank, rootrank, iam,&
& nlr, minfo
integer(psb_ipk_) :: ierr(5), err_act, nrow,&
& ilocx, jlocx, lda_locx, lda_globx, lock, globk, k, maxk, &
& col,pos
@ -162,13 +163,13 @@ subroutine psb_cscatterm(globx, locx, desc_a, info, root)
rootrank = psb_get_mpi_rank(ctxt,iroot)
!
! This is potentially unsafe when IPK=8
! But then, IPK=8 is highly experimental anyway.
! This is potentially unsafe when PSB_IPK=8
! But then, PSB_IPK=8 is highly experimental anyway.
!
nlr = nrow
call mpi_gather(nlr,1,psb_mpi_mpk_,all_dim,&
& 1,psb_mpi_mpk_,rootrank,icomm,info)
& 1,psb_mpi_mpk_,rootrank,icomm,minfo)
info = minfo
if (iam == iroot) then
displ(1)=0
do i=2,np
@ -195,8 +196,8 @@ subroutine psb_cscatterm(globx, locx, desc_a, info, root)
call mpi_gatherv(ltg,nlr,&
& psb_mpi_lpk_,l_t_g_all,all_dim,&
& displ,psb_mpi_lpk_,rootrank,icomm,info)
& displ,psb_mpi_lpk_,rootrank,icomm,minfo)
info = minfo
do col=1, k
! prepare vector to scatter
if(iam == iroot) then
@ -211,9 +212,9 @@ subroutine psb_cscatterm(globx, locx, desc_a, info, root)
! scatter
call mpi_scatterv(scatterv,all_dim,displ,&
& psb_mpi_c_spk_,locx(1,col),nrow,&
& psb_mpi_c_spk_,rootrank,icomm,info)
& psb_mpi_c_spk_,locx(1,col),nlr,&
& psb_mpi_c_spk_,rootrank,icomm,minfo)
info = minfo
end do
deallocate(l_t_g_all, scatterv,stat=info)
@ -291,11 +292,11 @@ end subroutine psb_cscatterm
!
subroutine psb_cscatterv(globx, locx, desc_a, info, root)
use psb_base_mod, psb_protect_name => psb_cscatterv
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -308,7 +309,7 @@ subroutine psb_cscatterv(globx, locx, desc_a, info, root)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: np, iam, iroot, iiroot, icomm, myrank, rootrank, nlr
integer(psb_mpk_) :: np, iam, iroot, iiroot, icomm, myrank, rootrank, nlr, minfo
integer(psb_ipk_) :: ierr(5), err_act, nrow,&
& ilocx, jlocx, lda_locx, lda_globx, k, pos, ilx, jlx
integer(psb_lpk_) :: m, n, i, j, idx, iglobx, jglobx
@ -398,13 +399,13 @@ subroutine psb_cscatterv(globx, locx, desc_a, info, root)
else
rootrank = psb_get_mpi_rank(ctxt,iroot)
!
! This is potentially unsafe when IPK=8
! But then, IPK=8 is highly experimental anyway.
! This is potentially unsafe when PSB_IPK=8
! But then, PSB_IPK=8 is highly experimental anyway.
!
nlr = nrow
call mpi_gather(nlr,1,psb_mpi_mpk_,all_dim,&
& 1,psb_mpi_mpk_,rootrank,icomm,info)
& 1,psb_mpi_mpk_,rootrank,icomm,minfo)
info = minfo
if(iam == iroot) then
displ(1)=0
do i=2,np
@ -436,8 +437,8 @@ subroutine psb_cscatterv(globx, locx, desc_a, info, root)
call mpi_gatherv(ltg,nlr,&
& psb_mpi_lpk_,l_t_g_all,all_dim,&
& displ,psb_mpi_lpk_,rootrank,icomm,info)
& displ,psb_mpi_lpk_,rootrank,icomm,minfo)
info = minfo
! prepare vector to scatter
if (iam == iroot) then
do i=1,np
@ -451,9 +452,9 @@ subroutine psb_cscatterv(globx, locx, desc_a, info, root)
end if
call mpi_scatterv(scatterv,all_dim,displ,&
& psb_mpi_c_spk_,locx,nrow,&
& psb_mpi_c_spk_,rootrank,icomm,info)
& psb_mpi_c_spk_,locx,nlr,&
& psb_mpi_c_spk_,rootrank,icomm,minfo)
info = minfo
deallocate(l_t_g_all, scatterv,stat=info)
if(info /= psb_success_) then
info=psb_err_from_subroutine_

@ -33,8 +33,8 @@
!
! Gathers a sparse matrix onto a single process.
! Two variants:
! 1. Gathers to PSB_c_SPARSE_MAT (i.e. to matrix with IPK_ indices)
! 2. Gathers to PSB_lc_SPARSE_MAT (i.e. to matrix with LPK_ indices)
! 1. Gathers to PSB_c_SPARSE_MAT (i.e. to matrix with PSB_IPK_ indices)
! 2. Gathers to PSB_lc_SPARSE_MAT (i.e. to matrix with PSB_LPK_ indices)
!
! Note: this function uses MPI_ALLGATHERV. At this time, the size of the
! resulting matrix must be within the range of 4 bytes because of the
@ -48,11 +48,12 @@ subroutine psb_csp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,keep
use psb_penv_mod
use psb_mat_mod
use psb_tools_mod
#ifdef MPI_MOD
use iso_c_binding
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
type(psb_cspmat_type), intent(inout) :: loca
@ -62,7 +63,7 @@ subroutine psb_csp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,keep
integer(psb_ipk_), intent(in), optional :: root, dupl
logical, intent(in), optional :: keepnum,keeploc
type(psb_c_coo_sparse_mat) :: loc_coo, glob_coo
type(psb_c_coo_sparse_mat), target :: loc_coo, glob_coo
integer(psb_ipk_) :: nrg, ncg, nzg, nzl
integer(psb_ipk_) :: err_act, dupl_
integer(psb_ipk_) :: ip,naggrm1,naggrp1, i, j, k
@ -156,27 +157,27 @@ subroutine psb_csp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,keep
enddo
ndx = nzbr(me+1)
if (root_ == -1) then
call mpi_allgatherv(loc_coo%val,ndx,psb_mpi_c_spk_,&
call mpi_allgatherv((loc_coo%val),ndx,psb_mpi_c_spk_,&
& glob_coo%val,nzbr,idisp,&
& psb_mpi_c_spk_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_allgatherv(locia,ndx,psb_mpi_lpk_,&
& mpi_allgatherv((locia),ndx,psb_mpi_lpk_,&
& glbia,nzbr,idisp,&
& psb_mpi_lpk_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_allgatherv(locja,ndx,psb_mpi_lpk_,&
& mpi_allgatherv((locja),ndx,psb_mpi_lpk_,&
& glbja,nzbr,idisp,&
& psb_mpi_lpk_,icomm,minfo)
else
call mpi_gatherv(loc_coo%val,ndx,psb_mpi_c_spk_,&
call mpi_gatherv((loc_coo%val),ndx,psb_mpi_c_spk_,&
& glob_coo%val,nzbr,idisp,&
& psb_mpi_c_spk_,root_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_gatherv(locia,ndx,psb_mpi_lpk_,&
& mpi_gatherv((locia),ndx,psb_mpi_lpk_,&
& glbia,nzbr,idisp,&
& psb_mpi_lpk_,root_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_gatherv(locja,ndx,psb_mpi_lpk_,&
& mpi_gatherv((locja),ndx,psb_mpi_lpk_,&
& glbja,nzbr,idisp,&
& psb_mpi_lpk_,root_,icomm,minfo)
@ -231,11 +232,12 @@ subroutine psb_lcsp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,kee
use psb_penv_mod
use psb_mat_mod
use psb_tools_mod
#ifdef MPI_MOD
use iso_c_binding
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
type(psb_cspmat_type), intent(inout) :: loca
@ -245,7 +247,7 @@ subroutine psb_lcsp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,kee
integer(psb_ipk_), intent(in), optional :: root, dupl
logical, intent(in), optional :: keepnum,keeploc
type(psb_lc_coo_sparse_mat) :: loc_coo, glob_coo
type(psb_lc_coo_sparse_mat), target :: loc_coo, glob_coo
integer(psb_lpk_) :: nrg, ncg, nzg
integer(psb_ipk_) :: err_act, dupl_
integer(psb_ipk_) :: ip,naggrm1,naggrp1, i, j, k, nzl
@ -337,27 +339,27 @@ subroutine psb_lcsp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,kee
enddo
ndx = nzbr(me+1)
if (root_ == -1) then
call mpi_allgatherv(loc_coo%val,ndx,psb_mpi_c_spk_,&
call mpi_allgatherv((loc_coo%val),ndx,psb_mpi_c_spk_,&
& glob_coo%val,nzbr,idisp,&
& psb_mpi_c_spk_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_allgatherv(loc_coo%ia,ndx,psb_mpi_lpk_,&
& mpi_allgatherv((loc_coo%ia),ndx,psb_mpi_lpk_,&
& glob_coo%ia,nzbr,idisp,&
& psb_mpi_lpk_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_allgatherv(loc_coo%ja,ndx,psb_mpi_lpk_,&
& mpi_allgatherv((loc_coo%ja),ndx,psb_mpi_lpk_,&
& glob_coo%ja,nzbr,idisp,&
& psb_mpi_lpk_,icomm,minfo)
else
call mpi_gatherv(loc_coo%val,ndx,psb_mpi_c_spk_,&
call mpi_gatherv((loc_coo%val),ndx,psb_mpi_c_spk_,&
& glob_coo%val,nzbr,idisp,&
& psb_mpi_c_spk_,root_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_gatherv(loc_coo%ia,ndx,psb_mpi_lpk_,&
& mpi_gatherv((loc_coo%ia),ndx,psb_mpi_lpk_,&
& glob_coo%ia,nzbr,idisp,&
& psb_mpi_lpk_,root_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_gatherv(loc_coo%ja,ndx,psb_mpi_lpk_,&
& mpi_gatherv((loc_coo%ja),ndx,psb_mpi_lpk_,&
& glob_coo%ja,nzbr,idisp,&
& psb_mpi_lpk_,root_,icomm,minfo)
end if
@ -369,7 +371,7 @@ subroutine psb_lcsp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,kee
call loc_coo%free()
!
! Is the code below safe? For very large cases
! the indices in glob_coo will overflow. But then,
! the indices in glob_coo will overflow. But then,
! for very large cases it does not make sense to
! gather the matrix on a single process anyway...
!
@ -403,11 +405,12 @@ subroutine psb_lclcsp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,k
use psb_penv_mod
use psb_mat_mod
use psb_tools_mod
#ifdef MPI_MOD
use iso_c_binding
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
type(psb_lcspmat_type), intent(inout) :: loca
@ -417,7 +420,7 @@ subroutine psb_lclcsp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,k
integer(psb_ipk_), intent(in), optional :: root, dupl
logical, intent(in), optional :: keepnum,keeploc
type(psb_lc_coo_sparse_mat) :: loc_coo, glob_coo
type(psb_lc_coo_sparse_mat), target :: loc_coo, glob_coo
integer(psb_lpk_) :: nrg, ncg, nzg
integer(psb_ipk_) :: err_act, dupl_
integer(psb_lpk_) :: ip,naggrm1,naggrp1, i, j, k, nzl
@ -507,27 +510,27 @@ subroutine psb_lclcsp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,k
ndx = nzbr(me+1)
if (root_ == -1) then
call mpi_allgatherv(loc_coo%val,ndx,psb_mpi_c_spk_,&
call mpi_allgatherv((loc_coo%val),ndx,psb_mpi_c_spk_,&
& glob_coo%val,nzbr,idisp,&
& psb_mpi_c_spk_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_allgatherv(loc_coo%ia,ndx,psb_mpi_lpk_,&
& mpi_allgatherv((loc_coo%ia),ndx,psb_mpi_lpk_,&
& glob_coo%ia,nzbr,idisp,&
& psb_mpi_lpk_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_allgatherv(loc_coo%ja,ndx,psb_mpi_lpk_,&
& mpi_allgatherv((loc_coo%ja),ndx,psb_mpi_lpk_,&
& glob_coo%ja,nzbr,idisp,&
& psb_mpi_lpk_,icomm,minfo)
else
call mpi_gatherv(loc_coo%val,ndx,psb_mpi_c_spk_,&
call mpi_gatherv((loc_coo%val),ndx,psb_mpi_c_spk_,&
& glob_coo%val,nzbr,idisp,&
& psb_mpi_c_spk_,root_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_gatherv(loc_coo%ia,ndx,psb_mpi_lpk_,&
& mpi_gatherv((loc_coo%ia),ndx,psb_mpi_lpk_,&
& glob_coo%ia,nzbr,idisp,&
& psb_mpi_lpk_,root_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_gatherv(loc_coo%ja,ndx,psb_mpi_lpk_,&
& mpi_gatherv((loc_coo%ja),ndx,psb_mpi_lpk_,&
& glob_coo%ja,nzbr,idisp,&
& psb_mpi_lpk_,root_,icomm,minfo)
end if

@ -58,10 +58,11 @@ subroutine psb_dgather_vect(globx, locx, desc_a, info, iroot)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: np, me, root, iiroot, icomm, myrank, rootrank
integer(psb_mpk_) :: np, me, root, iiroot, icomm, myrank, rootrank, loc_rows
integer(psb_ipk_) :: ierr(5), err_act, jlx, ilx, lda_locx, lda_globx, i
integer(psb_lpk_) :: m, n, k, ilocx, jlocx, idx, iglobx, jglobx
real(psb_dpk_), allocatable :: llocx(:)
integer(psb_mpk_), allocatable :: szs(:)
character(len=20) :: name, ch_err
name='psb_dgatherv'
@ -125,32 +126,34 @@ subroutine psb_dgather_vect(globx, locx, desc_a, info, iroot)
goto 9999
end if
call psb_realloc(m,globx,info)
if (info /= psb_success_) then
info=psb_err_alloc_dealloc_
call psb_errpush(info,name)
goto 9999
end if
globx(:) = dzero
llocx = locx%get_vect()
do i=1,desc_a%get_local_rows()
call psb_loc_to_glob(i,idx,desc_a,info)
globx(idx) = llocx(i)
end do
llocx = locx%get_vect()
! adjust overlapped elements
do i=1, size(desc_a%ovrlap_elem,1)
if (me /= desc_a%ovrlap_elem(i,3)) then
idx = desc_a%ovrlap_elem(i,1)
call psb_loc_to_glob(idx,desc_a,info)
globx(idx) = dzero
llocx(idx) = dzero
end if
end do
call psb_sum(ctxt,globx(1:m),root=root)
allocate(szs(np))
loc_rows = desc_a%get_local_rows()
call psb_gather(ctxt,loc_rows,szs,root=root)
if ((me == root).or.(root == -1)) then
if (sum(szs) /= m) then
info=psb_err_internal_error_
call psb_errpush(info,name)
goto 9999
end if
call psb_realloc(m,globx,info)
if (info /= psb_success_) then
info=psb_err_alloc_dealloc_
call psb_errpush(info,name)
goto 9999
end if
end if
call psb_gatherv(ctxt,llocx(1:loc_rows),globx,szs,root=root)
call psb_erractionrestore(err_act)
return
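The reworked psb_dgather_vect above no longer zero-fills a global-length buffer and reduces it with psb_sum; instead each process reports its local row count, the root checks that the counts add up to the global size m, and a single variable-length gather assembles the result. The bookkeeping can be sketched in plain C (serial, no MPI; ranks are simulated as rows of a table, and the name gatherv_sim is illustrative, not PSBLAS API):

```c
#include <assert.h>
#include <string.h>

/* Simulate the size-exchange + gatherv pattern for np "ranks".
 * szs[p]  : number of local rows owned by rank p (the gathered loc_rows)
 * locx[p] : rank p's local vector
 * globx   : assembled global vector on the root                        */
static int gatherv_sim(int np, const int *szs,
                       const double *const *locx,
                       double *globx, int m)
{
    int displ = 0, total = 0;
    for (int p = 0; p < np; ++p) total += szs[p];
    if (total != m) return -1;          /* matches the sum(szs) /= m check */
    for (int p = 0; p < np; ++p) {      /* displacements are a prefix sum  */
        memcpy(globx + displ, locx[p], szs[p] * sizeof(double));
        displ += szs[p];
    }
    return 0;
}
```

With MPI this corresponds to one gather of the sizes followed by one MPI_Gatherv, replacing a reduction over a length-m buffer on every rank.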

@ -66,8 +66,8 @@ subroutine psb_dhalom(x,desc_a,info,jx,ik,work,tran,mode,data)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: np, me
integer(psb_ipk_) :: err_act, iix, jjx, k, maxk, nrow, imode, i,&
integer(psb_mpk_) :: np, me, k
integer(psb_ipk_) :: err_act, iix, jjx, maxk, nrow, imode, i,&
& liwork,data_, ldx
integer(psb_lpk_) :: m, n, ix, ijx
real(psb_dpk_),pointer :: iwork(:), xp(:,:)

@ -77,8 +77,8 @@ subroutine psb_dovrlm(x,desc_a,info,jx,ik,work,update,mode)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: np, me
integer(psb_ipk_) :: err_act, iix, jjx, nrow, ncol, k, maxk, update_,&
integer(psb_mpk_) :: np, me, k
integer(psb_ipk_) :: err_act, iix, jjx, nrow, ncol, maxk, update_,&
& mode_, liwork, ldx
integer(psb_lpk_) :: m, n, ix, ijx
real(psb_dpk_),pointer :: iwork(:), xp(:,:)

@ -46,11 +46,11 @@
subroutine psb_dscatterm(globx, locx, desc_a, info, root)
use psb_base_mod, psb_protect_name => psb_dscatterm
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -63,7 +63,8 @@ subroutine psb_dscatterm(globx, locx, desc_a, info, root)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: np, me, iroot, icomm, myrank, rootrank, iam, nlr
integer(psb_mpk_) :: np, me, iroot, icomm, myrank, rootrank, iam,&
& nlr, minfo
integer(psb_ipk_) :: ierr(5), err_act, nrow,&
& ilocx, jlocx, lda_locx, lda_globx, lock, globk, k, maxk, &
& col,pos
@ -162,13 +163,13 @@ subroutine psb_dscatterm(globx, locx, desc_a, info, root)
rootrank = psb_get_mpi_rank(ctxt,iroot)
!
! This is potentially unsafe when IPK=8
! But then, IPK=8 is highly experimental anyway.
! This is potentially unsafe when PSB_IPK=8
! But then, PSB_IPK=8 is highly experimental anyway.
!
nlr = nrow
call mpi_gather(nlr,1,psb_mpi_mpk_,all_dim,&
& 1,psb_mpi_mpk_,rootrank,icomm,info)
& 1,psb_mpi_mpk_,rootrank,icomm,minfo)
info = minfo
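The recurring info → minfo edits in these scatter routines all address the same kind mismatch: the MPI interface expects a default-kind (4-byte) integer for its status argument, so when the library is built with 8-byte integers (IPK=8), passing info directly is unsafe, as the nearby comment notes. The fix receives the status in a default-kind minfo and assigns it to info afterwards. A C analogue of the pattern (callee_sets_status stands in for an MPI call; names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* The library-wide error variable may be 8 bytes wide, but the callee
 * writes through a plain int pointer. Receiving into a plain int and
 * widening afterwards avoids writing past (or short of) the variable. */
static void callee_sets_status(int *status) { *status = 0; }

static int64_t checked_call(void)
{
    int minfo;                  /* matches the callee's expected width */
    callee_sets_status(&minfo);
    return (int64_t)minfo;      /* widen into the library's error kind */
}
```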
if (iam == iroot) then
displ(1)=0
do i=2,np
@ -195,8 +196,8 @@ subroutine psb_dscatterm(globx, locx, desc_a, info, root)
call mpi_gatherv(ltg,nlr,&
& psb_mpi_lpk_,l_t_g_all,all_dim,&
& displ,psb_mpi_lpk_,rootrank,icomm,info)
& displ,psb_mpi_lpk_,rootrank,icomm,minfo)
info = minfo
do col=1, k
! prepare vector to scatter
if(iam == iroot) then
@ -211,9 +212,9 @@ subroutine psb_dscatterm(globx, locx, desc_a, info, root)
! scatter
call mpi_scatterv(scatterv,all_dim,displ,&
& psb_mpi_r_dpk_,locx(1,col),nrow,&
& psb_mpi_r_dpk_,rootrank,icomm,info)
& psb_mpi_r_dpk_,locx(1,col),nlr,&
& psb_mpi_r_dpk_,rootrank,icomm,minfo)
info = minfo
end do
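The nrow → nlr change in the receive count above matters because MPI_Scatterv requires each rank's recvcount to be large enough for the sendcounts entry the root uses for that rank; here nlr is the count the root actually gathered from this process. A serial model of that contract (scatterv_sim is an illustrative name, not PSBLAS or MPI API):

```c
#include <assert.h>
#include <string.h>

/* Serial model of MPI_Scatterv's count contract: rank r receives exactly
 * sendcounts[r] items; a recvcount smaller than that is an error (with
 * real MPI, a truncation error at runtime).                             */
static int scatterv_sim(const double *sendbuf, const int *sendcounts,
                        const int *displs, int rank,
                        double *recvbuf, int recvcount)
{
    if (recvcount < sendcounts[rank]) return -1;   /* would truncate */
    memcpy(recvbuf, sendbuf + displs[rank],
           sendcounts[rank] * sizeof(double));
    return sendcounts[rank];
}
```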
deallocate(l_t_g_all, scatterv,stat=info)
@ -291,11 +292,11 @@ end subroutine psb_dscatterm
!
subroutine psb_dscatterv(globx, locx, desc_a, info, root)
use psb_base_mod, psb_protect_name => psb_dscatterv
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -308,7 +309,7 @@ subroutine psb_dscatterv(globx, locx, desc_a, info, root)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: np, iam, iroot, iiroot, icomm, myrank, rootrank, nlr
integer(psb_mpk_) :: np, iam, iroot, iiroot, icomm, myrank, rootrank, nlr, minfo
integer(psb_ipk_) :: ierr(5), err_act, nrow,&
& ilocx, jlocx, lda_locx, lda_globx, k, pos, ilx, jlx
integer(psb_lpk_) :: m, n, i, j, idx, iglobx, jglobx
@ -398,13 +399,13 @@ subroutine psb_dscatterv(globx, locx, desc_a, info, root)
else
rootrank = psb_get_mpi_rank(ctxt,iroot)
!
! This is potentially unsafe when IPK=8
! But then, IPK=8 is highly experimental anyway.
! This is potentially unsafe when PSB_IPK=8
! But then, PSB_IPK=8 is highly experimental anyway.
!
nlr = nrow
call mpi_gather(nlr,1,psb_mpi_mpk_,all_dim,&
& 1,psb_mpi_mpk_,rootrank,icomm,info)
& 1,psb_mpi_mpk_,rootrank,icomm,minfo)
info = minfo
if(iam == iroot) then
displ(1)=0
do i=2,np
@ -436,8 +437,8 @@ subroutine psb_dscatterv(globx, locx, desc_a, info, root)
call mpi_gatherv(ltg,nlr,&
& psb_mpi_lpk_,l_t_g_all,all_dim,&
& displ,psb_mpi_lpk_,rootrank,icomm,info)
& displ,psb_mpi_lpk_,rootrank,icomm,minfo)
info = minfo
! prepare vector to scatter
if (iam == iroot) then
do i=1,np
@ -451,9 +452,9 @@ subroutine psb_dscatterv(globx, locx, desc_a, info, root)
end if
call mpi_scatterv(scatterv,all_dim,displ,&
& psb_mpi_r_dpk_,locx,nrow,&
& psb_mpi_r_dpk_,rootrank,icomm,info)
& psb_mpi_r_dpk_,locx,nlr,&
& psb_mpi_r_dpk_,rootrank,icomm,minfo)
info = minfo
deallocate(l_t_g_all, scatterv,stat=info)
if(info /= psb_success_) then
info=psb_err_from_subroutine_

@ -33,8 +33,8 @@
!
! Gathers a sparse matrix onto a single process.
! Two variants:
! 1. Gathers to PSB_d_SPARSE_MAT (i.e. to matrix with IPK_ indices)
! 2. Gathers to PSB_ld_SPARSE_MAT (i.e. to matrix with LPK_ indices)
! 1. Gathers to PSB_d_SPARSE_MAT (i.e. to matrix with PSB_IPK_ indices)
! 2. Gathers to PSB_ld_SPARSE_MAT (i.e. to matrix with PSB_LPK_ indices)
!
! Note: this function uses MPI_ALLGATHERV. At this time, the size of the
! resulting matrix must be within the range of 4 bytes because of the
@ -48,11 +48,12 @@ subroutine psb_dsp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,keep
use psb_penv_mod
use psb_mat_mod
use psb_tools_mod
#ifdef MPI_MOD
use iso_c_binding
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
type(psb_dspmat_type), intent(inout) :: loca
@ -62,7 +63,7 @@ subroutine psb_dsp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,keep
integer(psb_ipk_), intent(in), optional :: root, dupl
logical, intent(in), optional :: keepnum,keeploc
type(psb_d_coo_sparse_mat) :: loc_coo, glob_coo
type(psb_d_coo_sparse_mat), target :: loc_coo, glob_coo
integer(psb_ipk_) :: nrg, ncg, nzg, nzl
integer(psb_ipk_) :: err_act, dupl_
integer(psb_ipk_) :: ip,naggrm1,naggrp1, i, j, k
@ -156,27 +157,27 @@ subroutine psb_dsp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,keep
enddo
ndx = nzbr(me+1)
if (root_ == -1) then
call mpi_allgatherv(loc_coo%val,ndx,psb_mpi_r_dpk_,&
call mpi_allgatherv((loc_coo%val),ndx,psb_mpi_r_dpk_,&
& glob_coo%val,nzbr,idisp,&
& psb_mpi_r_dpk_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_allgatherv(locia,ndx,psb_mpi_lpk_,&
& mpi_allgatherv((locia),ndx,psb_mpi_lpk_,&
& glbia,nzbr,idisp,&
& psb_mpi_lpk_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_allgatherv(locja,ndx,psb_mpi_lpk_,&
& mpi_allgatherv((locja),ndx,psb_mpi_lpk_,&
& glbja,nzbr,idisp,&
& psb_mpi_lpk_,icomm,minfo)
else
call mpi_gatherv(loc_coo%val,ndx,psb_mpi_r_dpk_,&
call mpi_gatherv((loc_coo%val),ndx,psb_mpi_r_dpk_,&
& glob_coo%val,nzbr,idisp,&
& psb_mpi_r_dpk_,root_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_gatherv(locia,ndx,psb_mpi_lpk_,&
& mpi_gatherv((locia),ndx,psb_mpi_lpk_,&
& glbia,nzbr,idisp,&
& psb_mpi_lpk_,root_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_gatherv(locja,ndx,psb_mpi_lpk_,&
& mpi_gatherv((locja),ndx,psb_mpi_lpk_,&
& glbja,nzbr,idisp,&
& psb_mpi_lpk_,root_,icomm,minfo)
@ -231,11 +232,12 @@ subroutine psb_ldsp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,kee
use psb_penv_mod
use psb_mat_mod
use psb_tools_mod
#ifdef MPI_MOD
use iso_c_binding
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
type(psb_dspmat_type), intent(inout) :: loca
@ -245,7 +247,7 @@ subroutine psb_ldsp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,kee
integer(psb_ipk_), intent(in), optional :: root, dupl
logical, intent(in), optional :: keepnum,keeploc
type(psb_ld_coo_sparse_mat) :: loc_coo, glob_coo
type(psb_ld_coo_sparse_mat), target :: loc_coo, glob_coo
integer(psb_lpk_) :: nrg, ncg, nzg
integer(psb_ipk_) :: err_act, dupl_
integer(psb_ipk_) :: ip,naggrm1,naggrp1, i, j, k, nzl
@ -337,27 +339,27 @@ subroutine psb_ldsp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,kee
enddo
ndx = nzbr(me+1)
if (root_ == -1) then
call mpi_allgatherv(loc_coo%val,ndx,psb_mpi_r_dpk_,&
call mpi_allgatherv((loc_coo%val),ndx,psb_mpi_r_dpk_,&
& glob_coo%val,nzbr,idisp,&
& psb_mpi_r_dpk_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_allgatherv(loc_coo%ia,ndx,psb_mpi_lpk_,&
& mpi_allgatherv((loc_coo%ia),ndx,psb_mpi_lpk_,&
& glob_coo%ia,nzbr,idisp,&
& psb_mpi_lpk_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_allgatherv(loc_coo%ja,ndx,psb_mpi_lpk_,&
& mpi_allgatherv((loc_coo%ja),ndx,psb_mpi_lpk_,&
& glob_coo%ja,nzbr,idisp,&
& psb_mpi_lpk_,icomm,minfo)
else
call mpi_gatherv(loc_coo%val,ndx,psb_mpi_r_dpk_,&
call mpi_gatherv((loc_coo%val),ndx,psb_mpi_r_dpk_,&
& glob_coo%val,nzbr,idisp,&
& psb_mpi_r_dpk_,root_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_gatherv(loc_coo%ia,ndx,psb_mpi_lpk_,&
& mpi_gatherv((loc_coo%ia),ndx,psb_mpi_lpk_,&
& glob_coo%ia,nzbr,idisp,&
& psb_mpi_lpk_,root_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_gatherv(loc_coo%ja,ndx,psb_mpi_lpk_,&
& mpi_gatherv((loc_coo%ja),ndx,psb_mpi_lpk_,&
& glob_coo%ja,nzbr,idisp,&
& psb_mpi_lpk_,root_,icomm,minfo)
end if
@ -369,7 +371,7 @@ subroutine psb_ldsp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,kee
call loc_coo%free()
!
! Is the code below safe? For very large cases
! the indices in glob_coo will overflow. But then,
! the indices in glob_coo will overflow. But then,
! for very large cases it does not make sense to
! gather the matrix on a single process anyway...
!
@ -403,11 +405,12 @@ subroutine psb_ldldsp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,k
use psb_penv_mod
use psb_mat_mod
use psb_tools_mod
#ifdef MPI_MOD
use iso_c_binding
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
type(psb_ldspmat_type), intent(inout) :: loca
@ -417,7 +420,7 @@ subroutine psb_ldldsp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,k
integer(psb_ipk_), intent(in), optional :: root, dupl
logical, intent(in), optional :: keepnum,keeploc
type(psb_ld_coo_sparse_mat) :: loc_coo, glob_coo
type(psb_ld_coo_sparse_mat), target :: loc_coo, glob_coo
integer(psb_lpk_) :: nrg, ncg, nzg
integer(psb_ipk_) :: err_act, dupl_
integer(psb_lpk_) :: ip,naggrm1,naggrp1, i, j, k, nzl
@ -507,27 +510,27 @@ subroutine psb_ldldsp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,k
ndx = nzbr(me+1)
if (root_ == -1) then
call mpi_allgatherv(loc_coo%val,ndx,psb_mpi_r_dpk_,&
call mpi_allgatherv((loc_coo%val),ndx,psb_mpi_r_dpk_,&
& glob_coo%val,nzbr,idisp,&
& psb_mpi_r_dpk_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_allgatherv(loc_coo%ia,ndx,psb_mpi_lpk_,&
& mpi_allgatherv((loc_coo%ia),ndx,psb_mpi_lpk_,&
& glob_coo%ia,nzbr,idisp,&
& psb_mpi_lpk_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_allgatherv(loc_coo%ja,ndx,psb_mpi_lpk_,&
& mpi_allgatherv((loc_coo%ja),ndx,psb_mpi_lpk_,&
& glob_coo%ja,nzbr,idisp,&
& psb_mpi_lpk_,icomm,minfo)
else
call mpi_gatherv(loc_coo%val,ndx,psb_mpi_r_dpk_,&
call mpi_gatherv((loc_coo%val),ndx,psb_mpi_r_dpk_,&
& glob_coo%val,nzbr,idisp,&
& psb_mpi_r_dpk_,root_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_gatherv(loc_coo%ia,ndx,psb_mpi_lpk_,&
& mpi_gatherv((loc_coo%ia),ndx,psb_mpi_lpk_,&
& glob_coo%ia,nzbr,idisp,&
& psb_mpi_lpk_,root_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_gatherv(loc_coo%ja,ndx,psb_mpi_lpk_,&
& mpi_gatherv((loc_coo%ja),ndx,psb_mpi_lpk_,&
& glob_coo%ja,nzbr,idisp,&
& psb_mpi_lpk_,root_,icomm,minfo)
end if
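The header comment in these allgather routines warns that the gathered matrix size "must be within the range of 4 bytes": MPI count and displacement arguments are C ints, so a global nonzero count beyond INT_MAX cannot be described to MPI_Allgatherv. A guard for that limit can be sketched as follows (illustrative, not a PSBLAS routine):

```c
#include <assert.h>
#include <limits.h>

/* MPI count/displacement arguments are C ints (4 bytes), so a gather
 * whose total nonzero count nzg exceeds INT_MAX cannot be expressed
 * with the classic MPI_Allgatherv interface.                          */
static int fits_in_mpi_count(long long nzg)
{
    return nzg <= (long long)INT_MAX;
}
```

MPI-4 adds large-count variants (e.g. MPI_Allgatherv_c with MPI_Count arguments) that lift this limit, but the code here targets the classic interface.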

@ -66,8 +66,8 @@ subroutine psb_ehalom(x,desc_a,info,jx,ik,work,tran,mode,data)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: np, me
integer(psb_ipk_) :: err_act, iix, jjx, k, maxk, nrow, imode, i,&
integer(psb_mpk_) :: np, me, k
integer(psb_ipk_) :: err_act, iix, jjx, maxk, nrow, imode, i,&
& liwork,data_, ldx
integer(psb_lpk_) :: m, n, ix, ijx
integer(psb_epk_),pointer :: iwork(:), xp(:,:)

@ -77,8 +77,8 @@ subroutine psb_eovrlm(x,desc_a,info,jx,ik,work,update,mode)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: np, me
integer(psb_ipk_) :: err_act, iix, jjx, nrow, ncol, k, maxk, update_,&
integer(psb_mpk_) :: np, me, k
integer(psb_ipk_) :: err_act, iix, jjx, nrow, ncol, maxk, update_,&
& mode_, liwork, ldx
integer(psb_lpk_) :: m, n, ix, ijx
integer(psb_epk_),pointer :: iwork(:), xp(:,:)

@ -46,11 +46,11 @@
subroutine psb_escatterm(globx, locx, desc_a, info, root)
use psb_base_mod, psb_protect_name => psb_escatterm
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -63,7 +63,8 @@ subroutine psb_escatterm(globx, locx, desc_a, info, root)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: np, me, iroot, icomm, myrank, rootrank, iam, nlr
integer(psb_mpk_) :: np, me, iroot, icomm, myrank, rootrank, iam,&
& nlr, minfo
integer(psb_ipk_) :: ierr(5), err_act, nrow,&
& ilocx, jlocx, lda_locx, lda_globx, lock, globk, k, maxk, &
& col,pos
@ -162,13 +163,13 @@ subroutine psb_escatterm(globx, locx, desc_a, info, root)
rootrank = psb_get_mpi_rank(ctxt,iroot)
!
! This is potentially unsafe when IPK=8
! But then, IPK=8 is highly experimental anyway.
! This is potentially unsafe when PSB_IPK=8
! But then, PSB_IPK=8 is highly experimental anyway.
!
nlr = nrow
call mpi_gather(nlr,1,psb_mpi_mpk_,all_dim,&
& 1,psb_mpi_mpk_,rootrank,icomm,info)
& 1,psb_mpi_mpk_,rootrank,icomm,minfo)
info = minfo
if (iam == iroot) then
displ(1)=0
do i=2,np
@ -195,8 +196,8 @@ subroutine psb_escatterm(globx, locx, desc_a, info, root)
call mpi_gatherv(ltg,nlr,&
& psb_mpi_lpk_,l_t_g_all,all_dim,&
& displ,psb_mpi_lpk_,rootrank,icomm,info)
& displ,psb_mpi_lpk_,rootrank,icomm,minfo)
info = minfo
do col=1, k
! prepare vector to scatter
if(iam == iroot) then
@ -211,9 +212,9 @@ subroutine psb_escatterm(globx, locx, desc_a, info, root)
! scatter
call mpi_scatterv(scatterv,all_dim,displ,&
& psb_mpi_epk_,locx(1,col),nrow,&
& psb_mpi_epk_,rootrank,icomm,info)
& psb_mpi_epk_,locx(1,col),nlr,&
& psb_mpi_epk_,rootrank,icomm,minfo)
info = minfo
end do
deallocate(l_t_g_all, scatterv,stat=info)
@ -291,11 +292,11 @@ end subroutine psb_escatterm
!
subroutine psb_escatterv(globx, locx, desc_a, info, root)
use psb_base_mod, psb_protect_name => psb_escatterv
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -308,7 +309,7 @@ subroutine psb_escatterv(globx, locx, desc_a, info, root)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: np, iam, iroot, iiroot, icomm, myrank, rootrank, nlr
integer(psb_mpk_) :: np, iam, iroot, iiroot, icomm, myrank, rootrank, nlr, minfo
integer(psb_ipk_) :: ierr(5), err_act, nrow,&
& ilocx, jlocx, lda_locx, lda_globx, k, pos, ilx, jlx
integer(psb_lpk_) :: m, n, i, j, idx, iglobx, jglobx
@ -398,13 +399,13 @@ subroutine psb_escatterv(globx, locx, desc_a, info, root)
else
rootrank = psb_get_mpi_rank(ctxt,iroot)
!
! This is potentially unsafe when IPK=8
! But then, IPK=8 is highly experimental anyway.
! This is potentially unsafe when PSB_IPK=8
! But then, PSB_IPK=8 is highly experimental anyway.
!
nlr = nrow
call mpi_gather(nlr,1,psb_mpi_mpk_,all_dim,&
& 1,psb_mpi_mpk_,rootrank,icomm,info)
& 1,psb_mpi_mpk_,rootrank,icomm,minfo)
info = minfo
if(iam == iroot) then
displ(1)=0
do i=2,np
@ -436,8 +437,8 @@ subroutine psb_escatterv(globx, locx, desc_a, info, root)
call mpi_gatherv(ltg,nlr,&
& psb_mpi_lpk_,l_t_g_all,all_dim,&
& displ,psb_mpi_lpk_,rootrank,icomm,info)
& displ,psb_mpi_lpk_,rootrank,icomm,minfo)
info = minfo
! prepare vector to scatter
if (iam == iroot) then
do i=1,np
@ -451,9 +452,9 @@ subroutine psb_escatterv(globx, locx, desc_a, info, root)
end if
call mpi_scatterv(scatterv,all_dim,displ,&
& psb_mpi_epk_,locx,nrow,&
& psb_mpi_epk_,rootrank,icomm,info)
& psb_mpi_epk_,locx,nlr,&
& psb_mpi_epk_,rootrank,icomm,minfo)
info = minfo
deallocate(l_t_g_all, scatterv,stat=info)
if(info /= psb_success_) then
info=psb_err_from_subroutine_

@ -66,8 +66,8 @@ subroutine psb_i2halom(x,desc_a,info,jx,ik,work,tran,mode,data)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: np, me
integer(psb_ipk_) :: err_act, iix, jjx, k, maxk, nrow, imode, i,&
integer(psb_mpk_) :: np, me, k
integer(psb_ipk_) :: err_act, iix, jjx, maxk, nrow, imode, i,&
& liwork,data_, ldx
integer(psb_lpk_) :: m, n, ix, ijx
integer(psb_i2pk_),pointer :: iwork(:), xp(:,:)

@ -77,8 +77,8 @@ subroutine psb_i2ovrlm(x,desc_a,info,jx,ik,work,update,mode)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: np, me
integer(psb_ipk_) :: err_act, iix, jjx, nrow, ncol, k, maxk, update_,&
integer(psb_mpk_) :: np, me, k
integer(psb_ipk_) :: err_act, iix, jjx, nrow, ncol, maxk, update_,&
& mode_, liwork, ldx
integer(psb_lpk_) :: m, n, ix, ijx
integer(psb_i2pk_),pointer :: iwork(:), xp(:,:)

@ -46,11 +46,11 @@
subroutine psb_i2scatterm(globx, locx, desc_a, info, root)
use psb_base_mod, psb_protect_name => psb_i2scatterm
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -63,7 +63,8 @@ subroutine psb_i2scatterm(globx, locx, desc_a, info, root)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: np, me, iroot, icomm, myrank, rootrank, iam, nlr
integer(psb_mpk_) :: np, me, iroot, icomm, myrank, rootrank, iam,&
& nlr, minfo
integer(psb_ipk_) :: ierr(5), err_act, nrow,&
& ilocx, jlocx, lda_locx, lda_globx, lock, globk, k, maxk, &
& col,pos
@ -162,13 +163,13 @@ subroutine psb_i2scatterm(globx, locx, desc_a, info, root)
rootrank = psb_get_mpi_rank(ctxt,iroot)
!
! This is potentially unsafe when IPK=8
! But then, IPK=8 is highly experimental anyway.
! This is potentially unsafe when PSB_IPK=8
! But then, PSB_IPK=8 is highly experimental anyway.
!
nlr = nrow
call mpi_gather(nlr,1,psb_mpi_mpk_,all_dim,&
& 1,psb_mpi_mpk_,rootrank,icomm,info)
& 1,psb_mpi_mpk_,rootrank,icomm,minfo)
info = minfo
if (iam == iroot) then
displ(1)=0
do i=2,np
@ -195,8 +196,8 @@ subroutine psb_i2scatterm(globx, locx, desc_a, info, root)
call mpi_gatherv(ltg,nlr,&
& psb_mpi_lpk_,l_t_g_all,all_dim,&
& displ,psb_mpi_lpk_,rootrank,icomm,info)
& displ,psb_mpi_lpk_,rootrank,icomm,minfo)
info = minfo
do col=1, k
! prepare vector to scatter
if(iam == iroot) then
@ -211,9 +212,9 @@ subroutine psb_i2scatterm(globx, locx, desc_a, info, root)
! scatter
call mpi_scatterv(scatterv,all_dim,displ,&
& psb_mpi_i2pk_,locx(1,col),nrow,&
& psb_mpi_i2pk_,rootrank,icomm,info)
& psb_mpi_i2pk_,locx(1,col),nlr,&
& psb_mpi_i2pk_,rootrank,icomm,minfo)
info = minfo
end do
deallocate(l_t_g_all, scatterv,stat=info)
@ -291,11 +292,11 @@ end subroutine psb_i2scatterm
!
subroutine psb_i2scatterv(globx, locx, desc_a, info, root)
use psb_base_mod, psb_protect_name => psb_i2scatterv
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -308,7 +309,7 @@ subroutine psb_i2scatterv(globx, locx, desc_a, info, root)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: np, iam, iroot, iiroot, icomm, myrank, rootrank, nlr
integer(psb_mpk_) :: np, iam, iroot, iiroot, icomm, myrank, rootrank, nlr, minfo
integer(psb_ipk_) :: ierr(5), err_act, nrow,&
& ilocx, jlocx, lda_locx, lda_globx, k, pos, ilx, jlx
integer(psb_lpk_) :: m, n, i, j, idx, iglobx, jglobx
@ -398,13 +399,13 @@ subroutine psb_i2scatterv(globx, locx, desc_a, info, root)
else
rootrank = psb_get_mpi_rank(ctxt,iroot)
!
! This is potentially unsafe when IPK=8
! But then, IPK=8 is highly experimental anyway.
! This is potentially unsafe when PSB_IPK=8
! But then, PSB_IPK=8 is highly experimental anyway.
!
nlr = nrow
call mpi_gather(nlr,1,psb_mpi_mpk_,all_dim,&
& 1,psb_mpi_mpk_,rootrank,icomm,info)
& 1,psb_mpi_mpk_,rootrank,icomm,minfo)
info = minfo
if(iam == iroot) then
displ(1)=0
do i=2,np
@ -436,8 +437,8 @@ subroutine psb_i2scatterv(globx, locx, desc_a, info, root)
call mpi_gatherv(ltg,nlr,&
& psb_mpi_lpk_,l_t_g_all,all_dim,&
& displ,psb_mpi_lpk_,rootrank,icomm,info)
& displ,psb_mpi_lpk_,rootrank,icomm,minfo)
info = minfo
! prepare vector to scatter
if (iam == iroot) then
do i=1,np
@ -451,9 +452,9 @@ subroutine psb_i2scatterv(globx, locx, desc_a, info, root)
end if
call mpi_scatterv(scatterv,all_dim,displ,&
& psb_mpi_i2pk_,locx,nrow,&
& psb_mpi_i2pk_,rootrank,icomm,info)
& psb_mpi_i2pk_,locx,nlr,&
& psb_mpi_i2pk_,rootrank,icomm,minfo)
info = minfo
deallocate(l_t_g_all, scatterv,stat=info)
if(info /= psb_success_) then
info=psb_err_from_subroutine_

@ -58,10 +58,11 @@ subroutine psb_igather_vect(globx, locx, desc_a, info, iroot)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: np, me, root, iiroot, icomm, myrank, rootrank
integer(psb_mpk_) :: np, me, root, iiroot, icomm, myrank, rootrank, loc_rows
integer(psb_ipk_) :: ierr(5), err_act, jlx, ilx, lda_locx, lda_globx, i
integer(psb_lpk_) :: m, n, k, ilocx, jlocx, idx, iglobx, jglobx
integer(psb_ipk_), allocatable :: llocx(:)
integer(psb_mpk_), allocatable :: szs(:)
character(len=20) :: name, ch_err
name='psb_igatherv'
@ -125,32 +126,34 @@ subroutine psb_igather_vect(globx, locx, desc_a, info, iroot)
goto 9999
end if
call psb_realloc(m,globx,info)
if (info /= psb_success_) then
info=psb_err_alloc_dealloc_
call psb_errpush(info,name)
goto 9999
end if
globx(:) = izero
llocx = locx%get_vect()
do i=1,desc_a%get_local_rows()
call psb_loc_to_glob(i,idx,desc_a,info)
globx(idx) = llocx(i)
end do
llocx = locx%get_vect()
! adjust overlapped elements
do i=1, size(desc_a%ovrlap_elem,1)
if (me /= desc_a%ovrlap_elem(i,3)) then
idx = desc_a%ovrlap_elem(i,1)
call psb_loc_to_glob(idx,desc_a,info)
globx(idx) = izero
llocx(idx) = izero
end if
end do
call psb_sum(ctxt,globx(1:m),root=root)
allocate(szs(np))
loc_rows = desc_a%get_local_rows()
call psb_gather(ctxt,loc_rows,szs,root=root)
if ((me == root).or.(root == -1)) then
if (sum(szs) /= m) then
info=psb_err_internal_error_
call psb_errpush(info,name)
goto 9999
end if
call psb_realloc(m,globx,info)
if (info /= psb_success_) then
info=psb_err_alloc_dealloc_
call psb_errpush(info,name)
goto 9999
end if
end if
call psb_gatherv(ctxt,llocx(1:loc_rows),globx,szs,root=root)
call psb_erractionrestore(err_act)
return

@ -33,8 +33,8 @@
!
! Gathers a sparse matrix onto a single process.
! Two variants:
! 1. Gathers to PSB_i_SPARSE_MAT (i.e. to matrix with IPK_ indices)
! 2. Gathers to PSB_@LX@_SPARSE_MAT (i.e. to matrix with LPK_ indices)
! 1. Gathers to PSB_i_SPARSE_MAT (i.e. to matrix with PSB_IPK_ indices)
! 2. Gathers to PSB_@LX@_SPARSE_MAT (i.e. to matrix with PSB_LPK_ indices)
!
! Note: this function uses MPI_ALLGATHERV. At this time, the size of the
! resulting matrix must be within the range of 4 bytes because of the
@ -48,11 +48,12 @@ subroutine psb_isp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,keep
use psb_penv_mod
use psb_mat_mod
use psb_tools_mod
#ifdef MPI_MOD
use iso_c_binding
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
type(psb_ispmat_type), intent(inout) :: loca
@ -62,7 +63,7 @@ subroutine psb_isp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,keep
integer(psb_ipk_), intent(in), optional :: root, dupl
logical, intent(in), optional :: keepnum,keeploc
type(psb_i_coo_sparse_mat) :: loc_coo, glob_coo
type(psb_i_coo_sparse_mat), target :: loc_coo, glob_coo
integer(psb_ipk_) :: nrg, ncg, nzg, nzl
integer(psb_ipk_) :: err_act, dupl_
integer(psb_ipk_) :: ip,naggrm1,naggrp1, i, j, k
@ -156,27 +157,27 @@ subroutine psb_isp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,keep
enddo
ndx = nzbr(me+1)
if (root_ == -1) then
call mpi_allgatherv(loc_coo%val,ndx,psb_mpi_ipk_,&
call mpi_allgatherv((loc_coo%val),ndx,psb_mpi_ipk_,&
& glob_coo%val,nzbr,idisp,&
& psb_mpi_ipk_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_allgatherv(locia,ndx,psb_mpi_lpk_,&
& mpi_allgatherv((locia),ndx,psb_mpi_lpk_,&
& glbia,nzbr,idisp,&
& psb_mpi_lpk_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_allgatherv(locja,ndx,psb_mpi_lpk_,&
& mpi_allgatherv((locja),ndx,psb_mpi_lpk_,&
& glbja,nzbr,idisp,&
& psb_mpi_lpk_,icomm,minfo)
else
call mpi_gatherv(loc_coo%val,ndx,psb_mpi_ipk_,&
call mpi_gatherv((loc_coo%val),ndx,psb_mpi_ipk_,&
& glob_coo%val,nzbr,idisp,&
& psb_mpi_ipk_,root_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_gatherv(locia,ndx,psb_mpi_lpk_,&
& mpi_gatherv((locia),ndx,psb_mpi_lpk_,&
& glbia,nzbr,idisp,&
& psb_mpi_lpk_,root_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_gatherv(locja,ndx,psb_mpi_lpk_,&
& mpi_gatherv((locja),ndx,psb_mpi_lpk_,&
& glbja,nzbr,idisp,&
& psb_mpi_lpk_,root_,icomm,minfo)
@ -231,11 +232,12 @@ subroutine psb_@LX@sp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,k
use psb_penv_mod
use psb_mat_mod
use psb_tools_mod
#ifdef MPI_MOD
use iso_c_binding
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
type(psb_ispmat_type), intent(inout) :: loca
@ -245,7 +247,7 @@ subroutine psb_@LX@sp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,k
integer(psb_ipk_), intent(in), optional :: root, dupl
logical, intent(in), optional :: keepnum,keeploc
type(psb_@LX@_coo_sparse_mat) :: loc_coo, glob_coo
type(psb_@LX@_coo_sparse_mat), target :: loc_coo, glob_coo
integer(psb_lpk_) :: nrg, ncg, nzg
integer(psb_ipk_) :: err_act, dupl_
integer(psb_ipk_) :: ip,naggrm1,naggrp1, i, j, k, nzl
@ -337,27 +339,27 @@ subroutine psb_@LX@sp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,k
enddo
ndx = nzbr(me+1)
if (root_ == -1) then
call mpi_allgatherv(loc_coo%val,ndx,psb_mpi_ipk_,&
call mpi_allgatherv((loc_coo%val),ndx,psb_mpi_ipk_,&
& glob_coo%val,nzbr,idisp,&
& psb_mpi_ipk_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_allgatherv(loc_coo%ia,ndx,psb_mpi_lpk_,&
& mpi_allgatherv((loc_coo%ia),ndx,psb_mpi_lpk_,&
& glob_coo%ia,nzbr,idisp,&
& psb_mpi_lpk_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_allgatherv(loc_coo%ja,ndx,psb_mpi_lpk_,&
& mpi_allgatherv((loc_coo%ja),ndx,psb_mpi_lpk_,&
& glob_coo%ja,nzbr,idisp,&
& psb_mpi_lpk_,icomm,minfo)
else
call mpi_gatherv(loc_coo%val,ndx,psb_mpi_ipk_,&
call mpi_gatherv((loc_coo%val),ndx,psb_mpi_ipk_,&
& glob_coo%val,nzbr,idisp,&
& psb_mpi_ipk_,root_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_gatherv(loc_coo%ia,ndx,psb_mpi_lpk_,&
& mpi_gatherv((loc_coo%ia),ndx,psb_mpi_lpk_,&
& glob_coo%ia,nzbr,idisp,&
& psb_mpi_lpk_,root_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_gatherv(loc_coo%ja,ndx,psb_mpi_lpk_,&
& mpi_gatherv((loc_coo%ja),ndx,psb_mpi_lpk_,&
& glob_coo%ja,nzbr,idisp,&
& psb_mpi_lpk_,root_,icomm,minfo)
end if
@ -369,7 +371,7 @@ subroutine psb_@LX@sp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,k
call loc_coo%free()
!
! Is the code below safe? For very large cases
! the indices in glob_coo will overflow. But then,
! the indices in glob_coo will overflow. But then,
! for very large cases it does not make sense to
! gather the matrix on a single process anyway...
!
@ -403,11 +405,12 @@ subroutine psb_@LX@@LX@sp_allgather(globa, loca, desc_a, info, root, dupl,keepn
use psb_penv_mod
use psb_mat_mod
use psb_tools_mod
#ifdef MPI_MOD
use iso_c_binding
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
type(psb_@LX@spmat_type), intent(inout) :: loca
@ -417,7 +420,7 @@ subroutine psb_@LX@@LX@sp_allgather(globa, loca, desc_a, info, root, dupl,keepn
integer(psb_ipk_), intent(in), optional :: root, dupl
logical, intent(in), optional :: keepnum,keeploc
type(psb_@LX@_coo_sparse_mat) :: loc_coo, glob_coo
type(psb_@LX@_coo_sparse_mat), target :: loc_coo, glob_coo
integer(psb_lpk_) :: nrg, ncg, nzg
integer(psb_ipk_) :: err_act, dupl_
integer(psb_lpk_) :: ip,naggrm1,naggrp1, i, j, k, nzl
@ -507,27 +510,27 @@ subroutine psb_@LX@@LX@sp_allgather(globa, loca, desc_a, info, root, dupl,keepn
ndx = nzbr(me+1)
if (root_ == -1) then
call mpi_allgatherv(loc_coo%val,ndx,psb_mpi_ipk_,&
call mpi_allgatherv((loc_coo%val),ndx,psb_mpi_ipk_,&
& glob_coo%val,nzbr,idisp,&
& psb_mpi_ipk_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_allgatherv(loc_coo%ia,ndx,psb_mpi_lpk_,&
& mpi_allgatherv((loc_coo%ia),ndx,psb_mpi_lpk_,&
& glob_coo%ia,nzbr,idisp,&
& psb_mpi_lpk_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_allgatherv(loc_coo%ja,ndx,psb_mpi_lpk_,&
& mpi_allgatherv((loc_coo%ja),ndx,psb_mpi_lpk_,&
& glob_coo%ja,nzbr,idisp,&
& psb_mpi_lpk_,icomm,minfo)
else
call mpi_gatherv(loc_coo%val,ndx,psb_mpi_ipk_,&
call mpi_gatherv((loc_coo%val),ndx,psb_mpi_ipk_,&
& glob_coo%val,nzbr,idisp,&
& psb_mpi_ipk_,root_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_gatherv(loc_coo%ia,ndx,psb_mpi_lpk_,&
& mpi_gatherv((loc_coo%ia),ndx,psb_mpi_lpk_,&
& glob_coo%ia,nzbr,idisp,&
& psb_mpi_lpk_,root_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_gatherv(loc_coo%ja,ndx,psb_mpi_lpk_,&
& mpi_gatherv((loc_coo%ja),ndx,psb_mpi_lpk_,&
& glob_coo%ja,nzbr,idisp,&
& psb_mpi_lpk_,root_,icomm,minfo)
end if

@ -58,10 +58,11 @@ subroutine psb_lgather_vect(globx, locx, desc_a, info, iroot)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: np, me, root, iiroot, icomm, myrank, rootrank
integer(psb_mpk_) :: np, me, root, iiroot, icomm, myrank, rootrank, loc_rows
integer(psb_ipk_) :: ierr(5), err_act, jlx, ilx, lda_locx, lda_globx, i
integer(psb_lpk_) :: m, n, k, ilocx, jlocx, idx, iglobx, jglobx
integer(psb_lpk_), allocatable :: llocx(:)
integer(psb_mpk_), allocatable :: szs(:)
character(len=20) :: name, ch_err
name='psb_lgatherv'
@ -125,32 +126,34 @@ subroutine psb_lgather_vect(globx, locx, desc_a, info, iroot)
goto 9999
end if
call psb_realloc(m,globx,info)
if (info /= psb_success_) then
info=psb_err_alloc_dealloc_
call psb_errpush(info,name)
goto 9999
end if
globx(:) = lzero
llocx = locx%get_vect()
do i=1,desc_a%get_local_rows()
call psb_loc_to_glob(i,idx,desc_a,info)
globx(idx) = llocx(i)
end do
llocx = locx%get_vect()
! adjust overlapped elements
do i=1, size(desc_a%ovrlap_elem,1)
if (me /= desc_a%ovrlap_elem(i,3)) then
idx = desc_a%ovrlap_elem(i,1)
call psb_loc_to_glob(idx,desc_a,info)
globx(idx) = lzero
llocx(idx) = lzero
end if
end do
call psb_sum(ctxt,globx(1:m),root=root)
allocate(szs(np))
loc_rows = desc_a%get_local_rows()
call psb_gather(ctxt,loc_rows,szs,root=root)
if ((me == root).or.(root == -1)) then
if (sum(szs) /= m) then
info=psb_err_internal_error_
call psb_errpush(info,name)
goto 9999
end if
call psb_realloc(m,globx,info)
if (info /= psb_success_) then
info=psb_err_alloc_dealloc_
call psb_errpush(info,name)
goto 9999
end if
end if
call psb_gatherv(ctxt,llocx(1:loc_rows),globx,szs,root=root)
call psb_erractionrestore(err_act)
return
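The reworked gather above replaces the old zero-fill-and-global-sum scheme with a gather of per-rank sizes (`szs`) followed by a single `psb_gatherv`. The root-side bookkeeping amounts to an exclusive prefix sum of the counts and a placement of each rank's chunk at its displacement. A minimal plain-Python sketch of that logic (no MPI, hypothetical names, not PSBLAS code):

```python
# Sketch: assemble a global vector from per-rank chunks the way a
# gatherv with counts and displacements would. "local_chunks" stands
# in for the per-rank llocx(1:loc_rows) pieces, ordered by rank.
def gatherv_assemble(local_chunks):
    szs = [len(c) for c in local_chunks]   # what the size gather collects
    displ, d = [], 0
    for s in szs:                          # exclusive prefix sum of counts
        displ.append(d)
        d += s
    globx = [None] * d                     # root-side receive buffer
    for off, chunk in zip(displ, local_chunks):
        globx[off:off + len(chunk)] = chunk  # what gatherv writes
    return globx
```

This is also why the new code checks `sum(szs) /= m` on the root: the concatenated chunks must exactly fill the global vector.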

@ -33,8 +33,8 @@
!
! Gathers a sparse matrix onto a single process.
! Two variants:
! 1. Gathers to PSB_l_SPARSE_MAT (i.e. to matrix with IPK_ indices)
! 2. Gathers to PSB_@LX@_SPARSE_MAT (i.e. to matrix with LPK_ indices)
! 1. Gathers to PSB_l_SPARSE_MAT (i.e. to matrix with PSB_IPK_ indices)
! 2. Gathers to PSB_@LX@_SPARSE_MAT (i.e. to matrix with PSB_LPK_ indices)
!
! Note: this function uses MPI_ALLGATHERV. At this time, the size of the
! resulting matrix must be within the range of 4 bytes because of the
@ -48,11 +48,12 @@ subroutine psb_lsp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,keep
use psb_penv_mod
use psb_mat_mod
use psb_tools_mod
#ifdef MPI_MOD
use iso_c_binding
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
type(psb_lspmat_type), intent(inout) :: loca
@ -62,7 +63,7 @@ subroutine psb_lsp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,keep
integer(psb_ipk_), intent(in), optional :: root, dupl
logical, intent(in), optional :: keepnum,keeploc
type(psb_l_coo_sparse_mat) :: loc_coo, glob_coo
type(psb_l_coo_sparse_mat), target :: loc_coo, glob_coo
integer(psb_ipk_) :: nrg, ncg, nzg, nzl
integer(psb_ipk_) :: err_act, dupl_
integer(psb_ipk_) :: ip,naggrm1,naggrp1, i, j, k
@ -156,27 +157,27 @@ subroutine psb_lsp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,keep
enddo
ndx = nzbr(me+1)
if (root_ == -1) then
call mpi_allgatherv(loc_coo%val,ndx,psb_mpi_lpk_,&
call mpi_allgatherv((loc_coo%val),ndx,psb_mpi_lpk_,&
& glob_coo%val,nzbr,idisp,&
& psb_mpi_lpk_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_allgatherv(locia,ndx,psb_mpi_lpk_,&
& mpi_allgatherv((locia),ndx,psb_mpi_lpk_,&
& glbia,nzbr,idisp,&
& psb_mpi_lpk_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_allgatherv(locja,ndx,psb_mpi_lpk_,&
& mpi_allgatherv((locja),ndx,psb_mpi_lpk_,&
& glbja,nzbr,idisp,&
& psb_mpi_lpk_,icomm,minfo)
else
call mpi_gatherv(loc_coo%val,ndx,psb_mpi_lpk_,&
call mpi_gatherv((loc_coo%val),ndx,psb_mpi_lpk_,&
& glob_coo%val,nzbr,idisp,&
& psb_mpi_lpk_,root_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_gatherv(locia,ndx,psb_mpi_lpk_,&
& mpi_gatherv((locia),ndx,psb_mpi_lpk_,&
& glbia,nzbr,idisp,&
& psb_mpi_lpk_,root_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_gatherv(locja,ndx,psb_mpi_lpk_,&
& mpi_gatherv((locja),ndx,psb_mpi_lpk_,&
& glbja,nzbr,idisp,&
& psb_mpi_lpk_,root_,icomm,minfo)
@ -231,11 +232,12 @@ subroutine psb_@LX@sp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,k
use psb_penv_mod
use psb_mat_mod
use psb_tools_mod
#ifdef MPI_MOD
use iso_c_binding
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
type(psb_lspmat_type), intent(inout) :: loca
@ -245,7 +247,7 @@ subroutine psb_@LX@sp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,k
integer(psb_ipk_), intent(in), optional :: root, dupl
logical, intent(in), optional :: keepnum,keeploc
type(psb_@LX@_coo_sparse_mat) :: loc_coo, glob_coo
type(psb_@LX@_coo_sparse_mat), target :: loc_coo, glob_coo
integer(psb_lpk_) :: nrg, ncg, nzg
integer(psb_ipk_) :: err_act, dupl_
integer(psb_ipk_) :: ip,naggrm1,naggrp1, i, j, k, nzl
@ -337,27 +339,27 @@ subroutine psb_@LX@sp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,k
enddo
ndx = nzbr(me+1)
if (root_ == -1) then
call mpi_allgatherv(loc_coo%val,ndx,psb_mpi_lpk_,&
call mpi_allgatherv((loc_coo%val),ndx,psb_mpi_lpk_,&
& glob_coo%val,nzbr,idisp,&
& psb_mpi_lpk_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_allgatherv(loc_coo%ia,ndx,psb_mpi_lpk_,&
& mpi_allgatherv((loc_coo%ia),ndx,psb_mpi_lpk_,&
& glob_coo%ia,nzbr,idisp,&
& psb_mpi_lpk_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_allgatherv(loc_coo%ja,ndx,psb_mpi_lpk_,&
& mpi_allgatherv((loc_coo%ja),ndx,psb_mpi_lpk_,&
& glob_coo%ja,nzbr,idisp,&
& psb_mpi_lpk_,icomm,minfo)
else
call mpi_gatherv(loc_coo%val,ndx,psb_mpi_lpk_,&
call mpi_gatherv((loc_coo%val),ndx,psb_mpi_lpk_,&
& glob_coo%val,nzbr,idisp,&
& psb_mpi_lpk_,root_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_gatherv(loc_coo%ia,ndx,psb_mpi_lpk_,&
& mpi_gatherv((loc_coo%ia),ndx,psb_mpi_lpk_,&
& glob_coo%ia,nzbr,idisp,&
& psb_mpi_lpk_,root_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_gatherv(loc_coo%ja,ndx,psb_mpi_lpk_,&
& mpi_gatherv((loc_coo%ja),ndx,psb_mpi_lpk_,&
& glob_coo%ja,nzbr,idisp,&
& psb_mpi_lpk_,root_,icomm,minfo)
end if
@ -369,7 +371,7 @@ subroutine psb_@LX@sp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,k
call loc_coo%free()
!
! Is the code below safe? For very large cases
! the indices in glob_coo will overflow. But then,
! for very large cases it does not make sense to
! gather the matrix on a single process anyway...
!
@ -403,11 +405,12 @@ subroutine psb_@LX@@LX@sp_allgather(globa, loca, desc_a, info, root, dupl,keepn
use psb_penv_mod
use psb_mat_mod
use psb_tools_mod
#ifdef MPI_MOD
use iso_c_binding
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
type(psb_@LX@spmat_type), intent(inout) :: loca
@ -417,7 +420,7 @@ subroutine psb_@LX@@LX@sp_allgather(globa, loca, desc_a, info, root, dupl,keepn
integer(psb_ipk_), intent(in), optional :: root, dupl
logical, intent(in), optional :: keepnum,keeploc
type(psb_@LX@_coo_sparse_mat) :: loc_coo, glob_coo
type(psb_@LX@_coo_sparse_mat), target :: loc_coo, glob_coo
integer(psb_lpk_) :: nrg, ncg, nzg
integer(psb_ipk_) :: err_act, dupl_
integer(psb_lpk_) :: ip,naggrm1,naggrp1, i, j, k, nzl
@ -507,27 +510,27 @@ subroutine psb_@LX@@LX@sp_allgather(globa, loca, desc_a, info, root, dupl,keepn
ndx = nzbr(me+1)
if (root_ == -1) then
call mpi_allgatherv(loc_coo%val,ndx,psb_mpi_lpk_,&
call mpi_allgatherv((loc_coo%val),ndx,psb_mpi_lpk_,&
& glob_coo%val,nzbr,idisp,&
& psb_mpi_lpk_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_allgatherv(loc_coo%ia,ndx,psb_mpi_lpk_,&
& mpi_allgatherv((loc_coo%ia),ndx,psb_mpi_lpk_,&
& glob_coo%ia,nzbr,idisp,&
& psb_mpi_lpk_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_allgatherv(loc_coo%ja,ndx,psb_mpi_lpk_,&
& mpi_allgatherv((loc_coo%ja),ndx,psb_mpi_lpk_,&
& glob_coo%ja,nzbr,idisp,&
& psb_mpi_lpk_,icomm,minfo)
else
call mpi_gatherv(loc_coo%val,ndx,psb_mpi_lpk_,&
call mpi_gatherv((loc_coo%val),ndx,psb_mpi_lpk_,&
& glob_coo%val,nzbr,idisp,&
& psb_mpi_lpk_,root_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_gatherv(loc_coo%ia,ndx,psb_mpi_lpk_,&
& mpi_gatherv((loc_coo%ia),ndx,psb_mpi_lpk_,&
& glob_coo%ia,nzbr,idisp,&
& psb_mpi_lpk_,root_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_gatherv(loc_coo%ja,ndx,psb_mpi_lpk_,&
& mpi_gatherv((loc_coo%ja),ndx,psb_mpi_lpk_,&
& glob_coo%ja,nzbr,idisp,&
& psb_mpi_lpk_,root_,icomm,minfo)
end if
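As the note accompanying these routines says, the gathered matrix size must stay within 4-byte range because the MPI counts are default integers. A guard for that constraint can be sketched as follows (plain Python, hypothetical names, not PSBLAS code):

```python
# Sketch: check that the total nonzero count of the gathered matrix
# fits in a 32-bit MPI count before attempting the allgatherv.
INT32_MAX = 2**31 - 1

def check_gather_size(local_nnz_per_rank):
    total = sum(local_nnz_per_rank)        # global nonzeros after gather
    if total > INT32_MAX:
        raise OverflowError(
            "global matrix has %d nonzeros; exceeds 4-byte count range"
            % total)
    return total
```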

@ -66,8 +66,8 @@ subroutine psb_mhalom(x,desc_a,info,jx,ik,work,tran,mode,data)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: np, me
integer(psb_ipk_) :: err_act, iix, jjx, k, maxk, nrow, imode, i,&
integer(psb_mpk_) :: np, me, k
integer(psb_ipk_) :: err_act, iix, jjx, maxk, nrow, imode, i,&
& liwork,data_, ldx
integer(psb_lpk_) :: m, n, ix, ijx
integer(psb_mpk_),pointer :: iwork(:), xp(:,:)

@ -77,8 +77,8 @@ subroutine psb_movrlm(x,desc_a,info,jx,ik,work,update,mode)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: np, me
integer(psb_ipk_) :: err_act, iix, jjx, nrow, ncol, k, maxk, update_,&
integer(psb_mpk_) :: np, me, k
integer(psb_ipk_) :: err_act, iix, jjx, nrow, ncol, maxk, update_,&
& mode_, liwork, ldx
integer(psb_lpk_) :: m, n, ix, ijx
integer(psb_mpk_),pointer :: iwork(:), xp(:,:)

@ -46,11 +46,11 @@
subroutine psb_mscatterm(globx, locx, desc_a, info, root)
use psb_base_mod, psb_protect_name => psb_mscatterm
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -63,7 +63,8 @@ subroutine psb_mscatterm(globx, locx, desc_a, info, root)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: np, me, iroot, icomm, myrank, rootrank, iam, nlr
integer(psb_mpk_) :: np, me, iroot, icomm, myrank, rootrank, iam,&
& nlr, minfo
integer(psb_ipk_) :: ierr(5), err_act, nrow,&
& ilocx, jlocx, lda_locx, lda_globx, lock, globk, k, maxk, &
& col,pos
@ -162,13 +163,13 @@ subroutine psb_mscatterm(globx, locx, desc_a, info, root)
rootrank = psb_get_mpi_rank(ctxt,iroot)
!
! This is potentially unsafe when IPK=8
! But then, IPK=8 is highly experimental anyway.
! This is potentially unsafe when PSB_IPK=8
! But then, PSB_IPK=8 is highly experimental anyway.
!
nlr = nrow
call mpi_gather(nlr,1,psb_mpi_mpk_,all_dim,&
& 1,psb_mpi_mpk_,rootrank,icomm,info)
& 1,psb_mpi_mpk_,rootrank,icomm,minfo)
info = minfo
if (iam == iroot) then
displ(1)=0
do i=2,np
@ -195,8 +196,8 @@ subroutine psb_mscatterm(globx, locx, desc_a, info, root)
call mpi_gatherv(ltg,nlr,&
& psb_mpi_lpk_,l_t_g_all,all_dim,&
& displ,psb_mpi_lpk_,rootrank,icomm,info)
& displ,psb_mpi_lpk_,rootrank,icomm,minfo)
info = minfo
do col=1, k
! prepare vector to scatter
if(iam == iroot) then
@ -211,9 +212,9 @@ subroutine psb_mscatterm(globx, locx, desc_a, info, root)
! scatter
call mpi_scatterv(scatterv,all_dim,displ,&
& psb_mpi_mpk_,locx(1,col),nrow,&
& psb_mpi_mpk_,rootrank,icomm,info)
& psb_mpi_mpk_,locx(1,col),nlr,&
& psb_mpi_mpk_,rootrank,icomm,minfo)
info = minfo
end do
deallocate(l_t_g_all, scatterv,stat=info)
@ -291,11 +292,11 @@ end subroutine psb_mscatterm
!
subroutine psb_mscatterv(globx, locx, desc_a, info, root)
use psb_base_mod, psb_protect_name => psb_mscatterv
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -308,7 +309,7 @@ subroutine psb_mscatterv(globx, locx, desc_a, info, root)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: np, iam, iroot, iiroot, icomm, myrank, rootrank, nlr
integer(psb_mpk_) :: np, iam, iroot, iiroot, icomm, myrank, rootrank, nlr, minfo
integer(psb_ipk_) :: ierr(5), err_act, nrow,&
& ilocx, jlocx, lda_locx, lda_globx, k, pos, ilx, jlx
integer(psb_lpk_) :: m, n, i, j, idx, iglobx, jglobx
@ -398,13 +399,13 @@ subroutine psb_mscatterv(globx, locx, desc_a, info, root)
else
rootrank = psb_get_mpi_rank(ctxt,iroot)
!
! This is potentially unsafe when IPK=8
! But then, IPK=8 is highly experimental anyway.
! This is potentially unsafe when PSB_IPK=8
! But then, PSB_IPK=8 is highly experimental anyway.
!
nlr = nrow
call mpi_gather(nlr,1,psb_mpi_mpk_,all_dim,&
& 1,psb_mpi_mpk_,rootrank,icomm,info)
& 1,psb_mpi_mpk_,rootrank,icomm,minfo)
info = minfo
if(iam == iroot) then
displ(1)=0
do i=2,np
@ -436,8 +437,8 @@ subroutine psb_mscatterv(globx, locx, desc_a, info, root)
call mpi_gatherv(ltg,nlr,&
& psb_mpi_lpk_,l_t_g_all,all_dim,&
& displ,psb_mpi_lpk_,rootrank,icomm,info)
& displ,psb_mpi_lpk_,rootrank,icomm,minfo)
info = minfo
! prepare vector to scatter
if (iam == iroot) then
do i=1,np
@ -451,9 +452,9 @@ subroutine psb_mscatterv(globx, locx, desc_a, info, root)
end if
call mpi_scatterv(scatterv,all_dim,displ,&
& psb_mpi_mpk_,locx,nrow,&
& psb_mpi_mpk_,rootrank,icomm,info)
& psb_mpi_mpk_,locx,nlr,&
& psb_mpi_mpk_,rootrank,icomm,minfo)
info = minfo
deallocate(l_t_g_all, scatterv,stat=info)
if(info /= psb_success_) then
info=psb_err_from_subroutine_
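The scatter fix in the hunk above makes each rank pass its own receive count (`nlr`) rather than `nrow`: with `mpi_scatterv`, the count a rank receives must match the count the root sends it from its `all_dim`/`displ` plan. The root-side partitioning can be sketched like this (plain Python, hypothetical names, no MPI):

```python
# Sketch: the root-side scatterv plan. Each rank i receives
# globx[displ[i] : displ[i] + counts[i]], so its local receive count
# must equal counts[i] for the two sides to agree.
def scatterv_plan(globx, counts):
    displ, d = [], 0
    for c in counts:                       # exclusive prefix sum
        displ.append(d)
        d += c
    return [globx[off:off + c] for off, c in zip(displ, counts)]
```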

@ -58,10 +58,11 @@ subroutine psb_sgather_vect(globx, locx, desc_a, info, iroot)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: np, me, root, iiroot, icomm, myrank, rootrank
integer(psb_mpk_) :: np, me, root, iiroot, icomm, myrank, rootrank, loc_rows
integer(psb_ipk_) :: ierr(5), err_act, jlx, ilx, lda_locx, lda_globx, i
integer(psb_lpk_) :: m, n, k, ilocx, jlocx, idx, iglobx, jglobx
real(psb_spk_), allocatable :: llocx(:)
integer(psb_mpk_), allocatable :: szs(:)
character(len=20) :: name, ch_err
name='psb_sgatherv'
@ -125,32 +126,34 @@ subroutine psb_sgather_vect(globx, locx, desc_a, info, iroot)
goto 9999
end if
call psb_realloc(m,globx,info)
if (info /= psb_success_) then
info=psb_err_alloc_dealloc_
call psb_errpush(info,name)
goto 9999
end if
globx(:) = szero
llocx = locx%get_vect()
do i=1,desc_a%get_local_rows()
call psb_loc_to_glob(i,idx,desc_a,info)
globx(idx) = llocx(i)
end do
llocx = locx%get_vect()
! adjust overlapped elements
do i=1, size(desc_a%ovrlap_elem,1)
if (me /= desc_a%ovrlap_elem(i,3)) then
idx = desc_a%ovrlap_elem(i,1)
call psb_loc_to_glob(idx,desc_a,info)
globx(idx) = szero
llocx(idx) = szero
end if
end do
call psb_sum(ctxt,globx(1:m),root=root)
allocate(szs(np))
loc_rows = desc_a%get_local_rows()
call psb_gather(ctxt,loc_rows,szs,root=root)
if ((me == root).or.(root == -1)) then
if (sum(szs) /= m) then
info=psb_err_internal_error_
call psb_errpush(info,name)
goto 9999
end if
call psb_realloc(m,globx,info)
if (info /= psb_success_) then
info=psb_err_alloc_dealloc_
call psb_errpush(info,name)
goto 9999
end if
end if
call psb_gatherv(ctxt,llocx(1:loc_rows),globx,szs,root=root)
call psb_erractionrestore(err_act)
return

@ -66,8 +66,8 @@ subroutine psb_shalom(x,desc_a,info,jx,ik,work,tran,mode,data)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: np, me
integer(psb_ipk_) :: err_act, iix, jjx, k, maxk, nrow, imode, i,&
integer(psb_mpk_) :: np, me, k
integer(psb_ipk_) :: err_act, iix, jjx, maxk, nrow, imode, i,&
& liwork,data_, ldx
integer(psb_lpk_) :: m, n, ix, ijx
real(psb_spk_),pointer :: iwork(:), xp(:,:)

@ -77,8 +77,8 @@ subroutine psb_sovrlm(x,desc_a,info,jx,ik,work,update,mode)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: np, me
integer(psb_ipk_) :: err_act, iix, jjx, nrow, ncol, k, maxk, update_,&
integer(psb_mpk_) :: np, me, k
integer(psb_ipk_) :: err_act, iix, jjx, nrow, ncol, maxk, update_,&
& mode_, liwork, ldx
integer(psb_lpk_) :: m, n, ix, ijx
real(psb_spk_),pointer :: iwork(:), xp(:,:)

@ -46,11 +46,11 @@
subroutine psb_sscatterm(globx, locx, desc_a, info, root)
use psb_base_mod, psb_protect_name => psb_sscatterm
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -63,7 +63,8 @@ subroutine psb_sscatterm(globx, locx, desc_a, info, root)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: np, me, iroot, icomm, myrank, rootrank, iam, nlr
integer(psb_mpk_) :: np, me, iroot, icomm, myrank, rootrank, iam,&
& nlr, minfo
integer(psb_ipk_) :: ierr(5), err_act, nrow,&
& ilocx, jlocx, lda_locx, lda_globx, lock, globk, k, maxk, &
& col,pos
@ -162,13 +163,13 @@ subroutine psb_sscatterm(globx, locx, desc_a, info, root)
rootrank = psb_get_mpi_rank(ctxt,iroot)
!
! This is potentially unsafe when IPK=8
! But then, IPK=8 is highly experimental anyway.
! This is potentially unsafe when PSB_IPK=8
! But then, PSB_IPK=8 is highly experimental anyway.
!
nlr = nrow
call mpi_gather(nlr,1,psb_mpi_mpk_,all_dim,&
& 1,psb_mpi_mpk_,rootrank,icomm,info)
& 1,psb_mpi_mpk_,rootrank,icomm,minfo)
info = minfo
if (iam == iroot) then
displ(1)=0
do i=2,np
@ -195,8 +196,8 @@ subroutine psb_sscatterm(globx, locx, desc_a, info, root)
call mpi_gatherv(ltg,nlr,&
& psb_mpi_lpk_,l_t_g_all,all_dim,&
& displ,psb_mpi_lpk_,rootrank,icomm,info)
& displ,psb_mpi_lpk_,rootrank,icomm,minfo)
info = minfo
do col=1, k
! prepare vector to scatter
if(iam == iroot) then
@ -211,9 +212,9 @@ subroutine psb_sscatterm(globx, locx, desc_a, info, root)
! scatter
call mpi_scatterv(scatterv,all_dim,displ,&
& psb_mpi_r_spk_,locx(1,col),nrow,&
& psb_mpi_r_spk_,rootrank,icomm,info)
& psb_mpi_r_spk_,locx(1,col),nlr,&
& psb_mpi_r_spk_,rootrank,icomm,minfo)
info = minfo
end do
deallocate(l_t_g_all, scatterv,stat=info)
@ -291,11 +292,11 @@ end subroutine psb_sscatterm
!
subroutine psb_sscatterv(globx, locx, desc_a, info, root)
use psb_base_mod, psb_protect_name => psb_sscatterv
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -308,7 +309,7 @@ subroutine psb_sscatterv(globx, locx, desc_a, info, root)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: np, iam, iroot, iiroot, icomm, myrank, rootrank, nlr
integer(psb_mpk_) :: np, iam, iroot, iiroot, icomm, myrank, rootrank, nlr, minfo
integer(psb_ipk_) :: ierr(5), err_act, nrow,&
& ilocx, jlocx, lda_locx, lda_globx, k, pos, ilx, jlx
integer(psb_lpk_) :: m, n, i, j, idx, iglobx, jglobx
@ -398,13 +399,13 @@ subroutine psb_sscatterv(globx, locx, desc_a, info, root)
else
rootrank = psb_get_mpi_rank(ctxt,iroot)
!
! This is potentially unsafe when IPK=8
! But then, IPK=8 is highly experimental anyway.
! This is potentially unsafe when PSB_IPK=8
! But then, PSB_IPK=8 is highly experimental anyway.
!
nlr = nrow
call mpi_gather(nlr,1,psb_mpi_mpk_,all_dim,&
& 1,psb_mpi_mpk_,rootrank,icomm,info)
& 1,psb_mpi_mpk_,rootrank,icomm,minfo)
info = minfo
if(iam == iroot) then
displ(1)=0
do i=2,np
@ -436,8 +437,8 @@ subroutine psb_sscatterv(globx, locx, desc_a, info, root)
call mpi_gatherv(ltg,nlr,&
& psb_mpi_lpk_,l_t_g_all,all_dim,&
& displ,psb_mpi_lpk_,rootrank,icomm,info)
& displ,psb_mpi_lpk_,rootrank,icomm,minfo)
info = minfo
! prepare vector to scatter
if (iam == iroot) then
do i=1,np
@ -451,9 +452,9 @@ subroutine psb_sscatterv(globx, locx, desc_a, info, root)
end if
call mpi_scatterv(scatterv,all_dim,displ,&
& psb_mpi_r_spk_,locx,nrow,&
& psb_mpi_r_spk_,rootrank,icomm,info)
& psb_mpi_r_spk_,locx,nlr,&
& psb_mpi_r_spk_,rootrank,icomm,minfo)
info = minfo
deallocate(l_t_g_all, scatterv,stat=info)
if(info /= psb_success_) then
info=psb_err_from_subroutine_

@ -33,8 +33,8 @@
!
! Gathers a sparse matrix onto a single process.
! Two variants:
! 1. Gathers to PSB_s_SPARSE_MAT (i.e. to matrix with IPK_ indices)
! 2. Gathers to PSB_ls_SPARSE_MAT (i.e. to matrix with LPK_ indices)
! 1. Gathers to PSB_s_SPARSE_MAT (i.e. to matrix with PSB_IPK_ indices)
! 2. Gathers to PSB_ls_SPARSE_MAT (i.e. to matrix with PSB_LPK_ indices)
!
! Note: this function uses MPI_ALLGATHERV. At this time, the size of the
! resulting matrix must be within the range of 4 bytes because of the
@ -48,11 +48,12 @@ subroutine psb_ssp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,keep
use psb_penv_mod
use psb_mat_mod
use psb_tools_mod
#ifdef MPI_MOD
use iso_c_binding
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
type(psb_sspmat_type), intent(inout) :: loca
@ -62,7 +63,7 @@ subroutine psb_ssp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,keep
integer(psb_ipk_), intent(in), optional :: root, dupl
logical, intent(in), optional :: keepnum,keeploc
type(psb_s_coo_sparse_mat) :: loc_coo, glob_coo
type(psb_s_coo_sparse_mat), target :: loc_coo, glob_coo
integer(psb_ipk_) :: nrg, ncg, nzg, nzl
integer(psb_ipk_) :: err_act, dupl_
integer(psb_ipk_) :: ip,naggrm1,naggrp1, i, j, k
@ -156,27 +157,27 @@ subroutine psb_ssp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,keep
enddo
ndx = nzbr(me+1)
if (root_ == -1) then
call mpi_allgatherv(loc_coo%val,ndx,psb_mpi_r_spk_,&
call mpi_allgatherv((loc_coo%val),ndx,psb_mpi_r_spk_,&
& glob_coo%val,nzbr,idisp,&
& psb_mpi_r_spk_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_allgatherv(locia,ndx,psb_mpi_lpk_,&
& mpi_allgatherv((locia),ndx,psb_mpi_lpk_,&
& glbia,nzbr,idisp,&
& psb_mpi_lpk_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_allgatherv(locja,ndx,psb_mpi_lpk_,&
& mpi_allgatherv((locja),ndx,psb_mpi_lpk_,&
& glbja,nzbr,idisp,&
& psb_mpi_lpk_,icomm,minfo)
else
call mpi_gatherv(loc_coo%val,ndx,psb_mpi_r_spk_,&
call mpi_gatherv((loc_coo%val),ndx,psb_mpi_r_spk_,&
& glob_coo%val,nzbr,idisp,&
& psb_mpi_r_spk_,root_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_gatherv(locia,ndx,psb_mpi_lpk_,&
& mpi_gatherv((locia),ndx,psb_mpi_lpk_,&
& glbia,nzbr,idisp,&
& psb_mpi_lpk_,root_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_gatherv(locja,ndx,psb_mpi_lpk_,&
& mpi_gatherv((locja),ndx,psb_mpi_lpk_,&
& glbja,nzbr,idisp,&
& psb_mpi_lpk_,root_,icomm,minfo)
@ -231,11 +232,12 @@ subroutine psb_lssp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,kee
use psb_penv_mod
use psb_mat_mod
use psb_tools_mod
#ifdef MPI_MOD
use iso_c_binding
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
type(psb_sspmat_type), intent(inout) :: loca
@ -245,7 +247,7 @@ subroutine psb_lssp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,kee
integer(psb_ipk_), intent(in), optional :: root, dupl
logical, intent(in), optional :: keepnum,keeploc
type(psb_ls_coo_sparse_mat) :: loc_coo, glob_coo
type(psb_ls_coo_sparse_mat), target :: loc_coo, glob_coo
integer(psb_lpk_) :: nrg, ncg, nzg
integer(psb_ipk_) :: err_act, dupl_
integer(psb_ipk_) :: ip,naggrm1,naggrp1, i, j, k, nzl
@ -337,27 +339,27 @@ subroutine psb_lssp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,kee
enddo
ndx = nzbr(me+1)
if (root_ == -1) then
call mpi_allgatherv(loc_coo%val,ndx,psb_mpi_r_spk_,&
call mpi_allgatherv((loc_coo%val),ndx,psb_mpi_r_spk_,&
& glob_coo%val,nzbr,idisp,&
& psb_mpi_r_spk_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_allgatherv(loc_coo%ia,ndx,psb_mpi_lpk_,&
& mpi_allgatherv((loc_coo%ia),ndx,psb_mpi_lpk_,&
& glob_coo%ia,nzbr,idisp,&
& psb_mpi_lpk_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_allgatherv(loc_coo%ja,ndx,psb_mpi_lpk_,&
& mpi_allgatherv((loc_coo%ja),ndx,psb_mpi_lpk_,&
& glob_coo%ja,nzbr,idisp,&
& psb_mpi_lpk_,icomm,minfo)
else
call mpi_gatherv(loc_coo%val,ndx,psb_mpi_r_spk_,&
call mpi_gatherv((loc_coo%val),ndx,psb_mpi_r_spk_,&
& glob_coo%val,nzbr,idisp,&
& psb_mpi_r_spk_,root_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_gatherv(loc_coo%ia,ndx,psb_mpi_lpk_,&
& mpi_gatherv((loc_coo%ia),ndx,psb_mpi_lpk_,&
& glob_coo%ia,nzbr,idisp,&
& psb_mpi_lpk_,root_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_gatherv(loc_coo%ja,ndx,psb_mpi_lpk_,&
& mpi_gatherv((loc_coo%ja),ndx,psb_mpi_lpk_,&
& glob_coo%ja,nzbr,idisp,&
& psb_mpi_lpk_,root_,icomm,minfo)
end if
@ -369,7 +371,7 @@ subroutine psb_lssp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,kee
call loc_coo%free()
!
! Is the code below safe? For very large cases
! the indices in glob_coo will overflow. But then,
! for very large cases it does not make sense to
! gather the matrix on a single process anyway...
!
@ -403,11 +405,12 @@ subroutine psb_lslssp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,k
use psb_penv_mod
use psb_mat_mod
use psb_tools_mod
#ifdef MPI_MOD
use iso_c_binding
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
type(psb_lsspmat_type), intent(inout) :: loca
@ -417,7 +420,7 @@ subroutine psb_lslssp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,k
integer(psb_ipk_), intent(in), optional :: root, dupl
logical, intent(in), optional :: keepnum,keeploc
type(psb_ls_coo_sparse_mat) :: loc_coo, glob_coo
type(psb_ls_coo_sparse_mat), target :: loc_coo, glob_coo
integer(psb_lpk_) :: nrg, ncg, nzg
integer(psb_ipk_) :: err_act, dupl_
integer(psb_lpk_) :: ip,naggrm1,naggrp1, i, j, k, nzl
@ -507,27 +510,27 @@ subroutine psb_lslssp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,k
ndx = nzbr(me+1)
if (root_ == -1) then
call mpi_allgatherv(loc_coo%val,ndx,psb_mpi_r_spk_,&
call mpi_allgatherv((loc_coo%val),ndx,psb_mpi_r_spk_,&
& glob_coo%val,nzbr,idisp,&
& psb_mpi_r_spk_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_allgatherv(loc_coo%ia,ndx,psb_mpi_lpk_,&
& mpi_allgatherv((loc_coo%ia),ndx,psb_mpi_lpk_,&
& glob_coo%ia,nzbr,idisp,&
& psb_mpi_lpk_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_allgatherv(loc_coo%ja,ndx,psb_mpi_lpk_,&
& mpi_allgatherv((loc_coo%ja),ndx,psb_mpi_lpk_,&
& glob_coo%ja,nzbr,idisp,&
& psb_mpi_lpk_,icomm,minfo)
else
call mpi_gatherv(loc_coo%val,ndx,psb_mpi_r_spk_,&
call mpi_gatherv((loc_coo%val),ndx,psb_mpi_r_spk_,&
& glob_coo%val,nzbr,idisp,&
& psb_mpi_r_spk_,root_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_gatherv(loc_coo%ia,ndx,psb_mpi_lpk_,&
& mpi_gatherv((loc_coo%ia),ndx,psb_mpi_lpk_,&
& glob_coo%ia,nzbr,idisp,&
& psb_mpi_lpk_,root_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_gatherv(loc_coo%ja,ndx,psb_mpi_lpk_,&
& mpi_gatherv((loc_coo%ja),ndx,psb_mpi_lpk_,&
& glob_coo%ja,nzbr,idisp,&
& psb_mpi_lpk_,root_,icomm,minfo)
end if
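The three allgatherv calls above move the `val`, `ia`, and `ja` arrays of each rank's local COO block into the corresponding global arrays, using `nzbr` as per-rank counts and `idisp` as displacements; the net effect is a concatenation of per-rank COO triplets in global numbering. A plain-Python sketch of that effect (hypothetical names, not PSBLAS code):

```python
# Sketch: concatenate per-rank COO pieces (ia, ja, val) into one
# global triplet, as the counts/displacements of the allgatherv imply.
def allgather_coo(pieces):
    ia, ja, val = [], [], []
    for pia, pja, pval in pieces:
        # each rank contributes equally long index and value arrays
        assert len(pia) == len(pja) == len(pval)
        ia += pia
        ja += pja
        val += pval
    return ia, ja, val
```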

@ -58,10 +58,11 @@ subroutine psb_zgather_vect(globx, locx, desc_a, info, iroot)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: np, me, root, iiroot, icomm, myrank, rootrank
integer(psb_mpk_) :: np, me, root, iiroot, icomm, myrank, rootrank, loc_rows
integer(psb_ipk_) :: ierr(5), err_act, jlx, ilx, lda_locx, lda_globx, i
integer(psb_lpk_) :: m, n, k, ilocx, jlocx, idx, iglobx, jglobx
complex(psb_dpk_), allocatable :: llocx(:)
integer(psb_mpk_), allocatable :: szs(:)
character(len=20) :: name, ch_err
name='psb_zgatherv'
@ -125,32 +126,34 @@ subroutine psb_zgather_vect(globx, locx, desc_a, info, iroot)
goto 9999
end if
call psb_realloc(m,globx,info)
if (info /= psb_success_) then
info=psb_err_alloc_dealloc_
call psb_errpush(info,name)
goto 9999
end if
globx(:) = zzero
llocx = locx%get_vect()
do i=1,desc_a%get_local_rows()
call psb_loc_to_glob(i,idx,desc_a,info)
globx(idx) = llocx(i)
end do
llocx = locx%get_vect()
! adjust overlapped elements
do i=1, size(desc_a%ovrlap_elem,1)
if (me /= desc_a%ovrlap_elem(i,3)) then
idx = desc_a%ovrlap_elem(i,1)
call psb_loc_to_glob(idx,desc_a,info)
globx(idx) = zzero
llocx(idx) = zzero
end if
end do
call psb_sum(ctxt,globx(1:m),root=root)
allocate(szs(np))
loc_rows = desc_a%get_local_rows()
call psb_gather(ctxt,loc_rows,szs,root=root)
if ((me == root).or.(root == -1)) then
if (sum(szs) /= m) then
info=psb_err_internal_error_
call psb_errpush(info,name)
goto 9999
end if
call psb_realloc(m,globx,info)
if (info /= psb_success_) then
info=psb_err_alloc_dealloc_
call psb_errpush(info,name)
goto 9999
end if
end if
call psb_gatherv(ctxt,llocx(1:loc_rows),globx,szs,root=root)
call psb_erractionrestore(err_act)
return

@ -66,8 +66,8 @@ subroutine psb_zhalom(x,desc_a,info,jx,ik,work,tran,mode,data)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: np, me
integer(psb_ipk_) :: err_act, iix, jjx, k, maxk, nrow, imode, i,&
integer(psb_mpk_) :: np, me, k
integer(psb_ipk_) :: err_act, iix, jjx, maxk, nrow, imode, i,&
& liwork,data_, ldx
integer(psb_lpk_) :: m, n, ix, ijx
complex(psb_dpk_),pointer :: iwork(:), xp(:,:)

@ -77,8 +77,8 @@ subroutine psb_zovrlm(x,desc_a,info,jx,ik,work,update,mode)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: np, me
integer(psb_ipk_) :: err_act, iix, jjx, nrow, ncol, k, maxk, update_,&
integer(psb_mpk_) :: np, me, k
integer(psb_ipk_) :: err_act, iix, jjx, nrow, ncol, maxk, update_,&
& mode_, liwork, ldx
integer(psb_lpk_) :: m, n, ix, ijx
complex(psb_dpk_),pointer :: iwork(:), xp(:,:)

@ -46,11 +46,11 @@
subroutine psb_zscatterm(globx, locx, desc_a, info, root)
use psb_base_mod, psb_protect_name => psb_zscatterm
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -63,7 +63,8 @@ subroutine psb_zscatterm(globx, locx, desc_a, info, root)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: np, me, iroot, icomm, myrank, rootrank, iam, nlr
integer(psb_mpk_) :: np, me, iroot, icomm, myrank, rootrank, iam,&
& nlr, minfo
integer(psb_ipk_) :: ierr(5), err_act, nrow,&
& ilocx, jlocx, lda_locx, lda_globx, lock, globk, k, maxk, &
& col,pos
@ -162,13 +163,13 @@ subroutine psb_zscatterm(globx, locx, desc_a, info, root)
rootrank = psb_get_mpi_rank(ctxt,iroot)
!
! This is potentially unsafe when IPK=8
! But then, IPK=8 is highly experimental anyway.
! This is potentially unsafe when PSB_IPK=8
! But then, PSB_IPK=8 is highly experimental anyway.
!
nlr = nrow
call mpi_gather(nlr,1,psb_mpi_mpk_,all_dim,&
& 1,psb_mpi_mpk_,rootrank,icomm,info)
& 1,psb_mpi_mpk_,rootrank,icomm,minfo)
info = minfo
if (iam == iroot) then
displ(1)=0
do i=2,np
@ -195,8 +196,8 @@ subroutine psb_zscatterm(globx, locx, desc_a, info, root)
call mpi_gatherv(ltg,nlr,&
& psb_mpi_lpk_,l_t_g_all,all_dim,&
& displ,psb_mpi_lpk_,rootrank,icomm,info)
& displ,psb_mpi_lpk_,rootrank,icomm,minfo)
info = minfo
do col=1, k
! prepare vector to scatter
if(iam == iroot) then
@ -211,9 +212,9 @@ subroutine psb_zscatterm(globx, locx, desc_a, info, root)
! scatter
call mpi_scatterv(scatterv,all_dim,displ,&
& psb_mpi_c_dpk_,locx(1,col),nrow,&
& psb_mpi_c_dpk_,rootrank,icomm,info)
& psb_mpi_c_dpk_,locx(1,col),nlr,&
& psb_mpi_c_dpk_,rootrank,icomm,minfo)
info = minfo
end do
deallocate(l_t_g_all, scatterv,stat=info)
@ -291,11 +292,11 @@ end subroutine psb_zscatterm
!
subroutine psb_zscatterv(globx, locx, desc_a, info, root)
use psb_base_mod, psb_protect_name => psb_zscatterv
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -308,7 +309,7 @@ subroutine psb_zscatterv(globx, locx, desc_a, info, root)
! locals
type(psb_ctxt_type) :: ctxt
integer(psb_mpk_) :: np, iam, iroot, iiroot, icomm, myrank, rootrank, nlr
integer(psb_mpk_) :: np, iam, iroot, iiroot, icomm, myrank, rootrank, nlr, minfo
integer(psb_ipk_) :: ierr(5), err_act, nrow,&
& ilocx, jlocx, lda_locx, lda_globx, k, pos, ilx, jlx
integer(psb_lpk_) :: m, n, i, j, idx, iglobx, jglobx
@ -398,13 +399,13 @@ subroutine psb_zscatterv(globx, locx, desc_a, info, root)
else
rootrank = psb_get_mpi_rank(ctxt,iroot)
!
! This is potentially unsafe when IPK=8
! But then, IPK=8 is highly experimental anyway.
! This is potentially unsafe when PSB_IPK=8
! But then, PSB_IPK=8 is highly experimental anyway.
!
nlr = nrow
call mpi_gather(nlr,1,psb_mpi_mpk_,all_dim,&
& 1,psb_mpi_mpk_,rootrank,icomm,info)
& 1,psb_mpi_mpk_,rootrank,icomm,minfo)
info = minfo
if(iam == iroot) then
displ(1)=0
do i=2,np
@ -436,8 +437,8 @@ subroutine psb_zscatterv(globx, locx, desc_a, info, root)
call mpi_gatherv(ltg,nlr,&
& psb_mpi_lpk_,l_t_g_all,all_dim,&
& displ,psb_mpi_lpk_,rootrank,icomm,info)
& displ,psb_mpi_lpk_,rootrank,icomm,minfo)
info = minfo
! prepare vector to scatter
if (iam == iroot) then
do i=1,np
@ -451,9 +452,9 @@ subroutine psb_zscatterv(globx, locx, desc_a, info, root)
end if
call mpi_scatterv(scatterv,all_dim,displ,&
& psb_mpi_c_dpk_,locx,nrow,&
& psb_mpi_c_dpk_,rootrank,icomm,info)
& psb_mpi_c_dpk_,locx,nlr,&
& psb_mpi_c_dpk_,rootrank,icomm,minfo)
info = minfo
deallocate(l_t_g_all, scatterv,stat=info)
if(info /= psb_success_) then
info=psb_err_from_subroutine_

@ -33,8 +33,8 @@
!
! Gathers a sparse matrix onto a single process.
! Two variants:
! 1. Gathers to PSB_z_SPARSE_MAT (i.e. to matrix with IPK_ indices)
! 2. Gathers to PSB_lz_SPARSE_MAT (i.e. to matrix with LPK_ indices)
! 1. Gathers to PSB_z_SPARSE_MAT (i.e. to matrix with PSB_IPK_ indices)
! 2. Gathers to PSB_lz_SPARSE_MAT (i.e. to matrix with PSB_LPK_ indices)
!
! Note: this function uses MPI_ALLGATHERV. At this time, the size of the
! resulting matrix must be within the range of 4 bytes because of the
@ -48,11 +48,12 @@ subroutine psb_zsp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,keep
use psb_penv_mod
use psb_mat_mod
use psb_tools_mod
#ifdef MPI_MOD
use iso_c_binding
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
type(psb_zspmat_type), intent(inout) :: loca
@ -62,7 +63,7 @@ subroutine psb_zsp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,keep
integer(psb_ipk_), intent(in), optional :: root, dupl
logical, intent(in), optional :: keepnum,keeploc
type(psb_z_coo_sparse_mat) :: loc_coo, glob_coo
type(psb_z_coo_sparse_mat), target :: loc_coo, glob_coo
integer(psb_ipk_) :: nrg, ncg, nzg, nzl
integer(psb_ipk_) :: err_act, dupl_
integer(psb_ipk_) :: ip,naggrm1,naggrp1, i, j, k
@ -156,27 +157,27 @@ subroutine psb_zsp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,keep
enddo
ndx = nzbr(me+1)
if (root_ == -1) then
call mpi_allgatherv(loc_coo%val,ndx,psb_mpi_c_dpk_,&
call mpi_allgatherv((loc_coo%val),ndx,psb_mpi_c_dpk_,&
& glob_coo%val,nzbr,idisp,&
& psb_mpi_c_dpk_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_allgatherv(locia,ndx,psb_mpi_lpk_,&
& mpi_allgatherv((locia),ndx,psb_mpi_lpk_,&
& glbia,nzbr,idisp,&
& psb_mpi_lpk_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_allgatherv(locja,ndx,psb_mpi_lpk_,&
& mpi_allgatherv((locja),ndx,psb_mpi_lpk_,&
& glbja,nzbr,idisp,&
& psb_mpi_lpk_,icomm,minfo)
else
call mpi_gatherv(loc_coo%val,ndx,psb_mpi_c_dpk_,&
call mpi_gatherv((loc_coo%val),ndx,psb_mpi_c_dpk_,&
& glob_coo%val,nzbr,idisp,&
& psb_mpi_c_dpk_,root_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_gatherv(locia,ndx,psb_mpi_lpk_,&
& mpi_gatherv((locia),ndx,psb_mpi_lpk_,&
& glbia,nzbr,idisp,&
& psb_mpi_lpk_,root_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_gatherv(locja,ndx,psb_mpi_lpk_,&
& mpi_gatherv((locja),ndx,psb_mpi_lpk_,&
& glbja,nzbr,idisp,&
& psb_mpi_lpk_,root_,icomm,minfo)
@ -231,11 +232,12 @@ subroutine psb_lzsp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,kee
use psb_penv_mod
use psb_mat_mod
use psb_tools_mod
#ifdef MPI_MOD
use iso_c_binding
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
type(psb_zspmat_type), intent(inout) :: loca
@ -245,7 +247,7 @@ subroutine psb_lzsp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,kee
integer(psb_ipk_), intent(in), optional :: root, dupl
logical, intent(in), optional :: keepnum,keeploc
type(psb_lz_coo_sparse_mat) :: loc_coo, glob_coo
type(psb_lz_coo_sparse_mat), target :: loc_coo, glob_coo
integer(psb_lpk_) :: nrg, ncg, nzg
integer(psb_ipk_) :: err_act, dupl_
integer(psb_ipk_) :: ip,naggrm1,naggrp1, i, j, k, nzl
@ -337,27 +339,27 @@ subroutine psb_lzsp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,kee
enddo
ndx = nzbr(me+1)
if (root_ == -1) then
call mpi_allgatherv(loc_coo%val,ndx,psb_mpi_c_dpk_,&
call mpi_allgatherv((loc_coo%val),ndx,psb_mpi_c_dpk_,&
& glob_coo%val,nzbr,idisp,&
& psb_mpi_c_dpk_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_allgatherv(loc_coo%ia,ndx,psb_mpi_lpk_,&
& mpi_allgatherv((loc_coo%ia),ndx,psb_mpi_lpk_,&
& glob_coo%ia,nzbr,idisp,&
& psb_mpi_lpk_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_allgatherv(loc_coo%ja,ndx,psb_mpi_lpk_,&
& mpi_allgatherv((loc_coo%ja),ndx,psb_mpi_lpk_,&
& glob_coo%ja,nzbr,idisp,&
& psb_mpi_lpk_,icomm,minfo)
else
call mpi_gatherv(loc_coo%val,ndx,psb_mpi_c_dpk_,&
call mpi_gatherv((loc_coo%val),ndx,psb_mpi_c_dpk_,&
& glob_coo%val,nzbr,idisp,&
& psb_mpi_c_dpk_,root_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_gatherv(loc_coo%ia,ndx,psb_mpi_lpk_,&
& mpi_gatherv((loc_coo%ia),ndx,psb_mpi_lpk_,&
& glob_coo%ia,nzbr,idisp,&
& psb_mpi_lpk_,root_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_gatherv(loc_coo%ja,ndx,psb_mpi_lpk_,&
& mpi_gatherv((loc_coo%ja),ndx,psb_mpi_lpk_,&
& glob_coo%ja,nzbr,idisp,&
& psb_mpi_lpk_,root_,icomm,minfo)
end if
@ -369,7 +371,7 @@ subroutine psb_lzsp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,kee
call loc_coo%free()
!
! Is the code below safe? For very large cases
! the indices in glob_coo will overflow. But then,
! the indices in glob_coo will overflow. But then,
! for very large cases it does not make sense to
! gather the matrix on a single process anyway...
!
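The safety concern in the comment above is that the global nonzero count, and with it every MPI displacement, must still fit in a 4-byte signed integer. A hedged sketch of that check (the threshold is the int32 limit; the per-process counts are illustrative):

```python
INT32_MAX = 2**31 - 1  # largest value a 4-byte signed count/displacement can hold

def gather_is_safe(nz_per_process):
    """True when the total gathered size (and hence every
    displacement into the gathered buffer) fits in int32."""
    return sum(nz_per_process) <= INT32_MAX

print(gather_is_safe([10**9, 10**9]))      # 2e9 fits  -> True
print(gather_is_safe([2 * 10**9, 10**9]))  # 3e9 overflows -> False
```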
@ -403,11 +405,12 @@ subroutine psb_lzlzsp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,k
use psb_penv_mod
use psb_mat_mod
use psb_tools_mod
#ifdef MPI_MOD
use iso_c_binding
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
type(psb_lzspmat_type), intent(inout) :: loca
@ -417,7 +420,7 @@ subroutine psb_lzlzsp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,k
integer(psb_ipk_), intent(in), optional :: root, dupl
logical, intent(in), optional :: keepnum,keeploc
type(psb_lz_coo_sparse_mat) :: loc_coo, glob_coo
type(psb_lz_coo_sparse_mat), target :: loc_coo, glob_coo
integer(psb_lpk_) :: nrg, ncg, nzg
integer(psb_ipk_) :: err_act, dupl_
integer(psb_lpk_) :: ip,naggrm1,naggrp1, i, j, k, nzl
@ -507,27 +510,27 @@ subroutine psb_lzlzsp_allgather(globa, loca, desc_a, info, root, dupl,keepnum,k
ndx = nzbr(me+1)
if (root_ == -1) then
call mpi_allgatherv(loc_coo%val,ndx,psb_mpi_c_dpk_,&
call mpi_allgatherv((loc_coo%val),ndx,psb_mpi_c_dpk_,&
& glob_coo%val,nzbr,idisp,&
& psb_mpi_c_dpk_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_allgatherv(loc_coo%ia,ndx,psb_mpi_lpk_,&
& mpi_allgatherv((loc_coo%ia),ndx,psb_mpi_lpk_,&
& glob_coo%ia,nzbr,idisp,&
& psb_mpi_lpk_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_allgatherv(loc_coo%ja,ndx,psb_mpi_lpk_,&
& mpi_allgatherv((loc_coo%ja),ndx,psb_mpi_lpk_,&
& glob_coo%ja,nzbr,idisp,&
& psb_mpi_lpk_,icomm,minfo)
else
call mpi_gatherv(loc_coo%val,ndx,psb_mpi_c_dpk_,&
call mpi_gatherv((loc_coo%val),ndx,psb_mpi_c_dpk_,&
& glob_coo%val,nzbr,idisp,&
& psb_mpi_c_dpk_,root_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_gatherv(loc_coo%ia,ndx,psb_mpi_lpk_,&
& mpi_gatherv((loc_coo%ia),ndx,psb_mpi_lpk_,&
& glob_coo%ia,nzbr,idisp,&
& psb_mpi_lpk_,root_,icomm,minfo)
if (minfo == psb_success_) call &
& mpi_gatherv(loc_coo%ja,ndx,psb_mpi_lpk_,&
& mpi_gatherv((loc_coo%ja),ndx,psb_mpi_lpk_,&
& glob_coo%ja,nzbr,idisp,&
& psb_mpi_lpk_,root_,icomm,minfo)
end if

@ -55,12 +55,12 @@ subroutine psi_a2a_fnd_owner(idx,iprc,idxmap,info,samesize)
use psb_penv_mod
use psb_realloc_mod
use psb_indx_map_mod, psb_protect_name => psi_a2a_fnd_owner
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
integer(psb_lpk_), intent(in) :: idx(:)
@ -138,7 +138,7 @@ subroutine psi_a2a_fnd_owner(idx,iprc,idxmap,info,samesize)
call psb_errpush(psb_err_from_subroutine_,name,a_err='Allocate')
goto 9999
end if
#if defined(SERIAL_MPI)
#if defined(PSB_SERIAL_MPI)
iprc(:) = 0
#else
call mpi_allgather(idx,nv,psb_mpi_lpk_,rmtidx,nv,psb_mpi_lpk_,icomm,minfo)

@ -61,12 +61,12 @@ subroutine psi_adjcncy_fnd_owner(idx,iprc,adj,idxmap,info)
use psb_realloc_mod
use psb_timers_mod
use psb_indx_map_mod, psb_protect_name => psi_adjcncy_fnd_owner
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
integer(psb_lpk_), intent(in) :: idx(:)
@ -81,13 +81,13 @@ subroutine psi_adjcncy_fnd_owner(idx,iprc,adj,idxmap,info)
integer(psb_mpk_), allocatable :: hsz(:),hidx(:), sdidx(:), rvidx(:),&
& sdsz(:), rvsz(:), sdhd(:), rvhd(:), p2pstat(:,:)
integer(psb_mpk_) :: prc, p2ptag, iret
integer(psb_mpk_) :: icomm, minfo
integer(psb_ipk_) :: i,n_row,n_col,err_act,hsize,ip,isz,j, k,&
& last_ih, last_j, nidx, nrecv, nadj
integer(psb_mpk_) :: icomm, minfo, ip,nidx
integer(psb_ipk_) :: n_row,n_col,err_act,hsize,isz,j, k,&
& last_ih, last_j, nrecv, nadj
integer(psb_lpk_) :: mglob, ih
type(psb_ctxt_type) :: ctxt
integer(psb_ipk_) :: np,me
logical, parameter :: gettime=.true., debug=.false.
integer(psb_mpk_) :: np,me
logical, parameter :: debug=.false.
integer(psb_mpk_) :: xchg_alg
logical, parameter :: do_timings=.false.
integer(psb_ipk_), save :: idx_phase1=-1, idx_phase2=-1, idx_phase3=-1
@ -132,10 +132,6 @@ subroutine psi_adjcncy_fnd_owner(idx,iprc,adj,idxmap,info)
goto 9999
end if
if (gettime) then
t0 = psb_wtime()
end if
nadj = size(adj)
nidx = size(idx)
call psb_realloc(nidx,iprc,info)
@ -143,7 +139,7 @@ subroutine psi_adjcncy_fnd_owner(idx,iprc,adj,idxmap,info)
call psb_errpush(psb_err_from_subroutine_,name,a_err='psb_realloc')
goto 9999
end if
#if defined(SERIAL_MPI)
#if defined(PSB_SERIAL_MPI)
iprc(:) = 0
#else
iprc = -1
@ -180,8 +176,8 @@ subroutine psi_adjcncy_fnd_owner(idx,iprc,adj,idxmap,info)
if (do_timings) call psb_toc(idx_phase11)
if (do_timings) call psb_tic(idx_phase12)
rvidx(0) = 0
do i=0, np-1
rvidx(i+1) = rvidx(i) + rvsz(i)
do ip=0, np-1
rvidx(ip+1) = rvidx(ip) + rvsz(ip)
end do
hsize = rvidx(np)
@ -208,9 +204,9 @@ subroutine psi_adjcncy_fnd_owner(idx,iprc,adj,idxmap,info)
! Third, compute local answers
!
call idxmap%g2l(rmtidx(1:hsize),lclidx(1:hsize),info,owned=.true.)
do i=1, hsize
tproc(i) = -1
if ((0 < lclidx(i)).and. (lclidx(i) <= n_row)) tproc(i) = me
do ip=1, hsize
tproc(ip) = -1
if ((0 < lclidx(ip)).and. (lclidx(ip) <= n_row)) tproc(ip) = me
end do
if (do_timings) call psb_toc(idx_phase2)
if (do_timings) call psb_tic(idx_phase3)
@ -219,8 +215,8 @@ subroutine psi_adjcncy_fnd_owner(idx,iprc,adj,idxmap,info)
! Fourth, exchange the answers
!
! Adjust sdidx for reuse in receiving lclidx array
do i=0,np-1
sdidx(i+1) = sdidx(i) + sdsz(i)
do ip=0,np-1
sdidx(ip+1) = sdidx(ip) + sdsz(ip)
end do
call mpi_alltoallv(tproc,rvsz,rvidx,psb_mpi_ipk_,&
& lclidx,sdsz,sdidx,psb_mpi_ipk_,icomm,iret)
@ -229,10 +225,10 @@ subroutine psi_adjcncy_fnd_owner(idx,iprc,adj,idxmap,info)
! Because IPRC has been initialized to -1, the MAX operation selects
! the answers.
!
do i=0, np-1
if (sdsz(i)>0) then
do ip=0, np-1
if (sdsz(ip)>0) then
! Must be nidx == sdsz(ip)
iprc(1:nidx) = max(iprc(1:nidx), lclidx(sdidx(i)+1:sdidx(i)+sdsz(i)))
iprc(1:nidx) = max(iprc(1:nidx), lclidx(sdidx(ip)+1:sdidx(ip)+sdsz(ip)))
end if
end do
if (do_timings) call psb_toc(idx_phase3)
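The answer/combine scheme above works because each process replies with its own rank for indices it owns and -1 otherwise, while the requester initializes `iprc` to -1 and merges all replies with MAX. The pattern can be mimicked without MPI (ranks and ownership sets here are illustrative):

```python
def local_answer(rank, owned, queried):
    """Reply with own rank for owned indices, -1 otherwise."""
    return [rank if g in owned else -1 for g in queried]

def merge_answers(replies):
    """Element-wise MAX; -1 survives only for unowned indices."""
    iprc = [-1] * len(replies[0])
    for ans in replies:
        iprc = [max(a, b) for a, b in zip(iprc, ans)]
    return iprc

queried = [5, 11, 42]
replies = [local_answer(0, {5}, queried),
           local_answer(1, {11}, queried)]
print(merge_answers(replies))  # -> [0, 1, -1]
```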
@ -266,8 +262,8 @@ subroutine psi_adjcncy_fnd_owner(idx,iprc,adj,idxmap,info)
call mpi_alltoall(sdsz,1,psb_mpi_mpk_,&
& rvsz,1,psb_mpi_mpk_,icomm,minfo)
hidx(0) = 0
do i=0, np-1
hidx(i+1) = hidx(i) + rvsz(i)
do ip=0, np-1
hidx(ip+1) = hidx(ip) + rvsz(ip)
end do
hsize = hidx(np)
! write(0,*)me,' Check on sizes from a2a:',hsize,rvsz(:)
@ -280,22 +276,23 @@ subroutine psi_adjcncy_fnd_owner(idx,iprc,adj,idxmap,info)
call psb_errpush(psb_err_from_subroutine_,name,a_err='Allocate')
goto 9999
end if
do i = 0, np-1
if (rvsz(i)>0) then
do ip = 0, np-1
if (rvsz(ip)>0) then
! write(0,*) me, ' First receive from ',ip,rvsz(ip)
prc = psb_get_mpi_rank(ctxt,i)
prc = psb_get_mpi_rank(ctxt,ip)
p2ptag = psb_long_swap_tag
!write(0,*) me, ' Posting first receive from ',ip,rvsz(ip),prc
call mpi_irecv(rmtidx(hidx(i)+1),rvsz(i),&
call mpi_irecv(rmtidx(hidx(ip)+1),rvsz(ip),&
& psb_mpi_lpk_,prc,&
& p2ptag, icomm,rvhd(i),iret)
& p2ptag, icomm,rvhd(ip),iret)
end if
end do
if (do_timings) call psb_toc(idx_phase11)
if (do_timings) call psb_tic(idx_phase12)
do j=1, nadj
if (nidx > 0) then
prc = psb_get_mpi_rank(ctxt,adj(j))
ip = adj(j)
prc = psb_get_mpi_rank(ctxt,ip)
p2ptag = psb_long_swap_tag
!write(0,*) me, ' First send to ',adj(j),nidx, prc
call mpi_send(idx,nidx,&
@ -314,9 +311,9 @@ subroutine psi_adjcncy_fnd_owner(idx,iprc,adj,idxmap,info)
! Third, compute local answers
!
call idxmap%g2l(rmtidx(1:hsize),lclidx(1:hsize),info,owned=.true.)
do i=1, hsize
tproc(i) = -1
if ((0 < lclidx(i)).and. (lclidx(i) <= n_row)) tproc(i) = me
do ip=1, hsize
tproc(ip) = -1
if ((0 < lclidx(ip)).and. (lclidx(ip) <= n_row)) tproc(ip) = me
end do
if (do_timings) call psb_toc(idx_phase2)
if (do_timings) call psb_tic(idx_phase3)
@ -327,7 +324,8 @@ subroutine psi_adjcncy_fnd_owner(idx,iprc,adj,idxmap,info)
do j=1, nadj
!write(0,*) me, ' First send to ',adj(j),nidx
if (nidx > 0) then
prc = psb_get_mpi_rank(ctxt,adj(j))
ip = adj(j)
prc = psb_get_mpi_rank(ctxt,ip)
p2ptag = psb_int_swap_tag
!write(0,*) me, ' Posting second receive from ',adj(j),nidx, prc
call mpi_irecv(lclidx((j-1)*nidx+1),nidx, &
@ -339,12 +337,12 @@ subroutine psi_adjcncy_fnd_owner(idx,iprc,adj,idxmap,info)
!
! Fourth, send data back;
!
do i = 0, np-1
if (rvsz(i)>0) then
prc = psb_get_mpi_rank(ctxt,i)
do ip = 0, np-1
if (rvsz(ip)>0) then
prc = psb_get_mpi_rank(ctxt,ip)
p2ptag = psb_int_swap_tag
!write(0,*) me, ' Second send to ',ip,rvsz(ip), prc
call mpi_send(tproc(hidx(i)+1),rvsz(i),&
call mpi_send(tproc(hidx(ip)+1),rvsz(ip),&
& psb_mpi_ipk_,prc,&
& p2ptag, icomm,iret)
end if
@ -376,8 +374,8 @@ subroutine psi_adjcncy_fnd_owner(idx,iprc,adj,idxmap,info)
call mpi_alltoall(sdsz,1,psb_mpi_mpk_,&
& rvsz,1,psb_mpi_mpk_,icomm,minfo)
hidx(0) = 0
do i=0, np-1
hidx(i+1) = hidx(i) + rvsz(i)
do ip=0, np-1
hidx(ip+1) = hidx(ip) + rvsz(ip)
end do
hsize = hidx(np)
! write(0,*)me,' Check on sizes from a2a:',hsize,rvsz(:)
@ -392,12 +390,13 @@ subroutine psi_adjcncy_fnd_owner(idx,iprc,adj,idxmap,info)
end if
do j=1, nadj
!write(0,*) me, ' First send to ',adj(j),nidx
if (nidx > 0) call psb_snd(ctxt,idx(1:nidx),adj(j))
ip = adj(j)
if (nidx > 0) call psb_snd(ctxt,idx(1:nidx),ip)
end do
do i = 0, np-1
if (rvsz(i)>0) then
do ip = 0, np-1
if (rvsz(ip)>0) then
! write(0,*) me, ' First receive from ',ip,rvsz(ip)
call psb_rcv(ctxt,rmtidx(hidx(i)+1:hidx(i)+rvsz(i)),i)
call psb_rcv(ctxt,rmtidx(hidx(ip)+1:hidx(ip)+rvsz(ip)),ip)
end if
end do
@ -405,18 +404,18 @@ subroutine psi_adjcncy_fnd_owner(idx,iprc,adj,idxmap,info)
! Third, compute local answers
!
call idxmap%g2l(rmtidx(1:hsize),lclidx(1:hsize),info,owned=.true.)
do i=1, hsize
tproc(i) = -1
if ((0 < lclidx(i)).and. (lclidx(i) <= n_row)) tproc(i) = me
do ip=1, hsize
tproc(ip) = -1
if ((0 < lclidx(ip)).and. (lclidx(ip) <= n_row)) tproc(ip) = me
end do
!
! Fourth, send data back;
!
do i = 0, np-1
if (rvsz(i)>0) then
do ip = 0, np-1
if (rvsz(ip)>0) then
!write(0,*) me, ' Second send to ',ip,rvsz(ip)
call psb_snd(ctxt,tproc(hidx(i)+1:hidx(i)+rvsz(i)),i)
call psb_snd(ctxt,tproc(hidx(ip)+1:hidx(ip)+rvsz(ip)),ip)
end if
end do
!
@ -424,8 +423,9 @@ subroutine psi_adjcncy_fnd_owner(idx,iprc,adj,idxmap,info)
! answer is -1. Reuse tproc
!
do j = 1, nadj
!write(0,*) me, ' Second receive from ',adj(j), nidx
if (nidx > 0) call psb_rcv(ctxt,tproc(1:nidx),adj(j))
!write(0,*) me, ' Second receive from ',adj(j), nidx
ip = adj(j)
if (nidx > 0) call psb_rcv(ctxt,tproc(1:nidx),ip)
iprc(1:nidx) = max(iprc(1:nidx), tproc(1:nidx))
end do
case default

@ -31,7 +31,7 @@
!
subroutine psi_i_bld_glb_dep_list(ctxt,loc_dl,length_dl,c_dep_list,dl_ptr,info)
use psi_mod, psb_protect_name => psi_i_bld_glb_dep_list
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
use psb_penv_mod
@ -40,13 +40,15 @@ subroutine psi_i_bld_glb_dep_list(ctxt,loc_dl,length_dl,c_dep_list,dl_ptr,info)
use psb_desc_mod
use psb_sort_mod
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
! ....scalar parameters...
type(psb_ctxt_type), intent(in) :: ctxt
integer(psb_ipk_), intent(in) :: loc_dl(:), length_dl(0:)
integer(psb_ipk_), allocatable, intent(out) :: c_dep_list(:), dl_ptr(:)
integer(psb_ipk_), intent(in) :: loc_dl(:)
integer(psb_mpk_), intent(in) :: length_dl(0:)
integer(psb_mpk_), allocatable, intent(out) :: dl_ptr(:)
integer(psb_ipk_), allocatable, intent(out) :: c_dep_list(:)
integer(psb_ipk_), intent(out) :: info
@ -54,10 +56,11 @@ subroutine psi_i_bld_glb_dep_list(ctxt,loc_dl,length_dl,c_dep_list,dl_ptr,info)
integer(psb_ipk_) :: int_err(5)
! .....local scalars...
integer(psb_ipk_) :: i, proc,j,err_act, length, myld
integer(psb_mpk_) :: myld
integer(psb_ipk_) :: i, proc,j,err_act, length
integer(psb_ipk_) :: err
integer(psb_ipk_) :: debug_level, debug_unit
integer(psb_ipk_) :: me, np
integer(psb_mpk_) :: me, np
integer(psb_mpk_) :: icomm, minfo
logical, parameter :: dist_symm_list=.false., print_dl=.false.
character name*20

@ -67,8 +67,8 @@ subroutine psi_i_crea_index(desc_a,index_in,index_out,nxch,nsnd,nrcv,info)
type(psb_ctxt_type) :: ctxt
integer(psb_ipk_) :: me, np, mode, err_act, dl_lda, ldl
! ...parameters...
integer(psb_ipk_), allocatable :: length_dl(:), loc_dl(:),&
& c_dep_list(:), dl_ptr(:)
integer(psb_mpk_), allocatable :: length_dl(:), dl_ptr(:)
integer(psb_ipk_), allocatable :: loc_dl(:), c_dep_list(:)
integer(psb_ipk_) :: dlmax, dlavg
integer(psb_ipk_),parameter :: root=psb_root_,no_comm=-1
integer(psb_ipk_) :: debug_level, debug_unit
@ -132,7 +132,7 @@ subroutine psi_i_crea_index(desc_a,index_in,index_out,nxch,nsnd,nrcv,info)
if (do_timings) call psb_toc(idx_phase21)
if (do_timings) call psb_tic(idx_phase22)
call psi_sort_dl(dl_ptr,c_dep_list,length_dl,ctxt,info)
call psi_i_csr_sort_dl(dl_ptr,c_dep_list,length_dl,ctxt,info)
if (info /= 0) then
write(0,*) me,trim(name),' From sort_dl ',info
end if

@ -101,14 +101,14 @@ subroutine psi_i_desc_index(desc,index_in,dep_list,&
use psb_realloc_mod
use psb_error_mod
use psb_const_mod
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
use psb_penv_mod
use psb_timers_mod
use psi_mod, psb_protect_name => psi_i_desc_index
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
@ -119,7 +119,8 @@ subroutine psi_i_desc_index(desc,index_in,dep_list,&
integer(psb_ipk_),allocatable :: desc_index(:)
integer(psb_ipk_) :: length_dl,nsnd,nrcv,info
! ....local scalars...
integer(psb_ipk_) :: j,me,np,i,proc
integer(psb_mpk_) :: me,np,proc
integer(psb_ipk_) :: j,i
! ...parameters...
type(psb_ctxt_type) :: ctxt
integer(psb_ipk_), parameter :: no_comm=-1
@ -137,7 +138,7 @@ subroutine psi_i_desc_index(desc,index_in,dep_list,&
& idxr, idxs, iszs, iszr, nesd, nerv, ixp, idx
integer(psb_mpk_) :: icomm, minfo
logical, parameter :: do_timings=.true., oldstyle=.false., debug=.false.
logical, parameter :: do_timings=.false., oldstyle=.false., debug=.false.
integer(psb_ipk_), save :: idx_phase1=-1, idx_phase2=-1, idx_phase3=-1, idx_phase4=-1
logical, parameter :: usempi=.false.
integer(psb_ipk_) :: debug_level, debug_unit

@ -54,12 +54,12 @@ subroutine psi_fnd_owner(nv,idx,iprc,desc,info)
use psb_penv_mod
use psb_realloc_mod
use psi_mod, psb_protect_name => psi_fnd_owner
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
integer(psb_ipk_), intent(in) :: nv

@ -86,12 +86,12 @@ subroutine psi_graph_fnd_owner(idx,iprc,ladj,idxmap,info)
use psb_realloc_mod
use psb_timers_mod
use psb_desc_mod, psb_protect_name => psi_graph_fnd_owner
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
integer(psb_lpk_), intent(in) :: idx(:)
@ -152,7 +152,7 @@ subroutine psi_graph_fnd_owner(idx,iprc,ladj,idxmap,info)
!
nv = size(idx)
call psb_realloc(nv,iprc,info)
#if defined(SERIAL_MPI)
#if defined(PSB_SERIAL_MPI)
iprc(:) = 0
#else
if (info == psb_success_) call psb_realloc(nv,tidx,info)
@ -237,7 +237,7 @@ subroutine psi_graph_fnd_owner(idx,iprc,ladj,idxmap,info)
! Choose a sample; should it be done in this simplistic way?
! Note: nsampl_in is a hint, not an absolute, hence nsampl_out
!
call psi_get_sample(1,idx,iprc,tidx,tsmpl,iend,nsampl_in,nsampl_out)
call psi_get_sample(ione,idx,iprc,tidx,tsmpl,iend,nsampl_in,nsampl_out)
nsampl = min(nsampl_out,nsampl_in)
if (debugsz) write(0,*) me,' From first sampling ',nsampl_in
!
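The sampling step above treats `nsampl_in` as a hint: it collects at most that many indices whose owner is still unknown (`iprc == -1`) and reports how many it actually found in `nsampl_out`. A simplified sketch (function and variable names are illustrative, not the library's API):

```python
def get_sample(idx, iprc, nsampl_in):
    """Pick up to nsampl_in indices whose owner is still unknown."""
    sample = [g for g, p in zip(idx, iprc) if p == -1][:nsampl_in]
    return sample, len(sample)  # nsampl_out may be < nsampl_in

sample, nsampl_out = get_sample([7, 8, 9, 10], [-1, 2, -1, -1], 2)
print(sample, nsampl_out)  # -> [7, 9] 2
```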
@ -291,7 +291,7 @@ subroutine psi_graph_fnd_owner(idx,iprc,ladj,idxmap,info)
9999 call psb_error_handler(ctxt,err_act)
return
#if !defined(SERIAL_MPI)
#if !defined(PSB_SERIAL_MPI)
contains

@ -58,12 +58,12 @@ subroutine psi_indx_map_fnd_owner(idx,iprc,idxmap,info,adj)
use psb_penv_mod
use psb_realloc_mod
use psb_indx_map_mod, psb_protect_name => psi_indx_map_fnd_owner
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
integer(psb_lpk_), intent(in) :: idx(:)

@ -84,8 +84,8 @@ subroutine psi_i_csr_sort_dl(dl_ptr,c_dep_list,l_dep_list,ctxt,info)
use psb_sort_mod
implicit none
integer(psb_ipk_), intent(in) :: dl_ptr(0:)
integer(psb_ipk_), intent(inout) :: c_dep_list(:), l_dep_list(0:)
integer(psb_mpk_), intent(in) :: dl_ptr(0:), l_dep_list(0:)
integer(psb_ipk_), intent(inout) :: c_dep_list(:)
type(psb_ctxt_type), intent(in) :: ctxt
integer(psb_ipk_), intent(out) :: info
! Local variables

@ -44,12 +44,12 @@ subroutine psi_symm_dep_list_inrv(rvsz,adj,ctxt,info)
use psb_penv_mod
use psb_realloc_mod
use psb_indx_map_mod, psb_protect_name => psi_symm_dep_list_inrv
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
integer(psb_mpk_), intent(inout) :: rvsz(0:)
@ -123,12 +123,12 @@ subroutine psi_symm_dep_list_norv(adj,ctxt,info)
use psb_penv_mod
use psb_realloc_mod
use psb_indx_map_mod, psb_protect_name => psi_symm_dep_list_norv
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
integer(psb_ipk_), allocatable, intent(inout) :: adj(:)

@ -109,7 +109,7 @@ subroutine psi_i_xtr_loc_dl(ctxt,is_bld,is_upd,desc_str,loc_dl,length_dl,info)
! dependence list of current process
!
use psi_mod, psb_protect_name => psi_i_xtr_loc_dl
#ifdef MPI_MOD
#ifdef PSB_MPI_MOD
use mpi
#endif
use psb_penv_mod
@ -118,14 +118,15 @@ subroutine psi_i_xtr_loc_dl(ctxt,is_bld,is_upd,desc_str,loc_dl,length_dl,info)
use psb_desc_mod
use psb_sort_mod
implicit none
#ifdef MPI_H
#ifdef PSB_MPI_H
include 'mpif.h'
#endif
! ....scalar parameters...
logical, intent(in) :: is_bld, is_upd
type(psb_ctxt_type), intent(in) :: ctxt
integer(psb_ipk_), intent(in) :: desc_str(:)
integer(psb_ipk_), allocatable, intent(out) :: loc_dl(:), length_dl(:)
integer(psb_ipk_), allocatable, intent(out) :: loc_dl(:)
integer(psb_mpk_), allocatable, intent(out) :: length_dl(:)
integer(psb_ipk_), intent(out) :: info
! .....local arrays....
integer(psb_ipk_) :: int_err(5)

@ -10,13 +10,14 @@ BASIC_MODS= psb_const_mod.o psb_cbind_const_mod.o psb_error_mod.o psb_realloc_mo
auxil/psb_z_realloc_mod.o
COMMINT= penv/psi_penv_mod.o \
penv/psi_p2p_mod.o penv/psi_m_p2p_mod.o \
penv/psi_p2p_mod.o penv/psi_m_p2p_mod.o penv/psi_i2_p2p_mod.o \
penv/psi_e_p2p_mod.o \
penv/psi_s_p2p_mod.o \
penv/psi_d_p2p_mod.o \
penv/psi_c_p2p_mod.o \
penv/psi_z_p2p_mod.o \
penv/psi_collective_mod.o \
penv/psi_i2_collective_mod.o \
penv/psi_e_collective_mod.o \
penv/psi_m_collective_mod.o \
penv/psi_s_collective_mod.o \
@ -115,6 +116,7 @@ UTIL_MODS = desc/psb_desc_const_mod.o desc/psb_indx_map_mod.o\
MODULES=$(BASIC_MODS) $(SERIAL_MODS) $(UTIL_MODS)
OBJS = error.o psb_base_mod.o $(EXTRA_COBJS) cutil.o
MODDIR=../../modules
INCDIR=../../include
LIBDIR=../
CINCLUDES=-I.
FINCLUDES=$(FMFLAG)$(LIBDIR) $(FMFLAG). $(FIFLAG).
@ -122,6 +124,7 @@ FINCLUDES=$(FMFLAG)$(LIBDIR) $(FMFLAG). $(FIFLAG).
objs: $(MODULES) $(OBJS) $(MPFOBJS)
/bin/cp -p $(CPUPDFLAG) *$(.mod) $(MODDIR)
/bin/cp -p $(CPUPDFLAG) psb_config.h psb_types.h $(INCDIR)
lib: objs $(LIBDIR)/$(LIBNAME)
@ -149,23 +152,25 @@ psb_realloc_mod.o: auxil/psb_m_realloc_mod.o \
auxil/psb_c_realloc_mod.o \
auxil/psb_z_realloc_mod.o
penv/psi_p2p_mod.o: penv/psi_m_p2p_mod.o \
penv/psi_e_p2p_mod.o \
penv/psi_s_p2p_mod.o \
penv/psi_d_p2p_mod.o \
penv/psi_c_p2p_mod.o \
penv/psi_z_p2p_mod.o
penv/psi_collective_mod.o: penv/psi_e_collective_mod.o \
penv/psi_m_collective_mod.o \
penv/psi_s_collective_mod.o \
penv/psi_d_collective_mod.o \
penv/psi_c_collective_mod.o \
penv/psi_z_collective_mod.o
penv/psi_m_p2p_mod.o penv/psi_e_p2p_mod.o penv/psi_s_p2p_mod.o \
penv/psi_p2p_mod.o: penv/psi_i2_p2p_mod.o \
penv/psi_m_p2p_mod.o \
penv/psi_e_p2p_mod.o \
penv/psi_s_p2p_mod.o \
penv/psi_d_p2p_mod.o \
penv/psi_c_p2p_mod.o \
penv/psi_z_p2p_mod.o
penv/psi_collective_mod.o: penv/psi_i2_collective_mod.o \
penv/psi_e_collective_mod.o \
penv/psi_m_collective_mod.o \
penv/psi_s_collective_mod.o \
penv/psi_d_collective_mod.o \
penv/psi_c_collective_mod.o \
penv/psi_z_collective_mod.o
penv/psi_i2_p2p_mod.o penv/psi_m_p2p_mod.o penv/psi_e_p2p_mod.o penv/psi_s_p2p_mod.o \
penv/psi_d_p2p_mod.o penv/psi_c_p2p_mod.o penv/psi_z_p2p_mod.o: penv/psi_penv_mod.o
penv/psi_e_collective_mod.o penv/psi_m_collective_mod.o penv/psi_s_collective_mod.o \
penv/psi_i2_collective_mod.o penv/psi_e_collective_mod.o penv/psi_m_collective_mod.o penv/psi_s_collective_mod.o \
penv/psi_d_collective_mod.o penv/psi_c_collective_mod.o penv/psi_z_collective_mod.o: penv/psi_penv_mod.o \
penv/psi_m_p2p_mod.o penv/psi_e_p2p_mod.o penv/psi_s_p2p_mod.o \
penv/psi_d_p2p_mod.o penv/psi_c_p2p_mod.o penv/psi_z_p2p_mod.o
@ -319,7 +324,7 @@ desc/psb_hash_map_mod.o desc/psb_list_map_mod.o desc/psb_repl_map_mod.o desc/psb
desc/psb_indx_map_mod.o desc/psb_desc_const_mod.o \
auxil/psb_sort_mod.o psb_penv_mod.o
desc/psb_glist_map_mod.o: desc/psb_list_map_mod.o
desc/psb_hash_map_mod.o: desc/psb_hash_mod.o auxil/psb_sort_mod.o
desc/psb_hash_map_mod.o: desc/psb_hash_mod.o auxil/psb_sort_mod.o psb_timers_mod.o
desc/psb_gen_block_map_mod.o: desc/psb_hash_mod.o
desc/psb_hash_mod.o: psb_cbind_const_mod.o
psb_cbind_const_mod.o: psb_const_mod.o
@ -412,7 +417,8 @@ penv/psi_collective_mod.o: penv/psi_collective_mod.F90 $(BASIC_MODS)
$(FC) $(FINCLUDES) $(FDEFINES) $(FCOPT) $(EXTRA_OPT) -c $< -o $@
clean:
/bin/rm -f $(MODULES) $(OBJS) $(MPFOBJS) *$(.mod)
/bin/rm -f $(MODULES) $(OBJS) $(MPFOBJS) *$(.mod)
veryclean: clean
/bin/rm -f *.h

@ -154,30 +154,32 @@ Contains
end if
ub_ = lb_ + len-1
#if defined(PSB_OPENMP)
!$omp critical(r_m_c_rk1)
#endif
if (allocated(rrax)) then
dim = size(rrax)
lbi = lbound(rrax,1)
If ((dim /= len).or.(lbi /= lb_)) Then
Allocate(tmp(lb_:ub_),stat=info)
if (info /= psb_success_) then
err=4025
call psb_errpush(err,name, l_err=(/len*1_psb_lpk_/), &
& a_err='complex(psb_spk_)')
goto 9999
if (info == psb_success_) then
tmp(lb_:lb_-1+min(len,dim))=rrax(lbi:lbi-1+min(len,dim))
call psb_move_alloc(tmp,rrax,info)
end if
tmp(lb_:lb_-1+min(len,dim))=rrax(lbi:lbi-1+min(len,dim))
call psb_move_alloc(tmp,rrax,info)
End If
else
dim = 0
Allocate(rrax(lb_:ub_),stat=info)
if (info /= psb_success_) then
err=4025
call psb_errpush(err,name, l_err=(/len*1_psb_lpk_/), &
& a_err='complex(psb_spk_)')
goto 9999
end if
endif
#if defined(PSB_OPENMP)
!$omp end critical(r_m_c_rk1)
#endif
if (info /= psb_success_) then
err=4025
call psb_errpush(err,name, l_err=(/len*1_psb_lpk_/), &
& a_err='complex(psb_spk_)')
goto 9999
end if
if (present(pad)) then
!$omp parallel do private(i) shared(dim,len)
do i=lb_-1+dim+1,lb_-1+len
@ -239,7 +241,9 @@ Contains
goto 9999
end if
#if defined(PSB_OPENMP)
!$omp critical(r_m_c_rk2)
#endif
if (allocated(rrax)) then
dim = size(rrax,1)
lbi1 = lbound(rrax,1)
@ -248,27 +252,26 @@ Contains
If ((dim /= len1).or.(dim2 /= len2).or.(lbi1 /= lb1_)&
& .or.(lbi2 /= lb2_)) Then
Allocate(tmp(lb1_:ub1_,lb2_:ub2_),stat=info)
if (info /= psb_success_) then
err=4025
call psb_errpush(err,name, l_err=(/len1*1_psb_lpk_*len2/), &
& a_err='complex(psb_spk_)')
goto 9999
if (info == psb_success_) then
tmp(lb1_:lb1_-1+min(len1,dim),lb2_:lb2_-1+min(len2,dim2)) = &
& rrax(lbi1:lbi1-1+min(len1,dim),lbi2:lbi2-1+min(len2,dim2))
call psb_move_alloc(tmp,rrax,info)
end if
tmp(lb1_:lb1_-1+min(len1,dim),lb2_:lb2_-1+min(len2,dim2)) = &
& rrax(lbi1:lbi1-1+min(len1,dim),lbi2:lbi2-1+min(len2,dim2))
call psb_move_alloc(tmp,rrax,info)
End If
else
dim = 0
dim2 = 0
Allocate(rrax(lb1_:ub1_,lb2_:ub2_),stat=info)
if (info /= psb_success_) then
err=4025
call psb_errpush(err,name, l_err=(/len1*1_psb_lpk_*len2/), &
& a_err='complex(psb_spk_)')
goto 9999
end if
endif
#if defined(PSB_OPENMP)
!$omp end critical(r_m_c_rk2)
#endif
if (info /= psb_success_) then
err=4025
call psb_errpush(err,name, l_err=(/len1*1_psb_lpk_*len2/), &
& a_err='complex(psb_spk_)')
goto 9999
end if
if (present(pad)) then
!$omp parallel do private(i) shared(lb1_,dim,len1)
do i=lb1_-1+dim+1,lb1_-1+len1
@ -325,30 +328,33 @@ Contains
end if
ub_ = lb_ + len-1
#if defined(PSB_OPENMP)
!$omp critical(r_e_c_rk1)
#endif
if (allocated(rrax)) then
dim = size(rrax)
lbi = lbound(rrax,1)
If ((dim /= len).or.(lbi /= lb_)) Then
Allocate(tmp(lb_:ub_),stat=info)
if (info /= psb_success_) then
err=4025
call psb_errpush(err,name, e_err=(/len/), &
& a_err='complex(psb_spk_)')
goto 9999
if (info == psb_success_) then
tmp(lb_:lb_-1+min(len,dim))=rrax(lbi:lbi-1+min(len,dim))
call psb_move_alloc(tmp,rrax,info)
end if
tmp(lb_:lb_-1+min(len,dim))=rrax(lbi:lbi-1+min(len,dim))
call psb_move_alloc(tmp,rrax,info)
End If
else
dim = 0
Allocate(rrax(lb_:ub_),stat=info)
if (info /= psb_success_) then
err=4025
call psb_errpush(err,name, e_err=(/len/), &
& a_err='complex(psb_spk_)')
goto 9999
end if
endif
#if defined(PSB_OPENMP)
!$omp end critical(r_e_c_rk1)
#endif
if (info /= psb_success_) then
err=4025
call psb_errpush(err,name, e_err=(/len/), &
& a_err='complex(psb_spk_)')
goto 9999
end if
if (present(pad)) then
rrax(lb_-1+dim+1:lb_-1+len) = pad
endif
@ -407,7 +413,9 @@ Contains
goto 9999
end if
#if defined(PSB_OPENMP)
!$omp critical(r_e_c_rk2)
#endif
if (allocated(rrax)) then
dim = size(rrax,1)
lbi1 = lbound(rrax,1)
@ -416,27 +424,26 @@ Contains
If ((dim /= len1).or.(dim2 /= len2).or.(lbi1 /= lb1_)&
& .or.(lbi2 /= lb2_)) Then
Allocate(tmp(lb1_:ub1_,lb2_:ub2_),stat=info)
if (info /= psb_success_) then
err=4025
call psb_errpush(err,name, e_err=(/(len1*len2)/), &
& a_err='complex(psb_spk_)')
goto 9999
if (info == psb_success_) then
tmp(lb1_:lb1_-1+min(len1,dim),lb2_:lb2_-1+min(len2,dim2)) = &
& rrax(lbi1:lbi1-1+min(len1,dim),lbi2:lbi2-1+min(len2,dim2))
call psb_move_alloc(tmp,rrax,info)
end if
tmp(lb1_:lb1_-1+min(len1,dim),lb2_:lb2_-1+min(len2,dim2)) = &
& rrax(lbi1:lbi1-1+min(len1,dim),lbi2:lbi2-1+min(len2,dim2))
call psb_move_alloc(tmp,rrax,info)
End If
else
dim = 0
dim2 = 0
Allocate(rrax(lb1_:ub1_,lb2_:ub2_),stat=info)
if (info /= psb_success_) then
err=4025
call psb_errpush(err,name, e_err=(/(len1*len2)/), &
& a_err='complex(psb_spk_)')
goto 9999
end if
endif
#if defined(PSB_OPENMP)
!$omp end critical(r_e_c_rk2)
#endif
if (info /= psb_success_) then
err=4025
call psb_errpush(err,name, e_err=(/(len1*len2)/), &
& a_err='complex(psb_spk_)')
goto 9999
end if
if (present(pad)) then
rrax(lb1_-1+dim+1:lb1_-1+len1,:) = pad
rrax(lb1_:lb1_-1+dim,lb2_-1+dim2+1:lb2_-1+len2) = pad
@ -498,7 +505,9 @@ Contains
goto 9999
end if
#if defined(PSB_OPENMP)
!$omp critical(r_me_c_rk2)
#endif
if (allocated(rrax)) then
dim = size(rrax,1)
lbi1 = lbound(rrax,1)
@ -507,27 +516,28 @@ Contains
If ((dim /= len1).or.(dim2 /= len2).or.(lbi1 /= lb1_)&
& .or.(lbi2 /= lb2_)) Then
Allocate(tmp(lb1_:ub1_,lb2_:ub2_),stat=info)
if (info /= psb_success_) then
err=4025
call psb_errpush(err,name, e_err=(/len1*len2/), &
& a_err='complex(psb_spk_)')
goto 9999
if (info == psb_success_) then
tmp(lb1_:lb1_-1+min(len1,dim),lb2_:lb2_-1+min(len2,dim2)) = &
& rrax(lbi1:lbi1-1+min(len1,dim),lbi2:lbi2-1+min(len2,dim2))
call psb_move_alloc(tmp,rrax,info)
end if
tmp(lb1_:lb1_-1+min(len1,dim),lb2_:lb2_-1+min(len2,dim2)) = &
& rrax(lbi1:lbi1-1+min(len1,dim),lbi2:lbi2-1+min(len2,dim2))
call psb_move_alloc(tmp,rrax,info)
End If
else
dim = 0
dim2 = 0
Allocate(rrax(lb1_:ub1_,lb2_:ub2_),stat=info)
if (info /= psb_success_) then
err=4025
call psb_errpush(err,name,e_err=(/len1*len2/),&
& a_err='complex(psb_spk_)')
goto 9999
end if
endif
#if defined(PSB_OPENMP)
!$omp end critical(r_me_c_rk2)
#endif
if (info /= psb_success_) then
err=4025
call psb_errpush(err,name, e_err=(/len1*len2/), &
& a_err='complex(psb_spk_)')
goto 9999
end if
if (present(pad)) then
rrax(lb1_-1+dim+1:lb1_-1+len1,:) = pad
rrax(lb1_:lb1_-1+dim,lb2_-1+dim2+1:lb2_-1+len2) = pad
@ -589,7 +599,9 @@ Contains
goto 9999
end if
#if defined(PSB_OPENMP)
!$omp critical(r_em_c_rk2)
#endif
if (allocated(rrax)) then
dim = size(rrax,1)
lbi1 = lbound(rrax,1)
@ -598,27 +610,26 @@ Contains
If ((dim /= len1).or.(dim2 /= len2).or.(lbi1 /= lb1_)&
& .or.(lbi2 /= lb2_)) Then
Allocate(tmp(lb1_:ub1_,lb2_:ub2_),stat=info)
if (info /= psb_success_) then
err=4025
call psb_errpush(err,name, e_err=(/len1*len2/), &
& a_err='complex(psb_spk_)')
goto 9999
if (info == psb_success_) then
tmp(lb1_:lb1_-1+min(len1,dim),lb2_:lb2_-1+min(len2,dim2)) = &
& rrax(lbi1:lbi1-1+min(len1,dim),lbi2:lbi2-1+min(len2,dim2))
call psb_move_alloc(tmp,rrax,info)
end if
tmp(lb1_:lb1_-1+min(len1,dim),lb2_:lb2_-1+min(len2,dim2)) = &
& rrax(lbi1:lbi1-1+min(len1,dim),lbi2:lbi2-1+min(len2,dim2))
call psb_move_alloc(tmp,rrax,info)
End If
else
dim = 0
dim2 = 0
Allocate(rrax(lb1_:ub1_,lb2_:ub2_),stat=info)
if (info /= psb_success_) then
err=4025
call psb_errpush(err,name, e_err=(/len1*len2/), &
& a_err='complex(psb_spk_)')
goto 9999
end if
endif
#if defined(PSB_OPENMP)
!$omp end critical(r_em_c_rk2)
#endif
if (info /= psb_success_) then
err=4025
call psb_errpush(err,name, e_err=(/len1*len2/), &
& a_err='complex(psb_spk_)')
goto 9999
end if
if (present(pad)) then
rrax(lb1_-1+dim+1:lb1_-1+len1,:) = pad
rrax(lb1_:lb1_-1+dim,lb2_-1+dim2+1:lb2_-1+len2) = pad
@ -715,8 +726,6 @@ Contains
End Subroutine psb_r_e_2_c_rk1
subroutine psb_ab_cpy_c_s(vin,vout,info)
use psb_error_mod
@ -999,8 +1008,9 @@ Contains
isz = psb_size(v)
If (len > isz) Then
#if defined(OPENMP)
!$OMP CRITICAL
#if defined(PSB_OPENMP)
!$omp critical(m_sz_c_rk1)
isz = psb_size(v)
if (len > isz) then
if (present(newsz)) then
isz = max(len+1,1,newsz)
@ -1012,7 +1022,9 @@ Contains
call psb_realloc(isz,v,info,pad=pad)
end if
!$OMP END CRITICAL
if (info /= psb_success_) &
& write(0,*) 'Error from realloc ',info,len,isz
!$omp end critical(m_sz_c_rk1)
if (info /= psb_success_) then
info=psb_err_from_subroutine_
@ -1028,7 +1040,6 @@ Contains
else
isz = max(len,1,int(1.25*isz))
endif
call psb_realloc(isz,v,info,pad=pad)
end if
@ -1075,6 +1086,28 @@ Contains
end if
isz = psb_size(v)
If (len > isz) Then
#if defined(PSB_OPENMP)
!$omp critical(e_sz_c_rk1)
isz = psb_size(v)
If (len > isz) Then
if (present(newsz)) then
isz = max(len+1,1,newsz)
else if (present(addsz)) then
isz = max(len,1,isz+addsz)
else
isz = max(len,1,int(1.25*isz))
endif
call psb_realloc(isz,v,info,pad=pad)
end If
if (info /= psb_success_)&
& write(0,*) 'Error from realloc ',info,len,isz
!$omp end critical(e_sz_c_rk1)
if (info /= psb_success_) then
info=psb_err_from_subroutine_
call psb_errpush(info,name,a_err='psb_realloc')
goto 9999
End If
#else
if (present(newsz)) then
isz = max(len+1,1,newsz)
else if (present(addsz)) then
@ -1082,13 +1115,13 @@ Contains
else
isz = max(len,1,int(1.25*isz))
endif
call psb_realloc(isz,v,info,pad=pad)
if (info /= psb_success_) then
info=psb_err_from_subroutine_
call psb_errpush(info,name,a_err='psb_realloc')
goto 9999
End If
#endif
end If
call psb_erractionrestore(err_act)
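The `psb_ensure_size` hunks above follow a double-checked pattern: the size is tested once without the lock (fast path), and re-tested inside `!$omp critical(e_sz_c_rk1)` so that, when several threads race past the first check, only one actually performs the reallocation. A sketch of that pattern under an assumed `threading.Lock`, with the `newsz`/`addsz`/1.25-growth policy taken from the diff (class and method names are illustrative):

```python
import threading


class EnsureSize:
    """Double-checked growth, as in the psb_ensure_size hunks: check the
    length lock-free, then re-check under the lock before reallocating."""

    def __init__(self):
        self._lock = threading.Lock()  # plays the role of !$omp critical(e_sz_c_rk1)
        self.v = []

    def ensure_size(self, length, newsz=None, addsz=None):
        if length <= len(self.v):      # fast path: no lock taken
            return
        with self._lock:
            isz = len(self.v)
            if length > isz:           # re-check under the lock
                if newsz is not None:
                    isz = max(length + 1, 1, newsz)
                elif addsz is not None:
                    isz = max(length, 1, isz + addsz)
                else:
                    isz = max(length, 1, int(1.25 * isz))
                self.v.extend([0.0] * (isz - len(self.v)))
```

Without the second check, two threads that both observed a too-small buffer would both reallocate, and one copy's work (or worse, its concurrent writes) could be lost.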
