When debugging or profiling Open MPI programs, Arm® Forge offers three Open MPI choices in the list of MPI implementations:

Open MPI – the job is launched with a custom launch agent that, in turn, launches the Arm daemons.

Open MPI (Compatibility) – mpirun launches the Arm daemons directly. This startup method does not take advantage of the Arm scalable tree.

Open MPI for Cray XT/XE/XK/XC – for Open MPI running on Cray XT/XE/XK/XC systems. This method is fully capable of using the Arm scalable tree infrastructure.
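You can also select the startup method from the command line with the --mpi option rather than from the GUI list. As a minimal sketch, assuming the first entry is registered under the name "OpenMPI" and using ./myprog as a placeholder for your own executable, a four-process debug session could be started with:

ddt --mpi="OpenMPI" -n 4 ./myprog

Substitute map for ddt to profile instead of debug, and "OpenMPI (Compatibility)" for the compatibility startup method; check the implementation names listed by your installation if these strings differ.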
To launch with aprun (instead of mpirun), enter one of these commands:
ddt --mpi="OpenMPI (Cray XT/XE/XK)" --mpiexec aprun [arguments]
map --mpi="OpenMPI (Cray XT/XE/XK)" --mpiexec aprun [arguments]
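For example, to debug a hypothetical executable ./myprog on 64 processes (both the executable name and the process count are placeholders for your own job parameters):

ddt --mpi="OpenMPI (Cray XT/XE/XK)" --mpiexec aprun -n 64 ./myprog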
Some versions of Open MPI do not work with Arm® Forge because of bugs in the Open MPI debug interface: builds compiled with the -O2 or -O3 optimization flags, and builds compiled with the PGI 19.x, 20.1, or NVIDIA HPC compilers. To resolve any of these issues, select Open MPI (Compatibility) for the MPI Implementation.
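To check whether your installation is affected, you can inspect how Open MPI was built. The ompi_info tool ships with Open MPI; the exact field names in its output vary between releases, so the following commands are only a sketch:

ompi_info | grep "C compiler"
ompi_info --all | grep -i cflags

The first command reports which compiler built your Open MPI, and the second reports the build flags (look for -O2 or -O3).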
To use Open MPI versions 3.0.0 to 3.0.4 (inclusive) and Open MPI versions 3.1.0 to 3.1.3 (inclusive) with the GNU compiler on IBM Power systems, you must configure the Open MPI build with CFLAGS=-fasynchronous-unwind-tables. This fixes a startup bug where Arm® Forge is unable to step out of MPI_Init into your main function.
The startup bug occurs because of missing debug information and optimization in the Open MPI library. If you already configure with -g, you do not need to add this extra flag.
An example configure command is:
./configure --prefix=/software/openmpi-3.1.2 CFLAGS=-fasynchronous-unwind-tables
If you do not have the option to recompile your MPI, an alternative workaround is to select Open MPI (Compatibility) for the MPI Implementation.
This issue is fixed in later versions.
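To confirm which Open MPI version you are running, and therefore whether this workaround applies, query the installation directly, for example:

mpirun --version
ompi_info --version

Versions newer than the ranges listed above already contain the fix and do not need the extra CFLAGS setting.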