Open MPI
When debugging or profiling Open MPI programs, Linaro Forge offers three Open MPI options in its list of MPI implementations:
- Open MPI
  The job is launched with a custom launch agent that, in turn, launches the Linaro Forge daemons.
- Open MPI (Compatibility)
  `mpirun` launches the Linaro Forge daemons directly. This startup method does not take advantage of the Linaro Forge scalable tree.
- Open MPI for Cray XT/XE/XK/XC
  For Open MPI running on Cray XT/XE/XK/XC systems. This method is fully capable of using the Linaro Forge scalable tree infrastructure.
To launch with `aprun` (instead of `mpirun`), enter one of these commands:

    ddt --mpi="OpenMPI (Cray XT/XE/XK)" --mpiexec aprun [arguments]
    map --mpi="OpenMPI (Cray XT/XE/XK)" --mpiexec aprun [arguments]
Known issues
Message queue debugging does not work with the UCX or Yalla PML, because UCX and Yalla do not store the required information.
The version of Open MPI packaged with Ubuntu has the Open MPI debug libraries stripped. This prevents the Message Queues feature of Linaro DDT from working.
On Infiniband systems, Open MPI and CUDA can conflict in a manner that results in failure to start processes, or a failure for processes to be debugged. To enable CUDA interoperability with Infiniband, set the CUDA environment variable to `1`.
The following versions of Open MPI do not work with Linaro Forge because of bugs in the Open MPI debug interface:

- Open MPI 4.1.0 when compiled with the `-O2` or `-O3` optimization flags using the NVIDIA HPC compilers.
To resolve any of these issues, select Open MPI (Compatibility) as the MPI Implementation.
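If you launch from the command line rather than from the GUI, the same workaround can be applied with the `--mpi` option. This is a sketch only: the exact implementation name string `"OpenMPI (Compatibility)"` is an assumption modeled on the Cray examples above, and `./myprogram` is a placeholder for your application; verify the name against the list shown in the Linaro Forge GUI on your system.

```shell
# Hedged sketch: select the Compatibility startup method on the command line.
# The implementation name string below is assumed, not confirmed; check it
# against the MPI implementation list in your Linaro Forge installation.
ddt --mpi="OpenMPI (Compatibility)" mpirun -n 4 ./myprogram
```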