
Building and Running: MPI

The MPI backend is the most generic communication backend supported by HAM-Offload. You can use it to target any hardware, heterogeneous or homogeneous, where an MPI job can be started. Rank 0 will typically be the process with the host-role, while all other ranks act as offload targets.

It is also the backend of choice for local testing and development. Just install an OpenMPI or MPICH package on your development machine and use mpirun to test your applications. It will start local processes that communicate through shared memory.
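For example, on a Debian-based system the packages could be installed as follows (the package names are an assumption and vary by distribution and MPI implementation):

# install Open MPI (Debian/Ubuntu package names; use the MPICH packages if preferred)
sudo apt-get install openmpi-bin libopenmpi-dev

# verify that mpirun is available
mpirun --version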

Building

CMake should detect an existing MPI implementation automatically using its find_package() function.

mkdir build
cd build
cmake ../ham
make -j

If more than one architecture is targeted within a heterogeneous MPI job, create multiple builds using different compilers or flags as needed; see Using CMake. A sketch of such a setup follows below.
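As a rough sketch, such a setup could use one build directory per architecture. The toolchain file name below is a placeholder; the actual compiler selection depends on your installation and is covered in Using CMake.

# host build (default compiler)
mkdir build.host
cd build.host
cmake ../ham
make -j
cd ..

# accelerator build, e.g. via a cross-compiling toolchain file (placeholder name)
mkdir build.mic
cd build.mic
cmake -DCMAKE_TOOLCHAIN_FILE=../toolchain-mic.cmake ../ham
make -j
cd ..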

Running

Simply use mpirun as for any other MPI job. Rank 0 is the host process; all other ranks execute the offloaded tasks. The HAM-Offload node IDs map directly to the MPI ranks.

Here are some generic examples, where mic0 would be a typical hostname for an Intel Xeon Phi (KNC) accelerator card. Different binaries are needed when the nodes of an MPI job feature different CPU architectures, e.g. the different phases of a supercomputer installation, or a system with both Intel Xeon and Xeon Phi (KNL) nodes.

# start two processes
mpirun -n 2 <application_binary>

# this is the syntax for starting jobs with different binaries
mpirun -host localhost -n 1 <application_binary_host> : -host mic0 -n 1 <application_binary_mic>

Alternatively, the process mapping could be specified in a hostfile.
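As a sketch, an Open MPI style hostfile for the host/accelerator scenario above could look like this (hostfile syntax differs between MPI implementations, so check the documentation of the one you use; myhosts is a placeholder file name):

# myhosts: one slot on the host, one on the accelerator card
localhost slots=1
mic0 slots=1

# pass the hostfile to mpirun (Open MPI syntax)
mpirun --hostfile myhosts <application_binary>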

The inner_product example can be run with two MPI processes as follows:

mpirun -n 2 build/inner_product_mpi

The inner_product example can be run with one host process and one Xeon Phi process like this (assuming separate CMake builds with the MPI backend exist for the host in build.host and for the MIC architecture in build.mic):

mpirun -n 1 -host localhost build.host/inner_product_mpi : -n 1 -host mic0 build.mic/inner_product_mpi