How to use MPI with solveCRA
The MPI prerequisite can be installed with a couple of commands on Ubuntu. The only choice to make is between OpenMPI and MPICH. Most of our testing is done with OpenMPI, but MPICH should also work.
For openmpi do:
$ sudo apt-get install libopenmpi-dev openmpi-bin libhdf5-openmpi-dev
For mpich do:
$ sudo apt-get install mpich libmpich-dev libhdf5-mpich-dev
Please make sure not to install MPICH and OpenMPI together: when both are installed, strange errors occur and the resulting build will not work. If you find both installed, remove both and reinstall only one.
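To check which implementation is currently active on your PATH, you can ask the launcher and wrappers directly:
$ mpirun --version   # reports "Open MPI" for OpenMPI, "HYDRA" for MPICH
$ which mpicc mpirun # shows where the active installation lives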
Alternatively, OpenMPI can be built from source. Start by downloading the latest version from its official page: http://www.open-mpi.org/software/ompi
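For example, with wget (the version number below is only an example; substitute the current release):
$ wget https://download.open-mpi.org/release/open-mpi/v4.1/openmpi-4.1.6.tar.gz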
Decompress the downloaded file (it will be called something like openmpi-x.x.x.tar.xxx, where x.x.x is the downloaded version):
tar -xvf openmpi-*
Configure the build; this step usually takes between 5 and 10 minutes.
The --prefix option must be given the installation directory you want to use for OpenMPI; a typical choice is “/home/$USER/.openmpi”:
$ ./configure --prefix="/home/$USER/.openmpi"
Now comes the hard work: installing it. For this we make use of the “make” tool. This is a good moment for a coffee, as it should take between 10 and 15 minutes:
$ make
$ sudo make install
All that is left to do is to add “installation_directory/bin” to the PATH environment variable and “installation_directory/lib/” to the library search path. If your system uses bash, use the export command:
export PATH="$PATH:/home/$USER/.openmpi/bin"
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/home/$USER/.openmpi/lib/"
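To make these settings persistent across sessions, the same two lines can be appended to your ~/.bashrc:
$ echo 'export PATH="$PATH:/home/$USER/.openmpi/bin"' >> ~/.bashrc
$ echo 'export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/home/$USER/.openmpi/lib/"' >> ~/.bashrc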
The compiled program can then be launched as follows, without any user-provided parameters:
$ mpirun -np 2 name_of_compiled_program
Here -np (or -n) followed by 2 launches 2 processes to run the program; a larger integer can be supplied if more processes are desired.
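To check that everything works end to end, a minimal MPI program can be compiled and launched the same way. The following sketch (the file name hello_mpi.C is just an example) prints one line per process:

#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);               // start the MPI runtime
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); // id of this process
    MPI_Comm_size(MPI_COMM_WORLD, &size); // total number of processes
    std::printf("Hello from process %d of %d\n", rank, size);
    MPI_Finalize();                       // shut the runtime down
    return 0;
}

Compile it with the MPI wrapper and run it:
$ mpic++ hello_mpi.C -o hello_mpi
$ mpirun -np 2 hello_mpi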
If you wish to use MPI's runtime affinity, an option can be placed just before the number-of-processes parameter. For example, the following launches the program with 2 processes, each of which is mapped to one core:
$ mpirun --map-by core -np 2 name_of_compiled_program
It is also possible to map one process to one node when launching the program over several nodes of a computing centre, for example:
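$ mpirun --map-by node -np 4 name_of_compiled_program
(this assumes the node list is provided by the cluster's resource manager or a hostfile)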
More detail can be found at the following link: https://www.open-mpi.org/doc/v2.0/man1/mpirun.1.php
Detailed example of launching an MPI-enabled program with different parameters and user-defined arguments
Consider launching benchmark-solve-cra inside the benchmarks directory. First move into the directory containing the program benchmark-solve-cra.C, then type the following command, which compiles the desired program with the provided Makefile:
$ make benchmark-solve-cra
Once the compilation finishes without error, we can launch the executable with default parameter values, for example with 2 processes, as follows:
$ mpirun -np 2 benchmark-solve-cra
To launch the program with personalized parameter values, for example to benchmark solving a system of size 400 x 400 filled with values of bitsize 100, launch the program as follows:
$ mpirun -np 2 benchmark-solve-cra -n 400 -b 100
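Note that mpirun forwards everything after the executable name to the program, which parses it like an ordinary command line after MPI_Init. The following is only an illustrative sketch of that pattern (the defaults and parsing code here are hypothetical, not benchmark-solve-cra's actual implementation):

#include <mpi.h>
#include <cstdio>
#include <cstdlib>
#include <cstring>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int n = 100, b = 10;  // illustrative defaults only
    for (int i = 1; i + 1 < argc; ++i) {
        if (std::strcmp(argv[i], "-n") == 0) n = std::atoi(argv[++i]); // system size
        else if (std::strcmp(argv[i], "-b") == 0) b = std::atoi(argv[++i]); // bitsize
    }
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) std::printf("benchmarking a %d x %d system, bitsize %d\n", n, n, b);
    MPI_Finalize();
    return 0;
}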
Suppose you wish to launch the program with more processes on a cluster and map each process to one core; for example, to benchmark solving a system of size 400 x 400 filled with values of bitsize 100 with 20 processes mapped to 20 cores, launch the program as follows:
$ mpirun --map-by core -np 20 benchmark-solve-cra -n 400 -b 100
Similarly, if you wish to launch the program with 20 processes and map each process to one node, for example to benchmark the same 400 x 400 system with bitsize 100 across 20 nodes, adapt the launch as follows:
$ mpirun --map-by node -np 20 benchmark-solve-cra -n 400 -b 100
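If the cluster is not managed by a batch scheduler, the participating nodes can be listed in a hostfile passed to mpirun. The hostnames below are placeholders, with one line per node:
$ cat hosts
node01 slots=1
node02 slots=1
(one line for each of the 20 nodes)
$ mpirun --hostfile hosts --map-by node -np 20 benchmark-solve-cra -n 400 -b 100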
More advanced examples can be found at the following link: https://www.open-mpi.org/doc/v2.0/man1/mpirun.1.php