Advanced compilation with MPI
To compile PETSc and use it for parallel runs of our MPM code, the following step-by-step commands are suggested. Note that PETSc is only needed for the semi-implicit and fully implicit solvers; the explicit solvers can be run without installing PETSc.
First, since the same version of OpenMPI must be used to compile both PETSc and MPM, uninstall the pre-installed libraries:
sudo apt remove libboost-all-dev
sudo apt remove libopenmpi-dev
If any OpenMPI files remain, remove them manually.
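Before re-installing, it can help to confirm that no stale OpenMPI installation is still on the PATH (package names may differ slightly between distributions):
dpkg -l | grep openmpi
which mpicc mpirun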
Then, re-install OpenMPI as follows:
- Download "openmpi-4.1.5.tar.gz" from the official site (https://www.open-mpi.org/software/ompi/v4.1/)
- Extract the downloaded archive.
tar -zxvf openmpi-4.1.5.tar.gz
- Build and install OpenMPI (installed to /usr/local/openmpi-4.1.5).
cd openmpi-4.1.5
./configure --prefix=/usr/local/openmpi-4.1.5 CC=gcc CXX=g++ FC=gfortran
make all
sudo make install
- Add the following lines to ~/.bashrc.
MPIROOT=/usr/local/openmpi-4.1.5
PATH=$MPIROOT/bin:$PATH
LD_LIBRARY_PATH=$MPIROOT/lib:$LD_LIBRARY_PATH
MANPATH=$MPIROOT/share/man:$MANPATH
export MPIROOT PATH LD_LIBRARY_PATH MANPATH
- Reload bash.
source ~/.bashrc
- Check installation.
mpicc -v
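As an additional sanity check (assuming the PATH update above has taken effect), the MPI launcher can be queried directly; it should report the freshly installed version, e.g. "mpirun (Open MPI) 4.1.5":
mpirun --version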
- Download PETSc using git.
git clone https://gitlab.com/petsc/petsc.git petsc
- Configure PETSc installation.
cd petsc
./configure --with-mpi-dir=/usr/local/openmpi-4.1.5 --with-debugging=0 COPTFLAGS='-O3 -march=native -mtune=native' CXXOPTFLAGS='-O3 -march=native -mtune=native' FOPTFLAGS='-O3 -march=native -mtune=native' --download-fblaslapack=1
- Build and check PETSc (replace /home/user/workspace/petsc with the directory into which PETSc was cloned).
make PETSC_DIR=/home/user/workspace/petsc PETSC_ARCH=arch-linux-c-opt all
make PETSC_DIR=/home/user/workspace/petsc PETSC_ARCH=arch-linux-c-opt check
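To make PETSc easy to locate in the later CMake steps, the two PETSc environment variables can also be added to ~/.bashrc (the path below assumes PETSc was cloned into /home/user/workspace/petsc, as in the make commands above; adjust it to your own location):
export PETSC_DIR=/home/user/workspace/petsc
export PETSC_ARCH=arch-linux-c-opt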
- Re-install Boost.
sudo apt install libboost-all-dev
- Download and build KaHIP (the graph partitioning library used for MPI domain decomposition).
git clone https://github.com/kahip/kahip && cd kahip
sh ./compile_withcmake.sh
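If the build succeeds, compile_withcmake.sh typically collects the KaHIP libraries and headers in a deploy/ directory inside the source tree; it is worth confirming this before pointing CMake at it, and adjusting -DKAHIP_ROOT in the commands below if KaHIP was cloned somewhere other than ~/workspace/KaHIP:
ls deploy/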
The Geomechanics MPM code can be compiled with MPI to distribute the workload across compute nodes in a cluster.
Additional steps to load OpenMPI on Fedora:
source /etc/profile.d/modules.sh
export MODULEPATH=$MODULEPATH:/usr/share/modulefiles
module load mpi/openmpi-x86_64
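To verify that the module was loaded and the MPI C++ wrapper is on the PATH (module and wrapper names may vary between Fedora releases), the following can be checked:
module list
which mpicxx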
Compile with OpenMPI (with halo exchange):
mkdir build && cd build
export CXX_COMPILER=mpicxx
cmake -DCMAKE_BUILD_TYPE=Release -DKAHIP_ROOT=~/workspace/KaHIP/ -DHALO_EXCHANGE=On ..
make -jN
Compile with OpenMPI (with halo exchange and PETSc):
mkdir build && cd build
export PETSC_ARCH=arch-linux-c-opt
export PETSC_DIR=/home/user/workspace/petsc
export CXX_COMPILER=mpicxx
cmake -DCMAKE_BUILD_TYPE=Release -DKAHIP_ROOT=~/workspace/KaHIP/ -DHALO_EXCHANGE=On -DUSE_PETSC=On ..
make -jN
To enable halo exchange, set -DHALO_EXCHANGE=On in CMake. Halo exchange is a more efficient MPI communication scheme; however, it is only beneficial for a larger number of MPI tasks (e.g., > 4).