In January 2023, Oscar will be migrating to Slurm version 22.05.7.
{% hint style="info" %} Slurm version 22.05.7
- improves security and speed,
- supports both PMI2 and PMIx,
- provides REST APIs, and
- allows users to prioritize their jobs via scontrol top <job_id> {% endhint %}
While most applications will be unaffected by these changes, applications built to use MPI may need to be rebuilt to work properly. To facilitate this, we are providing users who run MPI-based applications (either through Oscar's module system or built by users themselves) with advance access to a test cluster running the new version of Slurm. Instructions for accessing the test cluster, building MPI-based applications, and submitting MPI jobs under the new Slurm are provided below.
Please note: some existing modules of MPI-based applications will be deprecated and removed from the system as part of this upgrade. A list of modules that will no longer be available to users after the upgrade is given at the bottom of the page.
- Request access to the Slurm 22.05.7 test cluster (email [email protected])
- Connect to Oscar via either SSH or Open OnDemand (instructions below)
- Build your application using the new MPI modules listed below
- Submit your job
{% hint style="danger" %} Users must contact [email protected] to obtain access to the test cluster in order to submit jobs using Slurm 22.05.7. {% endhint %}
- Connect to Oscar using the ssh command in a terminal window
- From Oscar's command line, connect to the test cluster with the command ssh node1947
- From the node1947 command line, submit your jobs (either interactive or batch) as follows:
{% tabs %} {% tab title="Interactive job" %}
- For CPU-only jobs:
interact -q image-test
- For GPU jobs:
interact -q gpu
{% endtab %}
{% tab title="Batch job" %}
Include the following line in your batch script, then submit it with the sbatch command as usual (a minimal example script is sketched after these tabs):
- For CPU-only jobs:
#SBATCH -p image-test
- For GPU jobs:
#SBATCH -p gpu
{% endtab %} {% endtabs %}
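For reference, here is a minimal batch-script sketch for submitting an MPI job from node1947. The partition line matches the CPU case above; the application name (my_mpi_app), node/task counts, and walltime are placeholders to adjust for your own job, and the --mpi=pmix launcher flag assumes your application was built against a PMIx-aware MPI.

```bash
#!/bin/bash
#SBATCH -p image-test          # test-cluster CPU partition (use "gpu" for GPU jobs)
#SBATCH -N 2                   # number of nodes (placeholder)
#SBATCH -n 8                   # total number of MPI tasks (placeholder)
#SBATCH -t 00:30:00            # walltime (placeholder)

# Load one of the Slurm 22-compatible MPI modules listed further down this page
module load mpi/openmpi_4.0.7_gcc_10.2_slurm22

# Launch the MPI application (binary name is a placeholder)
srun --mpi=pmix ./my_mpi_app
```

Submit the script from the node1947 command line with the sbatch command.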
- Open a web browser and connect to poodcit2.services.brown.edu
- Log in with your Oscar username and password
- Start a session using the Advanced Desktop App
- Select the gpu partition and click the Launch button
{% hint style="info" %}
- Only the Advanced Desktop App will connect to the test cluster
- The Advanced Desktop App must connect to the gpu partition {% endhint %}
{% hint style="info" %} If the "Current Module Version" for an application is blank, a new version is built for the application. {% endhint %}
Application | Current Module Version | Migrated or New Module Version |
---|---|---|
abaqus | | |
ambertools | | |
boost | | |
CharMM | | |
cp2k | | |
dedalus | | |
esmf | | |
fftw | | |
global_arrays | | |
gpaw | | |
gromacs | | |
hdf5 | | |
ior | | |
lammps | | |
meme | | |
Molpro | | |
mpi | | |
mpi4py | | |
netcdf | | |
netcdf4-python | | |
osu-mpi | | |
petsc | | |
pnetcdf | | |
qmcpack | | |
quantumespresso | | |
vasp | | |
wrf | | |
We recommend using the following MPI modules to build your custom applications:
MPI | Oscar Module |
---|---|
GCC based OpenMPI | mpi/openmpi_4.0.7_gcc_10.2_slurm22 |
Intel based OpenMPI | mpi/openmpi_4.0.7_intel_2020.2_slurm22 |
MVAPICH | mpi/mvapich2-2.3.5_gcc_10.2_slurm22 |
Mellanox HPC-X | mpi/hpcx_2.7.0_gcc_10.2_slurm22 |
{% tabs %}
{% tab title="GNU Configure Example" %}
module load mpi/openmpi_4.0.7_gcc_10.2_slurm22
module load gcc/10.2 cuda/11.7.1
CC=mpicc CXX=mpicxx ./configure --prefix=/path/to/install/dir
{% endtab %}
{% tab title="CMAKE Configure Example" %}
module load mpi/openmpi_4.0.7_gcc_10.2_slurm22
module load gcc/10.2 cuda/11.7.1
cmake -DCMAKE_C_COMPILER=mpicc -DCMAKE_CXX_COMPILER=mpicxx ..
{% endtab %}
{% endtabs %}
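As a quick sanity check after rebuilding, the sketch below compiles a small MPI test program with the GCC-based OpenMPI module and launches it on the test cluster's CPU partition. The source file name (hello_mpi.c), the resource requests, and the --mpi=pmix launcher flag are assumptions for illustration rather than part of Oscar's documented workflow.

```bash
# Load the same Slurm 22-compatible toolchain used to build your application
module load mpi/openmpi_4.0.7_gcc_10.2_slurm22
module load gcc/10.2

# Compile a small MPI test program (hello_mpi.c is a placeholder source file)
mpicc -o hello_mpi hello_mpi.c

# Run it from node1947 on the test cluster's CPU partition
srun -p image-test -N 2 -n 8 --mpi=pmix ./hello_mpi
```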
{% hint style="info" %} A new module might be available for a deprecated application module. Please search the table above to check if a new module is available for an application. {% endhint %}
Application | Deprecated Module |
---|---|
abaqus | |
abinit | |
abyss | |
ambertools | |
bagel | |
boost | |
cabana | |
campari | |
cesm | |
cp2k | |
dacapo | |
dalton | |
dice | |
esmf | |
fenics | |
ffte | |
fftw | |
gerris | |
global_arrays | |
gpaw | |
gromacs | |
hande | |
hdf5 | |
hnn | |
hoomd | |
horovod | |
ior | |
lammps | |
medea | |
meme | |
meshlab | |
Molpro | |
mpi4py | |
multinest | |
n2p2 | |
namd | |
netcdf | |
nwchem | |
openfoam | |
openmpi | |
OpenMPI with Intel compilers | |
orca | |
osu-mpi | |
paraview | |
paris | |
petsc | |
phyldog | |
plumed | |
pmclib | |
polychord | |
polyrate | |
potfit | |
prophet | |
pstokes | |
pymultinest | |
qchem | |
qmcpack | |
quantumespresso | |
relion | |
rotd | |
scalasca | |
scorep | |
siesta | |
sprng | |
su2 | |
trilinos | |
vtk | |
wrf | |