Perlmutter (NERSC)
^^^^^^^^^^^^^^^^^^

Recall that the GNU Make system is best suited for use on large computing facility machines and for production runs. With the GNU Make implementation, the build system inspects the machine and, where possible, applies known compiler optimizations specific to that machine. These settings are kept up to date by the AMReX project.

For Perlmutter at NERSC, initialize your environment by sourcing the `saul-env.sh` script in the `Build` directory. For example, from the root of the REMORA directory:

::

    source Build/saul-env.sh

Then follow the general instructions for building REMORA using GNU Make.

.. note::

    When building, GNU Make may complain that it cannot find NetCDF; this warning can safely be ignored.


##### Building for and running on GPU nodes

Then build REMORA, for example, as follows (specify your own path to the AMReX submodule in `REMORA/Submodules/AMReX`):

::

    make -j 4 COMP=gnu USE_MPI=TRUE USE_OMP=FALSE USE_CUDA=TRUE AMREX_HOME=/global/u2/d/dwillcox/dev-remora.REMORA/Submodules/AMReX

Finally, you can prepare your SLURM job script, using the following as a guide:

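The sketch below is modeled on the CPU job script later in this section; it assumes an account `mXXXX`, one MPI rank per GPU, and an executable named `REMORA3d.gnu.x86-milan.MPI.CUDA.ex`. The exact executable name depends on your build settings, so substitute the name produced by your build:

.. code:: shell

    #!/bin/bash
    #SBATCH -A mXXXX
    #SBATCH -C gpu
    #SBATCH -q regular
    ## the job will be named "REMORA" in the queue and will save stdout to remora_[job ID].out
    #SBATCH -J REMORA
    #SBATCH -o remora_%j.out
    ## set the max walltime (in minutes)
    #SBATCH -t 10
    ## specify the number of nodes you want
    #SBATCH -N 2
    ## Perlmutter GPU nodes have 4 GPUs each; here we use one MPI rank per GPU
    #SBATCH --ntasks-per-node=4
    #SBATCH --gpus-per-node=4
    # the -n argument is (--ntasks-per-node) * (-N) = (number of MPI ranks per node) * (number of nodes)
    # the executable name below is a placeholder; use the name produced by your build
    srun -n 8 ./REMORA3d.gnu.x86-milan.MPI.CUDA.ex inputs > test.out
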
To submit your job script, run `sbatch [your job script]`, and check its status with `squeue -u [your username]`.
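
For example (the script name `remora_gpu.slurm` here is just a placeholder for whatever you named your job script):

::

    sbatch remora_gpu.slurm
    squeue -u $USER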

##### Building for and running on CPU nodes

Then build REMORA, for example, as follows (specify your own path to the AMReX submodule in `REMORA/Submodules/AMReX`):

::

    make -j 4 COMP=gnu USE_MPI=TRUE USE_OMP=TRUE USE_CUDA=FALSE AMREX_HOME=/global/u2/d/dwillcox/dev-remora.REMORA/Submodules/AMReX

Finally, you can prepare your SLURM job script, using the following as a guide:

.. code:: shell

    #!/bin/bash
    #SBATCH -A mXXXX
    #SBATCH -C cpu
    #SBATCH -q regular
    ## the job will be named "REMORA" in the queue and will save stdout to remora_[job ID].out
    #SBATCH -J REMORA
    #SBATCH -o remora_%j.out
    ## set the max walltime (in minutes)
    #SBATCH -t 10
    ## specify the number of nodes you want
    #SBATCH -N 2
    ## we use 4 ranks per node here as an example; this may not be optimal for performance
    #SBATCH --ntasks-per-node=4
    ## This configuration assigns one OpenMP thread per physical CPU core.
    ## For this type of thread assignment, we want 128 total threads per node, so we need
    ## (OMP_NUM_THREADS * ntasks-per-node) = 128
    export OMP_PROC_BIND=spread
    export OMP_PLACES=threads
    export OMP_NUM_THREADS=32
    # the -n argument is (--ntasks-per-node) * (-N) = (number of MPI ranks per node) * (number of nodes)
    srun -n 8 ./REMORA3d.gnu.x86-milan.MPI.OMP.ex inputs > test.out

To submit your job script, run `sbatch [your job script]`, and check its status with `squeue -u [your username]`.
