diff --git a/Makefile b/Makefile
index 0c32f8c1..08767ec8 100644
--- a/Makefile
+++ b/Makefile
@@ -10,7 +10,7 @@ FLAVOUR=GNU
# MARENOSTRUM (Intel): fabric intel mkl impi hdf5 fftw szip
# SUPERMUC (Intel): fftw hdf5
# DISCOVERER:
-# GNU: hdf5/1/1.14/latest-gcc-openmpi fftw/3/latest-gcc-openmpi lapack
+# GNU: hdf5/1/1.14/1.14.0-gcc-openmpi fftw/3/latest-gcc-openmpi lapack
# Intel: hdf5/1/1.14/latest-intel-openmpi fftw/3/latest-gcc-openmpi mkl
#=======================================================================
diff --git a/docs/assets/RBC.mp4 b/docs/assets/RBC.mp4
new file mode 100644
index 00000000..e6b0ce99
Binary files /dev/null and b/docs/assets/RBC.mp4 differ
diff --git a/docs/assets/RBC_Pr10_square.jpg b/docs/assets/RBC_Pr10_square.jpg
new file mode 100644
index 00000000..98f3be98
Binary files /dev/null and b/docs/assets/RBC_Pr10_square.jpg differ
diff --git a/docs/examples/rbc.md b/docs/examples/rbc.md
index e00ddd92..806a0e53 100644
--- a/docs/examples/rbc.md
+++ b/docs/examples/rbc.md
@@ -1 +1,64 @@
-# Rayleigh-Bénard Convection
\ No newline at end of file
+# Rayleigh-Bénard Convection
+
+One of the simplest setups we can consider is the Rayleigh-Bénard problem, where fluid is contained between two stationary parallel plates, separated by a distance $H$, that are held at fixed temperatures.
+If the lower plate is maintained at a higher temperature than the upper plate, then density differences due to temperature can drive convection in the domain.
+Stronger thermal driving, characterised by a larger Rayleigh number, drives a stronger flow.
+
+In MuRPhFi, we can take advantage of the multiple-resolution grid to efficiently simulate this problem at high $Pr$.
+Since `temp` on the coarse grid and `sal` on the refined grid evolve according to the same equations (up to a different $Pr$), we can treat `sal` as $-\theta/\Delta$, where $\theta$ is the temperature of the system and $\Delta$ is the temperature difference between the plates.
+We set `activeT=0` to remove the effect of `temp` on the buoyancy and run the simulation.
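+
+In post-processing, the dimensionless temperature is then recovered simply by flipping the sign of the stored `sal` field. Below is a minimal sketch of this step; the file and dataset names are placeholders rather than the exact MuRPhFi output layout:
+
+```python
+import h5py
+
+# Placeholder file/dataset names, for illustration only
+with h5py.File("snapshot.h5", "r") as f:
+    sal = f["sal"][:]      # refined-grid scalar used as the temperature proxy
+
+theta_hat = -sal           # dimensionless temperature theta/Delta
+print(theta_hat.min(), theta_hat.max())
+```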
+
+## 2D Visualization
+
+Here, we show the results for $Ra=10^8$, $Pr=10$.
+First, we use the script `parallel_movie.py` to produce a visualisation of a vertical slice of the "temperature" field `sal`:
+
+<video controls width="100%">
+  <source src="../assets/RBC.mp4" type="video/mp4">
+</video>
+
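+A single frame of such a slice can also be plotted directly with matplotlib. The sketch below uses a synthetic array in place of real simulation data to illustrate the kind of plot that `parallel_movie.py` assembles into the movie:
+
+```python
+import numpy as np
+import matplotlib.pyplot as plt
+
+# Synthetic stand-in for a vertical slice of `sal`; in practice this array
+# would be read from the saved slice data used by parallel_movie.py
+x = np.linspace(0.0, 2.0, 256)
+z = np.linspace(0.0, 1.0, 128)
+X, Z = np.meshgrid(x, z)
+sal_slice = -(1.0 - Z) + 0.1 * np.sin(4 * np.pi * X) * np.sin(np.pi * Z)
+
+fig, ax = plt.subplots(figsize=(8, 4))
+# Plot -sal so that the colour map corresponds to the dimensionless temperature
+pcm = ax.pcolormesh(X, Z, -sal_slice, cmap="RdBu_r", shading="auto")
+fig.colorbar(pcm, ax=ax, label=r"$\theta/\Delta$")
+ax.set_xlabel("$x$")
+ax.set_ylabel("$z$")
+fig.savefig("slice.png", dpi=150)
+```
+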
+## Statistics
+
+We can then dive into the statistics output in `means.h5` to calculate the Nusselt number $Nu$ (the dimensionless heat flux due to the convection) and the Reynolds number $Re$.
+The following are produced from the script `plot_time_series.py`.
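+
+Before plotting, it can be helpful to list the groups and datasets stored in `means.h5`. A minimal sketch (the file path is assumed to be relative to the run directory):
+
+```python
+import h5py
+
+# Print every group and dataset path stored in the statistics file
+with h5py.File("means.h5", "r") as f:
+    f.visit(print)
+```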
+
+### Nusselt number
+
+In Rayleigh-Bénard convection, there are a number of ways to calculate the Nusselt number, all of which should be equivalent (in a time-averaged sense) in a well-resolved, statistically steady state.
+Firstly, we can measure the conductive heat flux at the lower and upper plates (subscripts $\mathrm{pl}$ and $\mathrm{pu}$ below):
+
+$$
+{Nu}_\mathrm{pl} = -\frac{H}{\Delta} \left.\frac{\partial \theta}{\partial z} \right|_{z=0}, \qquad Nu_\mathrm{pu} = -\frac{H}{\Delta} \left.\frac{\partial \theta}{\partial z} \right|_{z=H}
+$$
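+
+As a quick sanity check of this definition, the sketch below evaluates it for a purely conductive profile $\theta(z) = \Delta(1 - z/H)$, for which the result should be exactly one; in a real run the profile would instead be the horizontally averaged temperature from `means.h5`:
+
+```python
+import numpy as np
+
+H, Delta = 1.0, 1.0
+z = np.linspace(0.0, H, 129)
+theta = Delta * (1.0 - z / H)        # conductive (no-flow) temperature profile
+
+# One-sided difference for the wall gradient at the lower plate, z = 0
+dtheta_dz = (theta[1] - theta[0]) / (z[1] - z[0])
+Nu_pl = -H / Delta * dtheta_dz       # minus sign: theta decreases upwards
+print(Nu_pl)                         # -> 1.0
+```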
+
+The volume average of the heat flux will give us another definition, computed as the sum of conductive and convective contributions:
+
+$$
+Nu_\mathrm{vol} = 1 + \frac{\langle w'\theta'\rangle}{\kappa \Delta/H}
+$$
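+
+A minimal helper for this definition might look as follows; the volume-averaged flux $\langle w'\theta'\rangle$ would be taken from the statistics output, and the numbers below are placeholders rather than simulation results:
+
+```python
+def nusselt_volume(w_theta, kappa, Delta, H):
+    """Nu_vol = 1 + <w'theta'> / (kappa * Delta / H)."""
+    return 1.0 + w_theta / (kappa * Delta / H)
+
+# Placeholder values, purely to illustrate the call
+print(nusselt_volume(w_theta=2.0e-3, kappa=1.0e-4, Delta=1.0, H=1.0))
+```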
+
+Substituting this into the equations for the kinetic energy $\langle|\boldsymbol{u}|^2\rangle$ and temperature variance $\langle\theta^2\rangle$ provides two further definitions, linked to the volume-averaged dissipation rates:
+
+$$
+Nu_\varepsilon = 1 + Pr \, \langle \partial_i u_j \partial_i u_j \rangle / (U_f/H)^2, \qquad
+Nu_\chi = \langle |\nabla \theta|^2 \rangle / (\Delta/H)^2 .
+$$
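+
+The corresponding helpers for the dissipation-based definitions are sketched below; again, the volume-averaged gradient terms would come from the simulation statistics, and the inputs shown are placeholders:
+
+```python
+def nusselt_eps(grad_u_sq, Pr, U_f, H):
+    """Nu_eps = 1 + Pr <du_i/dx_j du_i/dx_j> / (U_f/H)^2."""
+    return 1.0 + Pr * grad_u_sq / (U_f / H) ** 2
+
+def nusselt_chi(grad_theta_sq, Delta, H):
+    """Nu_chi = <|grad(theta)|^2> / (Delta/H)^2."""
+    return grad_theta_sq / (Delta / H) ** 2
+
+# Placeholder inputs, purely to illustrate the calls
+print(nusselt_eps(grad_u_sq=1.5, Pr=10.0, U_f=1.0, H=1.0))
+print(nusselt_chi(grad_theta_sq=30.0, Delta=1.0, H=1.0))
+```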
+
+![Nusselt numbers](../figures/Nusselt.svg){ width="100%" }
+
+### Reynolds number
+
+Since there is no imposed flow and no volume-averaged mean flow in Rayleigh-Bénard convection, we use the volume-averaged kinetic energy $\mathcal{K}$ to compute a Reynolds number.
+With this approach, we can also separately compute Reynolds numbers based on the vertical kinetic energy $\langle w^2\rangle$ and on the horizontal kinetic energy $\langle u^2 + v^2\rangle$.
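+
+For reference, one conventional choice is shown below; the exact prefactor used in `plot_time_series.py` may differ:
+
+$$
+Re = \frac{\sqrt{2\mathcal{K}}\, H}{\nu} = \frac{\sqrt{\langle |\boldsymbol{u}|^2 \rangle}\, H}{\nu},
+$$
+
+with $\sqrt{\langle w^2\rangle}$ or $\sqrt{\langle u^2 + v^2\rangle}$ replacing the velocity scale for the vertical and horizontal variants.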
+
+As in the Nusselt number plot above, we appear to converge to a statistically steady state for $t \geq 75 H/U_f$ after the initial transient behaviour.
+
+![Reynolds number](../figures/Reynolds.svg){ width="100%" }
+
+## 3D Visualization
+
+Finally, we can use the saved 3D snapshots to make a volume rendering of the temperature field.
+The snapshot below highlights the narrow plume structures that drive the large-scale circulation at high $Pr$.
+The image was produced using ParaView.
+
+![Volume rendering](../assets/RBC_Pr10_square.jpg){ width="100%" }
\ No newline at end of file
diff --git a/docs/figures/Nusselt.svg b/docs/figures/Nusselt.svg
new file mode 100644
index 00000000..e75ab039
--- /dev/null
+++ b/docs/figures/Nusselt.svg
@@ -0,0 +1,2485 @@
+
+
+
diff --git a/docs/figures/Reynolds.svg b/docs/figures/Reynolds.svg
new file mode 100644
index 00000000..d2f116b7
--- /dev/null
+++ b/docs/figures/Reynolds.svg
@@ -0,0 +1,1742 @@
+
+
+
diff --git a/submit_scripts/submit_discoverer.sh b/submit_scripts/submit_discoverer.sh
new file mode 100644
index 00000000..79f4e47c
--- /dev/null
+++ b/submit_scripts/submit_discoverer.sh
@@ -0,0 +1,17 @@
+#!/bin/bash
+#SBATCH --job-name="RBC_Ra1e8"
+#SBATCH --partition=cn
+# Ask for 8 nodes of 128 cores each
+#SBATCH --ntasks-per-node=128
+#SBATCH --nodes=8
+# Set time limit to 12 hours (hh:mm:ss)
+#SBATCH --time=12:00:00
+# Ask for exclusive use of the nodes you are allocated
+#SBATCH --exclusive
+
+module purge
+module load hdf5/1/1.14/1.14.0-gcc-openmpi
+module load lapack/latest-gcc
+module load fftw/3/latest-gcc-openmpi
+
+srun ~/AFiD-MuRPhFi/afid 32 32
diff --git a/submit_scripts/submit_snellius.sh b/submit_scripts/submit_snellius.sh
new file mode 100755
index 00000000..1cbb7a1d
--- /dev/null
+++ b/submit_scripts/submit_snellius.sh
@@ -0,0 +1,23 @@
+#!/bin/bash
+# Set the simulation to run on 2 nodes from the rome partition
+#SBATCH --nodes=2
+#SBATCH --ntasks-per-node=64
+#SBATCH --partition=rome
+# Set a time limit of one hour for the job
+# MAKE SURE YOUR TIME LIMIT IN bou.in IS SMALLER THAN THIS!
+#SBATCH --time=01:00:00
+# Give the job a useful name
+#SBATCH --job-name=Ra1e8_RBC
+# Email yourself with updates
+#SBATCH --mail-user=user@myemail.nl
+#SBATCH --mail-type=ALL
+
+# Load modules for MPI and other parallel libraries
+module purge
+module load 2022
+module load foss/2022a
+module load HDF5/1.12.2-gompi-2022a
+
+# Finally, call afid, specifying a processor mesh of 16x8
+# (this assumes afid can be found in your PATH)
+srun afid 16 8
\ No newline at end of file
diff --git a/submit_scripts/submit_supermuc.sh b/submit_scripts/submit_supermuc.sh
new file mode 100755
index 00000000..df04ea48
--- /dev/null
+++ b/submit_scripts/submit_supermuc.sh
@@ -0,0 +1,29 @@
+#!/bin/bash
+# Job Name and Files (also --job-name)
+#SBATCH -J RBC_Ra1e8
+#Output and error (also --output, --error):
+#SBATCH -o ./%x.%j.out
+#SBATCH -e ./%x.%j.err
+#Initial working directory (also --chdir):
+#SBATCH -D ./
+#Notification and type
+#SBATCH --mail-type=END
+#SBATCH --mail-user=user@myemail.nl
+# Wall clock limit:
+#SBATCH --time=01:00:00
+#SBATCH --no-requeue
+#Setup of execution environment
+#SBATCH --export=NONE
+#SBATCH --get-user-env
+#SBATCH --account=myaccount
+#SBATCH --partition=micro
+#Number of nodes and MPI tasks per node:
+#SBATCH --nodes=3
+#SBATCH --ntasks=144
+#SBATCH --ear=off
+
+# Important: slurm_setup must be loaded in every job script on SuperMUC-NG
+module load slurm_setup
+module load hdf5 fftw
+
+mpiexec -n $SLURM_NTASKS afid 12 12