The image contains SIRF & all dependencies required by JupyterHub.
- Install the latest docker version
- (optional) For GPU support (NVIDIA CUDA on Linux or Windows Subsystem for Linux 2 only), install the NVIDIA Container Toolkit so that Docker can use the `--gpus` flag
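For example, on an Ubuntu host with the NVIDIA apt repository already configured (see NVIDIA's Container Toolkit install guide), this is roughly:

```sh
# Standard NVIDIA Container Toolkit setup (not SIRF-specific);
# see NVIDIA's install guide for the repository configuration step
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```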
```sh
# CPU version
docker run --rm -it -p 9999:8888 ghcr.io/synerbi/sirf:latest
# GPU version
docker run --rm -it -p 9999:8888 --gpus all ghcr.io/synerbi/sirf:latest-gpu
```
Tip
| docker tag | git branch/tag |
|---|---|
| `latest`, `latest-gpu` | latest tag `v*.*.*` |
| `M`, `M.m`, `M.m.p`, `M-gpu`, `M.m-gpu`, `M.m.p-gpu` | tag `vM.m.p` |
| `edge`, `edge-gpu` | `master` |

See ghcr.io/synerbi/sirf for a full list of tags.
The `docker.yml` workflow builds & pushes all the docker tags above. Additionally, `core` & `core-gpu` intermediate (cache) docker tags are built & pushed by the workflow, but are not intended for users. The workflow will also build & test all PRs (without pushing any new image tags).
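For example, to try the development build rather than the latest release (tag names as in the table above):

```sh
docker pull ghcr.io/synerbi/sirf:edge      # CPU, built from master
docker pull ghcr.io/synerbi/sirf:edge-gpu  # GPU, built from master
```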
The Jupyter notebook should be accessible at http://localhost:9999.
Warning
To sync the container user & host user permissions (useful when sharing folders), use `--user` and `--group-add`.
```sh
docker run --rm -it -p 9999:8888 --user $(id -u) --group-add users \
  -v ./devel:/home/jovyan/work \
  ghcr.io/synerbi/sirf:latest
```
More config: https://jupyter-docker-stacks.readthedocs.io/en/latest/using/common.html#user-related-configurations.
Tip
To pass arguments to `SIRF-Exercises/scripts/download_data.sh`, use the docker environment variable `SIRF_DOWNLOAD_DATA_ARGS`.
```sh
docker run --rm -it -p 9999:8888 --user $(id -u) --group-add users \
  -v /mnt/data:/share -e SIRF_DOWNLOAD_DATA_ARGS="-pm -D /share" \
  ghcr.io/synerbi/sirf:latest
```
You can build custom images on top of the SIRF ones, likely needing to switch between `root` and the default user to install packages:
```dockerfile
# CPU version
# FROM synerbi/sirf:latest
# GPU version
FROM synerbi/sirf:latest-gpu
USER root
RUN mamba install pytorch && fix-permissions "${CONDA_DIR}" /home/${NB_USER}
USER ${NB_UID}
```
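You can then build & run your customised image in the usual way (the image name `my-sirf` is just an example):

```sh
docker build -t my-sirf .
docker run --rm -it -p 9999:8888 my-sirf
```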
To build and/or run with advanced config, it's recommended to use Docker Compose.
We use an Ubuntu 22.04 base image (optionally with CUDA GPU support for CIL GPU features), build the https://github.com/jupyter/docker-stacks `datascience-notebook` image on top, and then install SIRF & its dependencies.
The strategy is:
- Use either `ubuntu:latest` or a recent Ubuntu CuDNN runtime image from https://hub.docker.com/r/nvidia/cuda as base
- Build https://github.com/jupyter/docker-stacks/tree/main/images/datascience-notebook on top
- Copy & run the SIRF `docker/build_*.sh` scripts
- Clone the SIRF-SuperBuild & run `cmake` (sketched below)
- Copy some example notebooks & startup scripts

All of this is done by `compose.sh`.
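For reference, the SuperBuild step boils down to a standard CMake configure & build; the actual options and install prefix are set by the Dockerfile and build scripts, so this is only an illustrative sketch:

```sh
# Illustrative only -- compose.sh / the Dockerfile handle this for you,
# with the real CMake options defined there
git clone https://github.com/SyneRBI/SIRF-SuperBuild
cmake -S SIRF-SuperBuild -B build
cmake --build build
```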
- Clone this repository and run the `docker/compose.sh` script:

```sh
git clone https://github.com/SyneRBI/SIRF-SuperBuild
./SIRF-SuperBuild/docker/compose.sh -h  # prints help
```
Tip
For example, to `-b`uild the `-d`evelopment (`master`) branches of SIRF and its dependencies, including `-g`pu support and skipping tests:

```sh
compose.sh -bdg -- --build-arg RUN_CTEST=0
```
Then to `-r`un the container:

```sh
compose.sh -rdg
```
Tip
```sh
compose.sh -h  # prints help
```
CMake build arguments (e.g. for dependency version config) are (in increasing order of precedence) found in:
- `../version_config.cmake`
- `../Dockerfile`
- `docker-compose.*.yml` files
- `compose.sh -- --build-arg` arguments

Useful `--build-arg`s:
You can choose which version of the SIRF-SuperBuild is built in the docker image:

```sh
compose.sh -b -- --build-arg SIRF_SB_TAG=<git ref>
```
By default, the CTests are run while building the docker image. Note that this takes a few minutes. You can switch this off by passing `--build-arg RUN_CTEST=0` when building the image.
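For example, to pin the SuperBuild ref and skip the tests in a single build command (the values here are just illustrative):

```sh
compose.sh -b -- --build-arg SIRF_SB_TAG=master --build-arg RUN_CTEST=0
```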
`ccache` is used in the container to speed up rebuilding images from scratch. The cache is pulled from the host machine via the `devel/.ccache` folder. Building (`compose.sh -b`) automatically updates the cache. To disable updating the cache, `-b`uild with `-U`. To regenerate the cache, remove it and then `-b`uild with `-R`. This way, the cache will be used when you update SIRF in the container, or when you build another container.

Note that this cache is different from the "normal" `ccache` of your host. (If you are only doing SIRF development, you could decide to copy that to `SIRF-SuperBuild/docker/devel/.ccache`, but we will leave that up to you.)
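For example (assuming the single-letter flags combine as in the examples above, and running from the `docker/` directory):

```sh
compose.sh -bU                          # build, but don't update the host-side cache
rm -rf devel/.ccache && compose.sh -bR  # remove the cache, then rebuild and regenerate it
```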
https://github.com/jupyter/docker-stacks is used to gradually build up images:
```
BASE_CONTAINER=nvidia/cuda:11.7.1-cudnn8-devel-ubuntu22.04
docker-stacks-foundation -> synerbi/jupyter:foundation
base-notebook            -> synerbi/jupyter:base
minimal-notebook         -> synerbi/jupyter:minimal
scipy-notebook           -> synerbi/jupyter:scipy
datascience-notebook     -> synerbi/jupyter:datascience
Dockerfile               -> synerbi/jupyter:sirf
```
The final `Dockerfile` step (producing `synerbi/jupyter:sirf`) does the following:

- Copy & run the SIRF `build_{gadgetron,system}.sh` scripts
- Copy the `/opt/SIRF-SuperBuild/{INSTALL,sources/SIRF}` directories from the `synerbi/sirf:latest` image
- Install `requirements.yml`
- Clone & setup https://github.com/SyneRBI/SIRF-Exercises & https://github.com/TomographicImaging/CIL-Demos
- Set some environment variables (e.g. `PYTHONPATH=/opt/SIRF-SuperBuild/INSTALL/python`, `OMP_NUM_THREADS=$(( cpu_count/2 ))`)
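As a quick sanity check from a terminal inside the container (the `sirf` Python package lives under the `PYTHONPATH` above):

```sh
python -c "import sirf; print(sirf.__file__)"  # should resolve under /opt/SIRF-SuperBuild/INSTALL/python
echo $OMP_NUM_THREADS
```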
Note
`synerbi/jupyter:*` are only intermediate (cache) images, not intended for users.
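If you have built locally, you can list them with, e.g.:

```sh
docker images synerbi/jupyter
```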