This repository is an example of an algorithm container with baseline models that can be submitted to Track 1 of the Shifts Challenge for the task of segmenting white matter Multiple Sclerosis (MS) lesions on 3D FLAIR scans. The purpose of an algorithm container is to wrap your models together with their inference code so that they can be evaluated automatically on unseen inputs from the evaluation set on Grand Challenge.
This section outlines how to prepare the baseline submission for Grand Challenge. Please ensure you are able to build Docker images on your local system, then follow these instructions to BUILD, TEST and EXPORT the container:

- Clone this repository (or a forked version of it) onto your local system.
- Navigate to `./Baseline/`.
- Ensure you can BUILD the container by running `./build.sh`.
- TEST the container by running `./test.sh` and confirm you see the message `Tests successfully passed...`.
- EXPORT the container by running `./export.sh`. You should see a new file created called `Baseline.tar.gz`; submit this file on the submission page of the MS Lesion Segmentation track of Grand Challenge.
This section explains each of the files in the `./Baseline/` directory of the repository in more detail.

- `model1.pth`, `model2.pth` and `model3.pth` are baseline models provided to you, trained following the official instructions at Shifts mswml.
- `Dockerfile` points to the locations of all the models being used, ensures all requirements are in place and then calls `process.py`.
- `requirements.txt` specifies all dependencies needed by the models at inference time; there is no need to specify PyTorch here as it is already specified in `Dockerfile`.
- `uncertainty.py` is a module containing implementations of uncertainty measures computed from deep ensembles, used by `process.py` (a minimal sketch of one such measure is given after this list).
- `process.py` runs inference of the models on each image and saves the predictions appropriately.
- `build.sh` checks whether your system is able to build Docker images.
- `test.sh` checks that `process.py` can read a sample image from `./test/` and generate the appropriate outputs (a continuous prediction of probabilities and an uncertainty map of the same size as the input image) matching the expected output in `./test/expected_output.json`.
- `export.sh` wraps the container into a single `.tar.gz` file that can then be used for submission.
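The exact measures implemented in `uncertainty.py` are defined in the repository itself; purely as an illustration, the sketch below shows one common ensemble-based uncertainty measure (the entropy of the mean predicted lesion probability), assuming each ensemble member outputs a per-voxel probability map. The function name and shapes here are illustrative, not the module's actual API.

```python
import numpy as np


def entropy_of_expected(probs, epsilon=1e-10):
    """Voxel-wise entropy of the mean ensemble probability.

    probs: array of shape (num_models, ...) holding the per-voxel lesion
    probability predicted by each ensemble member.
    Returns an uncertainty map with the same spatial shape as one member.
    """
    mean_p = np.mean(probs, axis=0)  # average over ensemble members
    return -(mean_p * np.log(mean_p + epsilon)
             + (1.0 - mean_p) * np.log(1.0 - mean_p + epsilon))


if __name__ == "__main__":
    # Example: three models, each predicting probabilities for a small volume.
    ensemble_probs = np.random.rand(3, 4, 4, 4)  # placeholder predictions
    uncertainty_map = entropy_of_expected(ensemble_probs)
    print(uncertainty_map.shape)  # (4, 4, 4)
```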
If you have reached this section, we assume you have trained your own models and verified that they perform well locally following the instructions at Shifts mswml. This section explains how to edit this example algorithm container to build your own algorithm container for your models and their inference.
- Delete the existing models `model1.pth`, `model2.pth` and `model3.pth` and add your own models instead.
- Edit `Dockerfile` to ensure the models you have added are correctly linked under the names you have used for them (see how the baseline models are linked in the file, e.g. `COPY --chown=algorithm:algorithm model1.pth /opt/algorithm/model1.pth`).
- Update `requirements.txt` with any additional libraries needed for inference of your models.
- Add your own inference code in `process.py`. You only need to edit the function `def predict(self, *, input_image: SimpleITK.Image) -> SimpleITK.Image:`, which takes a single input image and returns a segmentation map of probabilities together with an uncertainty map; you will also need to initialise your models in the `def __init__()` function (see the sketch after this list).
- You can now check that your container operates correctly using the BUILD, TEST and EXPORT commands above.
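As a rough guide only, the sketch below illustrates the overall shape such a `predict` method might take for a small ensemble. The attribute names (`self.models`, `self.device`), the sigmoid post-processing, and the assumption that the network output matches the input size are hypothetical placeholders; the actual I/O and saving conventions are those already defined in the baseline `process.py`.

```python
import numpy as np
import SimpleITK
import torch


def predict(self, *, input_image: SimpleITK.Image) -> SimpleITK.Image:
    """Hypothetical sketch of an ensemble predict method."""
    # Convert the SimpleITK image to a float tensor with batch/channel axes.
    array = SimpleITK.GetArrayFromImage(input_image).astype(np.float32)
    tensor = torch.from_numpy(array)[None, None].to(self.device)

    # Per-voxel lesion probabilities from each ensemble member
    # (self.models is assumed to be a list of networks loaded in __init__).
    with torch.no_grad():
        member_probs = torch.stack(
            [torch.sigmoid(model(tensor)) for model in self.models]
        ).squeeze().cpu().numpy()

    mean_probs = member_probs.mean(axis=0)

    # An uncertainty map of the same size as the input, here the entropy of
    # the mean probability; save or return it following the conventions
    # already used in process.py (e.g. via the measures in uncertainty.py).
    eps = 1e-10
    uncertainty = -(mean_probs * np.log(mean_probs + eps)
                    + (1.0 - mean_probs) * np.log(1.0 - mean_probs + eps))

    # Wrap the probability map back into a SimpleITK image with the
    # input geometry before returning it.
    prediction = SimpleITK.GetImageFromArray(mean_probs)
    prediction.CopyInformation(input_image)
    return prediction
```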
If you are struggling with any of the above steps or need clarification on how to use this repository, please contact Vatsal Raina ([email protected]).