

Copyright © 2016-2023 Medical Image Analysis Laboratory, University Hospital Center and University of Lausanne (UNIL-CHUV), Switzerland

This software is distributed under the open-source BSD 3-Clause License. See LICENSE file for details.


The Medical Image Analysis Laboratory Super-Resolution ToolKit (MIALSRTK) provides a set of C++ and Python tools necessary to perform motion-robust super-resolution fetal MRI reconstruction.

The original C++ MIALSRTK library includes all algorithms and methods for brain extraction, intensity standardization, motion estimation and super-resolution. It uses the CMake build system and depends on the open-source Insight ToolKit (ITK) image processing library, the TCLAP command-line parser library, and OpenMP for multi-threading.

MIALSRTK has recently been extended with the pymialsrtk Python 3 library, following recent advances in the standardization of neuroimaging data organization and processing workflows such as the Brain Imaging Data Structure (BIDS) and BIDS App standards. This library has a modular architecture built on top of the Nipype dataflow library and consists of (1) processing nodes that interface with each of the MIALSRTK C++ tools and (2) a processing pipeline that links these interfaces in a common workflow.

The processing pipeline, with all dependencies including the C++ MIALSRTK tools, is encapsulated in a Docker image container, which handles datasets organized following the BIDS standard and is distributed as a BIDS App @ Docker Hub. For execution on high-performance computing clusters, a Singularity image is also made freely available @ Sylabs Cloud. To facilitate the use of Docker or Singularity, pymialsrtk provides two Python command-line wrappers (mialsuperresolutiontoolkit_docker and mialsuperresolutiontoolkit_singularity) that can generate and run the appropriate command.

All these design considerations allow us not only to (1) represent the entire processing pipeline as an execution graph in which the MIALSRTK C++ tools are connected, but also to (2) provide a mechanism for recording data provenance and execution details, and to (3) easily customize the BIDS App to suit specific needs, as interfaces with new tools can be added with relatively little effort to support additional algorithms.

Installation

  • Install the Docker or Singularity engine

  • In a Python 3.7 environment, install pymialsrtk with pip:

    pip install pymialsrtk
    
  • You are ready to use the MIALSRTK BIDS App wrappers!
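The steps above can be sketched as a single shell session. This is only an illustration: the virtual-environment name `mialsrtk-env` is a placeholder, and the `docker --version` call simply confirms a container engine is installed.

```shell
# 1. Confirm a container engine is available (Docker shown here;
#    Singularity works analogously).
docker --version

# 2. Create a Python 3.7 environment and install pymialsrtk.
#    The environment name "mialsrtk-env" is a placeholder.
python3 -m venv mialsrtk-env
source mialsrtk-env/bin/activate
pip install pymialsrtk

# 3. The BIDS App wrappers are now available on the PATH.
mialsuperresolutiontoolkit_docker -h
```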

Usage

The mialsuperresolutiontoolkit_docker and mialsuperresolutiontoolkit_singularity Python wrappers to the MIALSRTK BIDS App accept the following command-line arguments:

$ mialsuperresolutiontoolkit_[docker|singularity] -h

usage: mialsuperresolutiontoolkit_[docker|singularity] [-h]
                                     [--run_type {sr,preprocessing}]
                                     [--participant_label PARTICIPANT_LABEL [PARTICIPANT_LABEL ...]]
                                     [--param_file PARAM_FILE]
                                     [--openmp_nb_of_cores OPENMP_NB_OF_CORES]
                                     [--nipype_nb_of_cores NIPYPE_NB_OF_CORES]
                                     [--memory MEMORY]
                                     [--masks_derivatives_dir MASKS_DERIVATIVES_DIR]
                                     [--labels_derivatives_dir LABELS_DERIVATIVES_DIR]
                                     [--all_outputs] [-v] [--verbose]
                                     [--track_carbon_footprint]
                                     bids_dir output_dir {participant}

Argument parser of the MIALSRTK BIDS App Python wrapper

positional arguments:
  bids_dir              The directory with the input dataset formatted
                        according to the BIDS standard.
  output_dir            The directory where the output files should be stored.
                        If you are running group level analysis this folder
                        should be prepopulated with the results of the
                        participant level analysis.
  {participant}         Level of the analysis that will be performed. Only
                        participant is available

optional arguments:
  -h, --help            show this help message and exit
  --run_type {sr,preprocessing}
                        Type of pipeline that is run. Can choose between
                        running the super-resolution pipeline (`sr`) or only
                        preprocessing (`preprocessing`).
  --participant_label PARTICIPANT_LABEL [PARTICIPANT_LABEL ...]
                        The label(s) of the participant(s) that should be
                        analyzed. The label corresponds to
                        sub-<participant_label> from the BIDS spec (so it does
                        not include "sub-"). If this parameter is not provided
                        all subjects should be analyzed. Multiple participants
                        can be specified with a space separated list.
  --param_file PARAM_FILE
                        Path to a JSON file containing subjects' exams
                        information and super-resolution total variation
                        parameters.
  --openmp_nb_of_cores OPENMP_NB_OF_CORES
                        Specify number of cores used by OpenMP threads
                        Especially useful for NLM denoising and slice-to-
                        volume registration. (Default: 0, meaning it will be
                        determined automatically)
  --nipype_nb_of_cores NIPYPE_NB_OF_CORES
                        Specify number of cores used by the Nipype workflow
                        library to distribute the execution of independent
                        processing workflow nodes (i.e. interfaces)
                        (Especially useful in the case of slice-by-slice bias
                        field correction and intensity standardization steps
                        for example). (Default: 0, meaning it will be
                        determined automatically)
  --memory MEMORY       Limit the workflow to using the amount of specified
                        memory [in GB] (Default: 0, the workflow memory
                        consumption is not limited)
  --masks_derivatives_dir MASKS_DERIVATIVES_DIR
                        Use manual brain masks found in
                        ``<output_dir>/<masks_derivatives_dir>/`` directory
  --labels_derivatives_dir LABELS_DERIVATIVES_DIR
                        Use low-resolution labelmaps found in
                        ``<output_dir>/<labels_derivatives_dir>/`` directory.
  --all_outputs         Whether or not all outputs should be kept (e.g.
                        preprocessed LR images)
  -v, --version         show program's version number and exit
  --verbose             Verbose mode
  --track_carbon_footprint
                        Track carbon footprint with `codecarbon
                        <https://codecarbon.io/>`_ and save results in a CSV
                        file called ``emissions.csv`` in the
                        ``<bids_dir>/code`` directory.
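As an illustration of the arguments above, a typical invocation of the Docker wrapper might look like the following. The dataset paths, participant label, and parameter file name are hypothetical placeholders; the flags themselves come from the help text.

```shell
# Hypothetical example: /home/user/ds-fetal and sub-01 are placeholders.
# Positional arguments: bids_dir, output_dir, analysis level (participant).
mialsuperresolutiontoolkit_docker \
    /home/user/ds-fetal \
    /home/user/ds-fetal/derivatives \
    participant \
    --participant_label 01 \
    --param_file /home/user/ds-fetal/code/params.json \
    --openmp_nb_of_cores 4 \
    --nipype_nb_of_cores 2
```

The Singularity wrapper (mialsuperresolutiontoolkit_singularity) takes the same arguments, so the same command applies on an HPC cluster by swapping the executable name.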

Credits

Sébastien Tourbier

🎨 💻 🚇 ⚠️ 🐛 💡 📖 🤔 👀

Priscille de Dumast

🎨 💡 ⚠️ 💻 📖 🤔 👀

hamzake

💡 ⚠️ 💻 📖 🤔

Thomas Sanchez

🐛 💻 📖 💡 🤔 🚇 👀

Hélène Lajous

🐛 ⚠️ 👀

Patric Hagmann

🔣 🔍

Meritxell Bach

🔍

This project follows the all-contributors specification. Contributions of any kind welcome!