Commit

Merge branch 'main' into SVRG
samdporter committed Sep 26, 2024
2 parents b4f1923 + 49ac42e commit e6747e8
Showing 16 changed files with 218 additions and 50 deletions.
9 changes: 4 additions & 5 deletions .gitignore
@@ -1,5 +1,6 @@
data/
orgdata/
orgdata
output/
tmp*/
err*.txt
@@ -13,8 +14,6 @@ __pycache__/
*.ahv
*.hv
*.v

NeuroLF_Hoffman_Dataset/
Siemens_mMR_NEMA_IQ/
Siemens_Vision600_thorax/
run.py
*.csv
# Tensorboard files
events.*
36 changes: 36 additions & 0 deletions SIRF_data_preparation/Mediso_NEMA_IQ/README.md
@@ -0,0 +1,36 @@
# Data from the Mediso AnyScan at NPL
Initial processing steps
```sh
orgpath=/mnt/share/petric-wip/NEMA_Challenge
#orgpath=~/devel/PETRIC/orgdata/
# cd $orgpath
#wget https://nplanywhere.npl.co.uk/userportal/?v=4.6.1#/shared/public/nrKe5vWynvyMsp7m/e9e3d4b8-8f83-47be-bf2d-17a96a7a8aac
#unzip NEMA_Challenge.zip
cd ~/devel/PETRIC/data
mkdir -p Mediso_NEMA_IQ
cd Mediso_NEMA_IQ/
# trim sinograms to avoid "corner" problems in mult_factors
# TODO for next data: use 30 (as some problems remain)
for f in additive_term.hs mult_factors.hs prompts.hs; do
SSRB -t 20 $f $orgpath/sinograms/$f
done
# alternative if we don't need to trim
#cp -rp $orgpath/sinograms/* .
# get rid of NaNs
python prepare.py
# now handle VOIs
mkdir PETRIC
cp -rp $orgpath/VOIs/* PETRIC
# cp -rp $orgpath/README.md .
cd PETRIC
# the background VOI files are misspelled ("VOI_backgroung"); copy to the correct name and remove the originals
stir_math VOI_background.hv VOI_backgroung.hv
rm VOI_backgroung.*
rm *ahv
cd ..
python ../../SIRF_data_preparation/create_initial_images.py --template_image=PETRIC/VOI_whole_object.hv .
```
I needed to fix the OSEM header manually:
```
!imaging modality := PT
```
Then run the QC script:
```sh
python ../../SIRF_data_preparation/data_QC.py
```
11 changes: 11 additions & 0 deletions SIRF_data_preparation/Mediso_NEMA_IQ/prepare.py
@@ -0,0 +1,11 @@
# Set NaNs to zero in additive_term
import numpy as np

import sirf.STIR

additive = sirf.STIR.AcquisitionData('additive_term.hs')
add_arr = additive.as_array()
add_arr = np.nan_to_num(add_arr)
new_add = additive.clone()
new_add.fill(add_arr)
new_add.write('additive_term.hs')
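An optional sanity check after running the script above, a minimal sketch reusing only the SIRF and NumPy calls already shown:
```python
import numpy as np

import sirf.STIR

# re-read the rewritten additive term and confirm no NaNs remain
checked = sirf.STIR.AcquisitionData('additive_term.hs')
assert not np.isnan(checked.as_array()).any()
```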
35 changes: 35 additions & 0 deletions SIRF_data_preparation/README.md
@@ -25,3 +25,38 @@ for the Siemens mMR NEMA IQ data (on Zenodo):
- `download_Siemens_mMR_NEMA_IQ.py`: download and extract
- `prepare_mMR_NEMA_IQ_data.py`: prepare the data (prompts etc)
- `BSREM_*.py`: functions with specific settings for a particular data-set

# Steps to follow to prepare data
If starting from Siemens list-mode data and letting SIRF take care of scatter etc., check for instance the [steps for Siemens mMR ACR](Siemens_mMR_ACR/README.md). If pre-prepared data are given, check that all files are named correctly. KT normally puts all data
in `~/devel/PETRIC/data/<datasetname>`, with `datasetname` following the `scanner_phantom-name` convention of the other datasets (the instructions below, and indeed some scripts, might assume this location). Change the working directory to where the data sits and add PETRIC to your Python path, e.g.
```
export PYTHONPATH=~/devel/PETRIC:$PYTHONPATH
```

1. Run initial [data_QC.py](data_QC.py)
```
python -m SIRF_data_preparation.data_QC
```

2. Run [create_initial_images.py](create_initial_images.py).
```
python -m SIRF_data_preparation.create_initial_images --template_image=<some_image>
```
where the template image is one of the given VOIs (it does not matter which one, as they should all have the same geometry). (If you need to create VOIs yourself, you can use `None` or the vendor image).
3. Edit `OSEM_image.hv` to add the modality, radionuclide and duration info that got lost (copy the relevant lines from `prompts.hs`; see the header sketch after this list)
4. Edit [dataset_settings.py](dataset_settings.py) for subsets (used by our reference reconstructions only, not by participants).
5. Edit [petric.py](../petric.py) for slices to use for creating figures (`DATA_SLICES`). Note that `data_QC` outputs centre-of-mass of the VOIs, which can be helpful for this.
6. Run [data_QC.py](data_QC.py) which should now make more plots. Check VOI alignment etc.
```
python -m SIRF_data_preparation.data_QC --dataset=<datasetname>
```
7. `cd ../..`
8. `python -m SIRF_data_preparation.run_OSEM <datasetname>`
9. `python -m SIRF_data_preparation.run_BSREM <datasetname>`
10. Adapt [plot_BSREM_metrics.py](plot_BSREM_metrics.py) (probably only the `<datasetname>`) and run interactively.
11. Copy the BSREM `iter_final` to `data/<datasetname>/PETRIC/reference_image`, e.g.
```
stir_math data/<datasetname>/PETRIC/reference_image.hv output/<datasetname>/iter_final.hv
```
12. `cd data/<datasetname>; rm -f *ahv info.txt warnings.txt`, check its `README.md` etc
13. Transfer to web-server
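A minimal sketch of the extra header lines for step 3 is below. Only the `!imaging modality := PT` line is known to be needed as written; the radionuclide and duration keywords here are assumptions, so copy the exact lines from `prompts.hs` rather than typing them from this sketch.
```
!imaging modality := PT
isotope name := ^18^Fluorine
!number of time frames := 1
image duration (sec)[1] := 600
```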
8 changes: 4 additions & 4 deletions SIRF_data_preparation/Siemens_mMR_ACR/README.md
@@ -4,7 +4,7 @@ ugly and temporary
Steps to follow:
1. `python SIRF_data_preparation/Siemens_mMR_ACR/download.py`
2. `python SIRF_data_preparation/Siemens_mMR_ACR/prepare.py` (As there is no useful mumap, this will **not** do attenuation and scatter estimation)
3. `python SIRF_data_preparation/create_initial_images.py data/Siemens_mMR_ACR/final --template_image=None` (Reconstruct at full FOV size)
3. `python -m SIRF_data_preparation.create_initial_images data/Siemens_mMR_ACR/final --template_image=None` (Reconstruct at full FOV size)
4. `mv data/Siemens_mMR_ACR/final/OSEM_image.* data/Siemens_mMR_ACR/processing` (this is really an NAC image and ideally would be renamed)
5. `python SIRF_data_preparation/Siemens_mMR_ACR/register_mumap.py` (output is orgdata/Siemens_mMR_ACR/processing/reg_mumap.hv etc)
6. However, registration failed, so this needs a manual intervention step:
@@ -24,9 +24,9 @@ Steps to follow:
stir_math --accumulate orgdata/Siemens_mMR_ACR/processing/reg_mumap.hv orgdata/Siemens_mMR_ACR/output/ACR_hardware-to-STIR.nii.gz
```
7. `python SIRF_data_preparation/Siemens_mMR_ACR/prepare.py --end 200` (As there is now a useful mumap, this will now do attenuation and scatter estimation)
8. `python SIRF_data_preparation/create_initial_images.py data/Siemens_mMR_ACR --template_image=../../orgdata/Siemens_mMR_ACR/output/sampling_masks/acr-all-sampling-0-2mm_dipy.nii`
9. `python SIRF_data_preparation/data_QC.py --srcdir='data/Siemens_mMR_ACR' --transverse_slice=99`
10. edit `SIRF_data_prepatation/dataset_settings.py` for subsets etc. edit `OSEM_image.hv` to add modality, radionuclide and duration info which got lost.
8. `python -m SIRF_data_preparation.create_initial_images data/Siemens_mMR_ACR --template_image=../../orgdata/Siemens_mMR_ACR/output/sampling_masks/acr-all-sampling-0-2mm_dipy.nii`
9. `python -m SIRF_data_preparation.data_QC --srcdir='data/Siemens_mMR_ACR' --transverse_slice=99`
10. Edit `petric.py` and `SIRF_data_preparation/dataset_settings.py` for subsets etc. Edit `OSEM_image.hv` to add the modality, radionuclide and duration info which got lost.
11. `python SIRF_data_preparation/Siemens_mMR_ACR/VOI_prep.py`
12. `python SIRF_data_preparation/run_OSEM.py Siemens_mMR_ACR`
13. `python SIRF_data_preparation/run_BSREM.py Siemens_mMR_ACR`
2 changes: 1 addition & 1 deletion SIRF_data_preparation/Siemens_mMR_ACR/prepare.py
Original file line number Diff line number Diff line change
@@ -2,7 +2,7 @@
import logging
import os

from SIRF_data_preparation.data_utilities import prepare_challenge_Siemens_data, the_data_path, the_orgdata_path
from ..data_utilities import prepare_challenge_Siemens_data, the_data_path, the_orgdata_path

scanID = 'Siemens_mMR_ACR'

@@ -2,7 +2,7 @@
import logging
import os

from .data_utilities import prepare_challenge_Siemens_data, the_data_path
from ..data_utilities import prepare_challenge_Siemens_data, the_data_path, the_orgdata_path

this_directory = os.path.dirname(__file__)
repo_directory = os.path.dirname(this_directory)
8 changes: 8 additions & 0 deletions SIRF_data_preparation/Siemens_mMR_NEMA_IQ_lowcounts/README.md
@@ -0,0 +1,8 @@
# Siemens mMR NEMA IQ data (low counts)

This relies on the "normal" count-level processing having been done first, specifically the download and the ROIs.

Steps to follow:
1. `prepare_mMR_NEMA_IQ_data.py`
2. Copy the VOIs from the normal-count dataset (see the sketch after this list)
3. further steps as [normal](../README.md#steps-to-follow-to-prepare-data)
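A minimal sketch of step 2, assuming the normal-count VOIs live under `data/Siemens_mMR_NEMA_IQ/PETRIC` and this dataset uses `data/Siemens_mMR_NEMA_IQ_lowcounts` (both paths follow the layout in the main README but are assumptions here):
```sh
cd ~/devel/PETRIC/data
mkdir -p Siemens_mMR_NEMA_IQ_lowcounts/PETRIC
cp -rp Siemens_mMR_NEMA_IQ/PETRIC/VOI_* Siemens_mMR_NEMA_IQ_lowcounts/PETRIC/
```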
Empty file.
@@ -0,0 +1,48 @@
import argparse
import logging
import os

from ..data_utilities import prepare_challenge_Siemens_data, the_data_path, the_orgdata_path

this_directory = os.path.dirname(__file__)
repo_directory = os.path.dirname(this_directory)
challenge_data_path = os.path.join(repo_directory, 'data')

if __name__ == '__main__':
parser = argparse.ArgumentParser(description='SyneRBI PETRIC Siemens mMR NEMA IQ data preparation script.')

parser.add_argument('--log', type=str, default='warning')
parser.add_argument('--start', type=float, default=0)
parser.add_argument('--end', type=float, default=100)
parser.add_argument('--raw_data_path', type=str, default=None)
args = parser.parse_args()

if args.log in ['debug', 'info', 'warning', 'error', 'critical']:
level = eval(f'logging.{args.log.upper()}')
logging.basicConfig(level=level)
logging.info(f"Setting logging level to {args.log.upper()}")

start = args.start
end = args.end

if args.raw_data_path is None:
data_path = the_orgdata_path('Siemens_mMR_NEMA_IQ', 'raw', 'NEMA_IQ')
else:
data_path = args.raw_data_path

data_path = os.path.abspath(data_path)
logging.debug(f"Raw data path: {data_path}")

intermediate_data_path = the_orgdata_path('Siemens_mMR_NEMA_IQ_lowcounts', 'processing')
challenge_data_path = the_data_path('Siemens_mMR_NEMA_IQ_lowcounts')

os.makedirs(challenge_data_path, exist_ok=True)
os.chdir(challenge_data_path)
os.makedirs(intermediate_data_path, exist_ok=True)

f_template = os.path.join(data_path, 'mMR_template_span11.hs')

prepare_challenge_Siemens_data(data_path, challenge_data_path, intermediate_data_path, '20170809_NEMA_',
'60min_UCL.l.hdr', 'MUMAP_UCL.v', 'MUMAP_UCL.hv', 'UCL.n', 'norm.n.hdr', f_template,
'prompts', 'mult_factors', 'additive_term', 'randoms', 'attenuation_factor',
'attenuation_correction_factor', 'scatter', start, end)
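A hedged usage sketch for the script above; the module path is an assumption based on the lowcounts README (step 1, `prepare_mMR_NEMA_IQ_data.py`), and it must be run with `-m` from the repository root so that the relative `..data_utilities` import resolves:
```sh
cd ~/devel/PETRIC
python -m SIRF_data_preparation.Siemens_mMR_NEMA_IQ_lowcounts.prepare_mMR_NEMA_IQ_data --log info --start 0 --end 100
```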
9 changes: 4 additions & 5 deletions SIRF_data_preparation/create_initial_images.py
@@ -8,15 +8,15 @@
Options:
-t <template_image>, --template_image=<template_image> filename (relative to <data_path>) of image to use
for data sizes [default: PETRIC/VOI_whole_phantom.hv]
for data sizes [default: PETRIC/VOI_whole_object.hv]
-s <xy-size>, --xy-size=<xy-size> force xy-size (do not use when using VOIs as init) [default: -1]
-S <subsets>, --subsets=<subsets> number of subsets [default: 2]
-i <subiterations>, --subiterations=<subiterations> number of sub-iterations [default: 14]
"""
# Copyright 2024 Rutherford Appleton Laboratory STFC
# Copyright 2024 University College London
# Licence: Apache-2.0
__version__ = '0.1.0'
__version__ = '0.2.0'

import logging
import math
@@ -82,9 +82,8 @@ def compute_kappa_image(obj_fun, initial_image):
WARNING: Assumes the objective function has been set-up already
"""
# This needs SIRF 3.7. If you don't have that yet, you should probably upgrade anyway!
Hessian_row_sum = obj_fun.multiply_with_Hessian(initial_image, initial_image.allocate(1))
return (-1 * Hessian_row_sum).power(.5)
minus_Hessian_row_sum = -1 * obj_fun.multiply_with_Hessian(initial_image, initial_image.allocate(1))
return minus_Hessian_row_sum.maximum(0).power(.5)


def main(argv=None):
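A note on the `compute_kappa_image` change: clamping at zero before the square root presumably guards against voxels where the Hessian row sum comes out slightly positive, which would otherwise produce NaNs. As a formula (a reading of the code above, with L the objective function, x0 the initial image and 1 a uniform image of ones):
```latex
\kappa_j = \sqrt{\max\bigl(0,\; -\bigl[\nabla^2 L(x_0)\,\mathbf{1}\bigr]_j\bigr)}
```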
24 changes: 21 additions & 3 deletions SIRF_data_preparation/data_QC.py
@@ -6,17 +6,18 @@
Options:
-h, --help
--srcdir=<path> pathname (must have trailing slash!) [default: ./]
--srcdir=<path> pathname. Will default to current directory unless dataset is set
--skip_sino_profiles do not plot the sinogram profiles
--transverse_slice=<i> idx [default: -1]
--coronal_slice=<c> idx [default: -1]
--sagittal_slice=<s> idx [default: -1]
--dataset=<name> dataset name. if set, it is used to override default slices
Note that -1 means to use the middle of the image
"""
# Copyright 2024 University College London
# Licence: Apache-2.0
__version__ = '0.2.0'
__version__ = '0.4.0'

import os
import os.path
@@ -29,6 +30,8 @@
from scipy import ndimage

import sirf.STIR as STIR
from SIRF_data_preparation.dataset_settings import get_settings
from SIRF_data_preparation.data_utilities import the_data_path

STIR.AcquisitionData.set_storage_scheme('memory')

@@ -119,7 +122,10 @@ def VOI_checks(allVOInames, OSEM_image=None, reference_image=None, srcdir='.', *
print(f"VOI {VOIname} does not exist")
continue
VOI = STIR.ImageData(filename)
COM = np.rint(ndimage.center_of_mass(VOI.as_array()))
VOI_arr = VOI.as_array()
COM = np.rint(ndimage.center_of_mass(VOI_arr))
num_voxels = VOI_arr.sum()
print(f"VOI: {VOIname}: COM (in indices): {COM} voxels {num_voxels} = {num_voxels * np.prod(VOI.spacing)} mm^3")
plt.figure()
plot_image(VOI, save_name=prefix, vmin=0, vmax=1, transverse_slice=int(COM[0]), coronal_slice=int(COM[1]),
sagittal_slice=int(COM[2]))
@@ -149,13 +155,25 @@ def VOI_checks(allVOInames, OSEM_image=None, reference_image=None, srcdir='.', *

def main(argv=None):
args = docopt(__doc__, argv=argv, version=__version__)
dataset = args['--dataset']
srcdir = args['--srcdir']
skip_sino_profiles = args['--skip_sino_profiles']
slices = {}
slices["transverse_slice"] = literal_eval(args['--transverse_slice'])
slices["coronal_slice"] = literal_eval(args['--coronal_slice'])
slices["sagittal_slice"] = literal_eval(args['--sagittal_slice'])

if (dataset):
if srcdir is None:
srcdir = the_data_path(dataset)
settings = get_settings(dataset)
for key in slices.keys():
if slices[key] == -1 and key in settings.slices:
slices[key] = settings.slices[key]
else:
if srcdir is None:
srcdir = os.getcwd()

if not skip_sino_profiles:
acquired_data = STIR.AcquisitionData(os.path.join(srcdir, 'prompts.hs'))
additive_term = STIR.AcquisitionData(os.path.join(srcdir, 'additive_term.hs'))
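Usage sketch for the new `--dataset` handling (both invocations also appear in the READMEs above):
```sh
# srcdir and default slices resolved from the dataset name
python -m SIRF_data_preparation.data_QC --dataset=Siemens_mMR_ACR
# explicit srcdir and slice override still work as before
python -m SIRF_data_preparation.data_QC --srcdir='data/Siemens_mMR_ACR' --transverse_slice=99
```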
21 changes: 7 additions & 14 deletions SIRF_data_preparation/dataset_settings.py
@@ -1,7 +1,12 @@
"""Settings for recon and display used in the data preparation"""

from dataclasses import dataclass

from petric import DATA_SLICES

DATA_SUBSETS = {
'Siemens_mMR_NEMA_IQ': 7, 'Siemens_mMR_NEMA_IQ_lowcounts': 7, 'Siemens_mMR_ACR': 7, 'NeuroLF_Hoffman_Dataset': 16,
'Mediso_NEMA_IQ': 12, 'Siemens_Vision600_thorax': 5}


@dataclass
class DatasetSettings:
@@ -10,16 +15,4 @@ class DatasetSettings:


def get_settings(scanID: str):
if scanID == 'Siemens_mMR_NEMA_IQ':
slices = {'transverse_slice': 72, 'coronal_slice': 109} # , 'sagittal_slice': 89}
num_subsets = 7
elif scanID == 'Siemens_mMR_ACR':
slices = {'transverse_slice': 99}
num_subsets = 7
elif scanID == 'NeuroLF_Hoffman_Dataset':
slices = {'transverse_slice': 72}
num_subsets = 16
else: # Vision
slices = {}
num_subsets = 5
return DatasetSettings(num_subsets, slices)
return DatasetSettings(DATA_SUBSETS[scanID], DATA_SLICES[scanID])
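A short usage sketch of the refactored `get_settings` (attribute names `num_subsets` and `slices` are taken from `run_OSEM.py` below; this assumes the PETRIC repository root is on `PYTHONPATH` and that `petric.DATA_SLICES` has an entry for the chosen dataset):
```python
from SIRF_data_preparation.dataset_settings import get_settings

settings = get_settings('Mediso_NEMA_IQ')
print(settings.num_subsets)  # 12, from DATA_SUBSETS above
print(settings.slices)       # display slices, now looked up in petric.DATA_SLICES
```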
15 changes: 10 additions & 5 deletions SIRF_data_preparation/plot_BSREM_metrics.py
@@ -1,4 +1,5 @@
"""Preliminary file to check evolution of metrics as well as pass_index"""
# %%
# """Preliminary file to check evolution of metrics as well as pass_index"""
# %load_ext autoreload
# %autoreload 2
from pathlib import Path
@@ -21,7 +22,8 @@

# scanID = 'Siemens_Vision600_thorax'
# scanID = 'NeuroLF_Hoffman_Dataset'
scanID = 'Siemens_mMR_NEMA_IQ'
# scanID = 'Siemens_mMR_NEMA_IQ'
scanID = 'Mediso_NEMA_IQ'

srcdir = SRCDIR / scanID
outdir = OUTDIR / scanID
@@ -69,16 +71,19 @@
reference_image = STIR.ImageData(str(datadir / 'iter_final.hv'))
qm = QualityMetrics(reference_image, data.whole_object_mask, data.background_mask, tb_summary_writer=None,
voi_mask_dict=data.voi_masks)
# %% get update ("iteration") numbers from objective functions
last_iteration = int(objs[-1, 0] + .5)
iteration_interval = int(objs[-1, 0] - objs[-2, 0] + .5)
# find interval(don't use last value, as that interval can be smaller)
iteration_interval = int(objs[-2, 0] - objs[-3, 0] + .5)
if datadir1.is_dir():
last_iteration = int(objs0[-1, 0] + .5)
iteration_interval = int(objs0[-1, 0] - objs0[-2, 0] + .5) * 2
# %%
iters = range(0, last_iteration, iteration_interval)
m = get_metrics(qm, iters, srcdir=datadir)
# %%
OSEMiters = range(0, 400, 20)
OSEMobjs = read_objectives(OSEMdir)
OSEMiters = OSEMobjs[:, 0].astype(int)
OSEMm = get_metrics(qm, OSEMiters, srcdir=OSEMdir)
# %%
fig = plt.figure()
@@ -107,7 +112,7 @@
fig.savefig(outdir / f'{scanID}_metrics_BSREM.png')

# %%
idx = pass_index(m, numpy.array([.01, .01, .005, .005, .005]), 10)
idx = pass_index(m, numpy.array([.01, .01] + [.005 for i in range(len(data.voi_masks))]), 10)
iter = iters[idx]
print(iter)
image = STIR.ImageData(str(datadir / f"iter_{iter:04d}.hv"))
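For the `pass_index` change: the threshold vector now has one entry per metric, 1% for the first two (presumably the whole-object and background metrics) and 0.5% per VOI. A tiny illustration, with the VOI count hard-coded purely for the example:
```python
import numpy

n_vois = 3  # hypothetical; the script uses len(data.voi_masks)
thresholds = numpy.array([.01, .01] + [.005 for i in range(n_vois)])
print(thresholds)  # [0.01  0.01  0.005 0.005 0.005]
```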
2 changes: 1 addition & 1 deletion SIRF_data_preparation/run_OSEM.py
@@ -42,7 +42,7 @@
data = get_data(srcdir=srcdir, outdir=outdir)
# %%
algo = main_OSEM.Submission(data, settings.num_subsets, update_objective_interval=20)
algo.run(20, callbacks=[MetricsWithTimeout(**settings.slices, seconds=5000, outdir=outdir)])
algo.run(400, callbacks=[MetricsWithTimeout(**settings.slices, seconds=5000, outdir=outdir)])
# %%
fig = plt.figure()
data_QC.plot_image(algo.get_output(), **settings.slices)