diff --git a/README.md b/README.md index 415fb56..cce12fd 100644 --- a/README.md +++ b/README.md @@ -2,8 +2,8 @@ [![Download the dataset from DataverseNL](https://img.shields.io/badge/download-BIDS%20dataset-9cf.svg)](https://doi.org/10.34894/R1TNL8) [![Explore the data interactively](https://img.shields.io/badge/explore-Interactive%20Web%20App-ff69b4.svg)](https://rt-me-fmri.herokuapp.com/) -[![Cite this work](https://img.shields.io/badge/cite-Data%20paper-brightgreen.svg)](https://www.biorxiv.org/content/10.1101/2020.12.07.414490v1) -[![Cite this work](https://img.shields.io/badge/cite-Methods%20paper-green.svg)]() +[![Read this work](https://img.shields.io/badge/read-Data%20paper-brightgreen.svg)](https://www.biorxiv.org/content/10.1101/2020.12.07.414490v1) +[![Read this work](https://img.shields.io/badge/read-Methods%20paper-green.svg)]() [![Reproduce figures](https://img.shields.io/badge/reproduce-Figures%20(data%20paper)-red.svg)](https://github.com/jsheunis/rt-me-fMRI/blob/master/rt-me-fmri_reproduce_dataFigures.ipynb) [![Reproduce figures](https://img.shields.io/badge/reproduce-Figures%20(methods%20paper)-orange.svg)](https://github.com/jsheunis/rt-me-fMRI/blob/master/rt-me-fmri_reproduce_methodsFigures.ipynb) [![Reproduce results](https://img.shields.io/badge/reproduce-Results%20(methods%20paper)-blueviolet.svg)](https://github.com/jsheunis/rt-me-fMRI/blob/master/rt-me-fmri_reproduce_resultsSummaries.ipynb) @@ -22,15 +22,17 @@ This repository contains descriptions, code and data related to the real-time mu [**The effects of multi-echo fMRI combination and rapid T2\*-mapping on offline and real-time BOLD sensitivity**]() Below we provide more information and instructions regarding: -- The dataset summary -- How to download the data -- How to explore the data -- How to reproduce results and figures from the publications -- Relevant software tools -- How to cite this work -- How to contribute +- [The dataset summary](#summary) +- [How to download the data](#download) +- [How to explore the data](#explore) +- [How to reproduce results and figures from the publications](#reproduce_data) +- [Relevant software tools](#software) +- [How to cite this work](#cite) +- [How to contribute](#contribute) +
+ ## Data summary The `rt-me-fMRI` dataset is a multi-echo fMRI dataset (N=28 healthy participants) with four task-based and two resting state runs that were collected, curated and made available to the research community. Its main purpose is to advance the development of methods for real-time multi-echo functional magnetic resonance imaging analysis with applications in real-time quality control, adaptive paradigms, and neurofeedback, although the variety of experimental task paradigms supports a multitude of use cases. Tasks include finger tapping, emotional face and shape matching, imagined finger tapping and imagined emotion processing. This figure summarises the collected data: @@ -41,6 +43,8 @@ The full data description is available as a [data article](https://doi.org/10.11 Several depictions of the data tree can be viewed [here](https://github.com/jsheunis/rt-me-fMRI/blob/master/data_tree.md) +
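For orientation, the sketch below (not part of the repository) shows how the three echoes of a single functional run are named under the BIDS `echo-<index>` entity; the root path, subject label, task name and file extension are placeholders, so treat the linked data tree as the authoritative layout.

```matlab
% Illustrative sketch only: build the filenames of the three echo time series
% for one functional run, following the BIDS echo-<index> naming used in this
% dataset. The root path, labels and extension are placeholders (the shared
% archive may use .nii.gz instead of .nii).
bids_dir = '/path/to/rt-me-fmri';          % wherever the downloaded dataset lives
sub = '002'; task = 'motor'; run = '1';    % example labels; see data_tree.md for the actual ones
for echo = 1:3
    fn = fullfile(bids_dir, ['sub-' sub], 'func', ...
        sprintf('sub-%s_task-%s_run-%s_echo-%d_bold.nii', sub, task, run, echo));
    fprintf('%s\n', fn);
end
```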
+ ## Downloading the data The `rt-me-fMRI` dataset is available for reuse for the purpose of scientific research or education in the field of functional magnetic resonance imaging. If you wish to use the data, you have to agree to the terms of a [Data Use Agreement](https://github.com/jsheunis/rt-me-fMRI/blob/master/DUA.md) when downloading the data. @@ -51,22 +55,31 @@ The dataset was collected, processed and shared in accordance with the European Much of the work that went into this administrative process has been documented as part of the output of the [Open Brain Consent](https://open-brain-consent.readthedocs.io/en/stable/gdpr/index.html) Working Group, accessible [here](https://doi.org/10.31234/osf.io/f6mnp). + +
+ ## Exploring the data To explore the dataset's derivative measures interactively, visit [this web application](https://rt-me-fmri.herokuapp.com). It was built with Python using the [Plotly Dash framework](https://plotly.com/dash/). The open source code base is available at [this repository](https://github.com/jsheunis/rt-me-fmri-dash). +
+ ## Reproducibility: data preparation The data preparation process is [documented here](https://github.com/jsheunis/rt-me-fMRI/tree/master/data_setup). This includes code to convert neuroimaging, physiological and other data to BIDS format. +
+ ## Reproducibility: results After preprocessing and quality checking of the data (see a full description in the data article), the data were processed and analysed as described in the methods article. Because of data storage limitations, these derivative data are not shared together with the `rt-me-fMRI` dataset. However, code and instructions are provided to allow these derivative data to be reproduced. Additionally, code and instructions are provided to subsequently generate the summary data from which the results of the methods paper as well as the data underlying the Dash application are derived: -- [Code and instructions to generate derivative data](https://github.com/jsheunis/rt-me-fMRI/blob/master/rt-me-fmri_reproduce_derivativeData.ipynb) +- [Code and instructions to generate derivative data](https://github.com/jsheunis/rt-me-fMRI/blob/master/rt-me-fmri_reproduce_matlabProcessing.m) - [Code and instructions to generate summary data for figures and Dash application](https://github.com/jsheunis/rt-me-fMRI/blob/master/rt-me-fmri_reproduce_resultsSummaries.ipynb) +
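As a rough orientation (a hedged sketch, not a verbatim excerpt), reproducing the derivative data amounts to pointing the MATLAB processing script at the settings file and stepping through the fMRwhy workflows it calls; the paths below are placeholders, and the script is intended to be run section by section rather than end to end.

```matlab
% Hedged sketch of the entry point used by rt-me-fmri_reproduce_matlabProcessing.m.
% It assumes fMRwhy (v0.0.1) and its dependencies are already on the MATLAB path;
% all paths are placeholders.
settings_fn = '/path/to/rt-me-fMRI/matlab/fmrwhy_settings_rtmefMRI.m';
fmrwhy_bids_workflowQC(settings_fn);   % minimal preprocessing and quality control reporting
fmrwhy_workflow_offlineME;             % T2*/S0 estimation, multi-echo combination, tSNR
```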
+ ## Reproducibility: figures The following notebooks contain code and descriptions that allows figures for the data and methods articles to be reproduced: @@ -75,6 +88,8 @@ The following notebooks contain code and descriptions that allows figures for th - [Methods figures](https://github.com/jsheunis/rt-me-fMRI/blob/master/rt-me-fmri_reproduce_methodsFigures.ipynb) +
+ ## Software tools All (pre)processing and major data analysis steps for both the data article and methods article were done using the open source MATLAB-based `fMRwhy` toolbox (v0.0.1; https://github.com/jsheunis/fMRwhy), which was developed over the course of this project. `fMRwhy` has conditional dependencies: @@ -86,6 +101,9 @@ All (pre)processing and major data analysis steps for both the data article and - TAPAS PhysIO (v3.2.0; https://github.com/translationalneuromodeling/tapas/releases/tag/v3.2.0; Kasper et al., 2017) - Raincloud plots (v1.1; https://github.com/RainCloudPlots/RainCloudPlots/releases/tag/v1.1; Allen et al., 2019). + +
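Because `fMRwhy` and its dependencies are plain MATLAB toolboxes, a typical setup is to download each one and add it to the MATLAB path before running the workflows. The sketch below is hypothetical: the install locations are placeholders, and SPM12 is assumed from the full dependency list rather than confirmed by this excerpt.

```matlab
% Hedged sketch (not part of the repository): put fMRwhy and the dependencies
% named above on the MATLAB path. All install locations are placeholders, and
% SPM12 is an assumed dependency; adjust genpath usage to each toolbox's layout.
addpath(genpath('/path/to/fMRwhy'));              % fMRwhy v0.0.1
addpath('/path/to/spm12');                        % SPM12 (assumed; see full dependency list above)
addpath(genpath('/path/to/tapas/PhysIO/code'));   % TAPAS PhysIO v3.2.0
addpath(genpath('/path/to/RainCloudPlots'));      % Raincloud plots v1.1
savepath;                                         % optional: persist the path across sessions
```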
+ ## Citing this work Papers, book chapters, books, posters, oral presentations, and all other presentations of results derived from the rt-me-fMRI dataset should acknowledge the origin of the data as follows: @@ -102,6 +120,7 @@ And the following citation when referring to the methods article: >Heunis, S., Breeuwer, M., Caballero-Gaudes, C., Hellrung, L., Huijbers, W., Jansen, J.F.A., Lamerichs, R., Zinger, S., Aldenkamp, A.P., 2020. The effects of multi-echo fMRI combination and rapid T2*-mapping on offline and real-time BOLD sensitivity. bioRxiv [tbd...] +
## Contributions / feedback diff --git a/matlab/fmrwhy_settings_RTME.m b/matlab/fmrwhy_settings_RTME.m deleted file mode 100644 index 4370604..0000000 --- a/matlab/fmrwhy_settings_RTME.m +++ /dev/null @@ -1,177 +0,0 @@ -% fmrwhy_settings_template: Settings for the fmrwhy_workflow_qc pipeline - -% Main data source: BIDS root folder -options.bids_dir = '/Volumes/TSM/NEUFEPME_data_BIDS'; - -% Subjects to run -options.subjects_output = 'all'; - -% Set template T1w image (set to '' if a single T1w image was collected) -options.anat_template_session = ''; - -% Set template for functional realignment purposes (if not needed, set to '') -options.template_task = 'rest'; -options.template_session = ''; -options.template_run = '1'; -options.template_echo = '2'; - -% Sequence parameters -options.TR = 2; -options.N_slices = 34; -options.Ndummies = 5; -options.Nscans = 210; -options.TE = [14 28 42]; % assume for all functional runs - -% Settings for structFunc processing - -% Settings for anatLocaliser processing -options.map_rois = 0; -%options.roi_orig_dir = '/Volumes/Stephan_WD/NEUFEPME_data_templates'; -options.roi_orig_dir = '/Users/jheunis/Desktop/NEUFEPME_data_templates'; -options.roi = struct; -% IMPORTANT: structure has to be named using the task name as in options.tasks: options.roi.(task).orig_fn -% options.roi.motor.orig_fn = {fullfile(options.roi_orig_dir, 'Left_Motor_4a_4p.nii'), -% fullfile(options.roi_orig_dir, 'Right_Motor_4a_4p.nii')}; % Raw ROI filenames - -% options.roi.motor.name = {'Left Motor', 'Right Motor'}; % For plots and strings -% options.roi.motor.desc = {'leftMotor', 'rightMotor'}; % For BIDS file naming (after normalisation to functional space) - -% options.roi.emotion.orig_fn = {fullfile(options.roi_orig_dir, 'Bilateral_Amygdala_allregions.nii'), -% fullfile(options.roi_orig_dir, 'Left_Amygdala_allregions.nii'), -% fullfile(options.roi_orig_dir, 'Right_Amygdala_allregions.nii')}; % Raw ROI filenames - -% options.roi.emotion.name = {'Bilateral Amygdala', 'Left Amygdala', 'Right Amygdala'}; % For plots and strings -% options.roi.emotion.desc = {'bilateralAmygdala', 'leftAmygdala', 'rightAmygdala'}; % For BIDS file naming (after normalisation to functional space) - -%options.roi.(task).roi_fn = ROIs in subject space (not resliced) -%options.roi.(task).rroi_fn = resliced ROIs in subject space - -% options.roi.motor.orig_fn = {fullfile(options.roi_orig_dir, 'Left_Motor_4a_4p.nii'), -% fullfile(options.roi_orig_dir, 'Right_Motor_4a_4p.nii')}; % Raw ROI filenames - -% options.roi.motor.name = {'Left Motor', 'Right Motor'}; % For plots and strings -% options.roi.motor.desc = {'leftMotor', 'rightMotor'}; % For BIDS file naming (after normalisation to functional space) - -options.roi.emotionProcessing.orig_fn = {fullfile(options.roi_orig_dir, 'Fusiform_Gyrus_allregions.nii')}; % Raw ROI filenames - -options.roi.emotionProcessing.name = {'Bilateral Fusiform Gyrus'}; % For plots and strings -options.roi.emotionProcessing.desc = {'fusiformGyrus'}; % For BIDS file naming (after normalisation to functional space) - -% Settings for basicFunc processing -options.fwhm = 7; -options.basicfunc_full = false; % if true, preprocessing will include all combinations of slice time correction, realignment and smoothing, useful for later analyses; if false, only include steps necessary for QC -options.include_stc = false; - -% Settings for generateMultRegr routine -options.confounds.include_volterra = 1; -options.confounds.include_fd = 1; -options.confounds.include_tissue = 1; 
-options.confounds.include_physio = 1; - -% generateMultRegr: framewise displacement -options.r = 50; % mm -options.FD_threshold = 0; % set as 0 to calculate with both standard thresholds 0.2 and 0.5 mm. - -% generateMultRegr: PhysIO -options.physio.options.cardiac_fn = ''; -options.physio.options.respiration_fn = ''; -options.physio.options.vendor = 'BIDS'; -options.physio.options.sampling_interval = 0.002; % 500 Hz ==> Philips wired acquisition -options.physio.options.align_scan = 'last'; -options.physio.options.Nslices = options.N_slices; -options.physio.options.TR = options.TR; % in seconds -options.physio.options.Ndummies = options.Ndummies; % include, even if these are not included in the fMRI timeseries data exported from the scanner -options.physio.options.Nscans = options.Nscans; -options.physio.options.onset_slice = 1; -options.physio.options.cardiac_modality = 'PPU'; -options.physio.options.output_multiple_regressors_fn = 'PhysIO_multiple_regressors.txt'; % text file name -options.physio.options.level = 0; % verbose.level = 0 ==> do not generate figure outputs -options.physio.options.fig_output_file = ''; % unnecessary if verbose.level = 0, but still initialized here - -% Settings for QC -options.theplot.intensity_scale = [-6 6]; -options.qc_overwrite_tissuecontours = true; -options.qc_overwrite_ROIcontours = true; -options.qc_overwrite_theplot = false; -options.qc_overwrite_statsoutput = true; - -% Settings for first level analysis: steps to include/exclude -options.firstlevel.tmap_montages = true; -options.firstlevel.anat_func_roi = true; - -% Settings for first level analysis: task-motor -options.firstlevel.motor.run1.sess_params.timing_units = 'secs'; -options.firstlevel.motor.run1.sess_params.timing_RT = 2; -options.firstlevel.motor.run1.sess_params.cond_names = {'FingerTapping'}; -options.firstlevel.motor.run2.sess_params.timing_units = 'secs'; -options.firstlevel.motor.run2.sess_params.timing_RT = 2; -options.firstlevel.motor.run2.sess_params.cond_names = {'MentalFingerTapping'}; - -% Settings for first level analysis: task-emotion -options.firstlevel.emotion.run1.sess_params.timing_units = 'secs'; -options.firstlevel.emotion.run1.sess_params.timing_RT = 2; -options.firstlevel.emotion.run1.sess_params.cond_names = {'Faces', 'Shapes'}; -options.firstlevel.emotion.run2.sess_params.timing_units = 'secs'; -options.firstlevel.emotion.run2.sess_params.timing_RT = 2; -options.firstlevel.emotion.run2.sess_params.cond_names = {'MentalEmotion'}; - -% Settings for plotting task conditions -onset = [11; 31; 51; 71; 91; 111; 131; 151; 171; 191]; -duration = [10; 10; 10; 10; 10; 10; 10; 10; 10; 10]; -options.firstlevel.motor.run1.plot_params.cond_onset = onset; -options.firstlevel.motor.run1.plot_params.cond_duration = duration; -options.firstlevel.motor.run2.plot_params.cond_onset = onset; -options.firstlevel.motor.run2.plot_params.cond_duration = duration; -options.firstlevel.emotion.run2.plot_params.cond_onset = onset; -options.firstlevel.emotion.run2.plot_params.cond_duration = duration; -onset = [12; 32; 52; 72; 92; 112; 132; 152; 172; 192]; -duration = [9; 9; 9; 9; 9; 9; 9; 9; 9; 9]; -options.firstlevel.emotion.run1.plot_params.cond_onset = onset; -options.firstlevel.emotion.run1.plot_params.cond_duration = duration; - -% Settings for first level analysis: glm regressors to include -options.firstlevel.glm_regressors.trans_rot = true; -options.firstlevel.glm_regressors.trans_rot_derivative1 = true; -options.firstlevel.glm_regressors.trans_rot_power2 = false; 
-options.firstlevel.glm_regressors.trans_rot_derivative1_power2 = false; -options.firstlevel.glm_regressors.framewise_displacement_censor02 = false; -options.firstlevel.glm_regressors.framewise_displacement_censor05 = false; -options.firstlevel.glm_regressors.dvars_censor = false; % not yet implemented -options.firstlevel.glm_regressors.std_dvars_censor = false; % not yet implemented -options.firstlevel.glm_regressors.grey_matter = false; -options.firstlevel.glm_regressors.white_matter = false; -options.firstlevel.glm_regressors.csf = true; -options.firstlevel.glm_regressors.global_signal = false; -% Order of included retroicor regressors; if 0 ==> exclude -options.firstlevel.glm_regressors.retroicor_c = 2; % cardiac, max 6 -options.firstlevel.glm_regressors.retroicor_r = 2; % respiratory, max 8 -options.firstlevel.glm_regressors.retroicor_cxr = 0; % interaction, max 4 -options.firstlevel.glm_regressors.hrv = false; -options.firstlevel.glm_regressors.rvt = false; - - -% Settings for first level analysis: task-motor -options.firstlevel.motor.run1.contrast_params.consess{1}.tcon.name = 'FingerTapping'; -options.firstlevel.motor.run1.contrast_params.consess{1}.tcon.weights = [1]; -options.firstlevel.motor.run1.contrast_params.consess{1}.tcon.sessrep = 'none'; -options.firstlevel.motor.run2.contrast_params.consess{1}.tcon.name = 'MentalFingerTapping'; -options.firstlevel.motor.run2.contrast_params.consess{1}.tcon.weights = [1]; -options.firstlevel.motor.run2.contrast_params.consess{1}.tcon.sessrep = 'none'; - -% Settings for first level analysis: task-emotion -options.firstlevel.emotion.run1.contrast_params.consess{1}.tcon.name = 'Faces'; -options.firstlevel.emotion.run1.contrast_params.consess{1}.tcon.weights = [1 0]; -options.firstlevel.emotion.run1.contrast_params.consess{1}.tcon.sessrep = 'none'; -options.firstlevel.emotion.run1.contrast_params.consess{2}.tcon.name = 'Shapes'; -options.firstlevel.emotion.run1.contrast_params.consess{2}.tcon.weights = [0 1]; -options.firstlevel.emotion.run1.contrast_params.consess{2}.tcon.sessrep = 'none'; -options.firstlevel.emotion.run1.contrast_params.consess{3}.tcon.name = 'Faces>Shapes'; -options.firstlevel.emotion.run1.contrast_params.consess{3}.tcon.weights = [1 -1]; -options.firstlevel.emotion.run1.contrast_params.consess{3}.tcon.sessrep = 'none'; -options.firstlevel.emotion.run2.contrast_params.consess{1}.tcon.name = 'MentalEmotion'; -options.firstlevel.emotion.run2.contrast_params.consess{1}.tcon.weights = [1]; -options.firstlevel.emotion.run2.contrast_params.consess{1}.tcon.sessrep = 'none'; - -%matlabbatch{1}.spm.stats.con.consess{2}.tcon.name = 'Patients > Control'; -%matlabbatch{1}.spm.stats.con.consess{2}.tcon.convec = [-1 1]; -%matlabbatch{1}.spm.stats.con.consess{2}.tcon.sessrep = 'none'; \ No newline at end of file diff --git a/matlab/fmrwhy_settings_rtmefMRI.m b/matlab/fmrwhy_settings_rtmefMRI.m index ab308a8..d87b2f5 100644 --- a/matlab/fmrwhy_settings_rtmefMRI.m +++ b/matlab/fmrwhy_settings_rtmefMRI.m @@ -1,6 +1,4 @@ -% fmrwhy_settings_template -% Template settings file for the fmrwhy_bids_workflowQC pipeline - +% Copied and altered from fmrwhy_settings_template, the template settings file for the fmrwhy_bids_workflowQC pipeline % ---------- % Section 01 diff --git a/matlab/rtme_reproduce_methodsFigures.m b/matlab/rtme_reproduce_methodsFigures.m index 08731a0..d131912 100644 --- a/matlab/rtme_reproduce_methodsFigures.m +++ b/matlab/rtme_reproduce_methodsFigures.m @@ -147,358 +147,4 @@ tsnr_vol_img = 
fmrwhy_util_maskImage(double(p.nii.img(:,:,:,volume_nr)), mask_img_oriented); tsnr_vol_montage = fmrwhy_util_createStatsOverlayMontage(tsnr_vol_img(:,:,slices), [], [], 9, 1, '', 'hot', 'off', 'maxwidth', [0 250], [], [], true, tsnr_vol_png); end -end - - - - - - - - - - - - -% % Load fMRwhy defaults -% options = fmrwhy_defaults; - -% % Main input: BIDS root folder -% %bids_dir = '/Users/jheunis/Desktop/sample-data/NEUFEPME_data_BIDS'; -% bids_dir = '/Users/jheunis/Desktop/NEUFEPME_data_BIDS'; -% bids_dir = '/Volumes/TSM/NEUFEPME_data_BIDS'; - -% % Setup fmrwhy BIDS-derivatuve directories on workflow level -% options = fmrwhy_defaults_setupDerivDirs(bids_dir, options); -% options.me_dir = fullfile(options.deriv_dir, 'fmrwhy-multiecho'); - -% % Grab parameters from workflow settings file -% options = fmrwhy_settings_preprocQC(bids_dir, options); - -% % Loop through subjects, sessions, tasks, runs, etc - -% subs = {'002', '003', '004', '005', '006', '007', '010', '011', '012', '013', '015', '016', '017', '018', '019', '020', '021', '022', '023', '024', '025', '026', '027', '029', '030', '031', '032'}; -% %subs = {'001'}; -% %sub = '002'; -% ses = ''; - -% % Plotting settings -% rgb_ongray = [255, 115, 236]; -% rgb_onhot = [148, 239, 255]; -% rgb_onparula = [255, 115, 236]; - - -% for s = 1:numel(subs) -% tic; -% sub = subs{s}; - -% options.sub_dir_me = fullfile(options.me_dir, ['sub-' sub]); -% options.func_dir_me = fullfile(options.sub_dir_me, 'func'); - -% % Setup fmrwhy bids directories on subject level (this copies data from bids_dir) -% options = fmrwhy_defaults_setupSubDirs(bids_dir, sub, options); - -% % Update workflow params with subject anatomical derivative filenames -% options = fmrwhy_defaults_subAnat(bids_dir, sub, options); - -% % ------- -% % STEP 2: Grab template data -% % ------- -% task = 'rest'; -% run = '1'; -% % Mask details -% masks = fmrwhy_util_loadMasks(bids_dir, sub); -% mask_fn = masks.brain_mask_fn; -% mask_img = masks.brain_mask_3D; -% I_mask = masks.brain_mask_I; -% masks_oriented = fmrwhy_util_loadOrientMasks(bids_dir, sub); -% mask_img_oriented = masks_oriented.brain_mask_3D; -% I_mask_oriented = masks_oriented.brain_mask_I; -% % Functional volume template - % template_fn = fullfile(options.sub_dir_preproc, 'func', ['sub-' sub '_task-' options.template_task '_run-' options.template_run '_space-individual_bold.nii']); - % options.template_fn = template_fn; -% % ROIs -% roi_fns = {}; -% roi_fns{1} = fullfile(options.anat_dir_preproc, ['sub-' sub '_space-individual_desc-rleftMotor_roi.nii']); -% roi_fns{2} = fullfile(options.anat_dir_preproc, ['sub-' sub '_space-individual_desc-rbilateralAmygdala_roi.nii']); -% compare_roi_txt = {'left motor cortex', 'bilateral amygdala'}; -% roi_desc_txt = {'lmotor', 'bamygdala'}; -% % Grab+load ROI image data; get ROI indices; combine ROI image data into a single overlay image -% roi_img = {}; -% I_roi = {}; -% overlay_img = zeros(size(mask_img_oriented)); -% for i = 1:numel(roi_fns) -% [p, frm, rg, dim] = fmrwhy_util_readOrientNifti(roi_fns{i}); -% roi_img{i} = fmrwhy_util_createBinaryImg(p.nii.img, 0.1); -% I_roi{i} = find(roi_img{i}(:)); -% overlay_img = overlay_img | roi_img{i}; -% end -% % Transform to MNI -% transformation_fn = options.indiv_to_mni_fn; - -% % ------- -% % STEP 3: Visualise t2star and s0 maps -% % ------- -% % t2star -% t2star_fn = fullfile(options.func_dir_me, ['sub-' sub '_task-' task '_run-' run '_desc-MEparams_t2star.nii']); -% t2star_png = fullfile(options.func_dir_me, ['sub-' sub '_task-' task 
'_run-' run '_desc-MEparams_t2star.png']); -% [p1, frm1, rg1, dim1] = fmrwhy_util_readOrientNifti(t2star_fn); -% t2star_img = fmrwhy_util_maskImage(p1.nii.img, mask_img_oriented); -% t2star_img(t2star_img>=500) = 0; % TODO, is this fine to do? Also, isn't this already done when estimating t2star map the first time? -% t2star_montage = fmrwhy_util_createStatsOverlayMontage(t2star_img, [], overlay_img, 9, 1, '', 'hot', 'off', 'max', [0 120], [], rgb_onhot, true, t2star_png); -% % S0 -% s0_fn = fullfile(options.func_dir_me, ['sub-' sub '_task-' task '_run-' run '_desc-MEparams_s0.nii']); -% s0_png = fullfile(options.func_dir_me, ['sub-' sub '_task-' task '_run-' run '_desc-MEparams_s0.png']); -% [p2, frm2, rg2, dim2] = fmrwhy_util_readOrientNifti(s0_fn); -% s0_img = fmrwhy_util_maskImage(p2.nii.img, mask_img_oriented); -% s0_montage = fmrwhy_util_createStatsOverlayMontage(s0_img, [], overlay_img, 9, 1, '', 'parula', 'off', 'max', [0 7000], [], rgb_onparula, true, s0_png); - -% % ------- -% % STEP 4: For all tasks and runs, generate ME-related images -% % ------- -% tasks = {'rest', 'motor', 'emotion'}; -% runs = {'1', '2'}; -% combined_str = {'Echo 2', 'T2star', 'tSNR', 'TE', 'T2starFIT', 't2sfit'}; -% roi_text = {'', 'left motor cortex', 'bilateral amygdala'}; -% task_names = {'rest', 'Right finger tapping', 'Hariri task'} -% toTransform_fns = {}; -% saveAs_transform_fns = {}; -% count = 0; - -% for t = 1:numel(tasks) -% task = tasks{t}; -% for r = 1:numel(runs) -% run = runs{r}; - -% % For template task and run -% if strcmp(task, 'rest') == 1 && strcmp(run, '1') == 1 -% disp('------------') -% disp(['Task: ' task '; Run: ' run]) -% disp('------------') -% % Grab filenames for bold -% bold_fns = {}; -% bold_fns{1} = fullfile(options.func_dir_preproc, ['sub-' sub '_task-' task '_run-' run '_echo-1_desc-rapreproc_bold.nii']); -% bold_fns{2} = fullfile(options.func_dir_preproc, ['sub-' sub '_task-' task '_run-' run '_echo-2_desc-rapreproc_bold.nii']); -% bold_fns{3} = fullfile(options.func_dir_preproc, ['sub-' sub '_task-' task '_run-' run '_echo-3_desc-rapreproc_bold.nii']); -% % use arbitrary volume number -% volume_nr = 5; -% % Create image outputs for template run multi-echo bold data -% bold_pngs = {}; -% for i = 1:numel(bold_fns) -% [dir_name, file_name, ext] = fileparts(bold_fns{i}); -% bold_pngs{i} = fullfile(options.func_dir_me, [file_name '.png']); -% if ~exist(bold_pngs{i}, 'file') -% [p, frm, rg, dim] = fmrwhy_util_readOrientNifti(bold_fns{i}); -% bold_img = fmrwhy_util_maskImage(double(p.nii.img(:,:,:,volume_nr)), mask_img_oriented); -% bold_montage = fmrwhy_util_createStatsOverlayMontage(bold_img, [], overlay_img, 9, 1, '', 'gray', 'off', 'max', [], [], rgb_ongray, false, bold_pngs{i}); -% end -% end -% % Create image outputs for template run multi-echo tsnr data -% tsnr_fns = {}; -% tsnr_pngs = {}; -% for i = 1:numel(bold_fns) -% [dir_name, file_name, ext] = fileparts(bold_fns{i}); -% tsnr_fns{i} = fullfile(options.func_dir_me, [file_name ext]); -% tsnr_fns{i} = strrep(tsnr_fns{i}, 'bold.nii', 'tsnr.nii'); -% tsnr_pngs{i} = strrep(tsnr_fns{i}, '.nii', '.png'); -% if ~exist(tsnr_pngs{i}, 'file') -% disp(['File exists: ' tsnr_pngs{i}]) -% [p, frm, rg, dim] = fmrwhy_util_readOrientNifti(tsnr_fns{i}); -% tsnr_img = fmrwhy_util_maskImage(double(p.nii.img), mask_img_oriented); -% tsnr_montage = fmrwhy_util_createStatsOverlayMontage(tsnr_img, [], overlay_img, 9, 1, '', 'hot', 'off', 'max', [0 250], [], rgb_onhot, true, tsnr_pngs{i}); -% end -% end - -% % setup filenames for eventual 
transformations to MNI space -% toTransform_fns = [toTransform_fns, tsnr_fns]; -% save_rafunctional1_fn = fullfile(options.func_dir_me, ['sub-' sub '_task-rest_run-1_echo-1_space-MNI152_desc-rapreproc_tsnr.nii']); -% save_rafunctional2_fn = fullfile(options.func_dir_me, ['sub-' sub '_task-rest_run-1_echo-2_space-MNI152_desc-rapreproc_tsnr.nii']); -% save_rafunctional3_fn = fullfile(options.func_dir_me, ['sub-' sub '_task-rest_run-1_echo-3_space-MNI152_desc-rapreproc_tsnr.nii']); -% run1_saveAs_fns = {save_rafunctional1_fn, save_rafunctional2_fn, save_rafunctional3_fn}; -% saveAs_transform_fns = [saveAs_transform_fns, run1_saveAs_fns]; - -% % Nothing to do for combined timeseries of task rest run 1 (they dont exist) -% disp('------------') -% disp(['Skipping Combined files for: Task = ' task '; Run = ' run]) -% disp('------------') -% continue; -% end - -% disp('------------') -% disp(['Task: ' task '; Run: ' run]) -% disp('------------') - -% % ------- -% % STEP 4.1: Single volume outputs -% % ------- -% % Output: -% % - Montages of single volumes of: echo 1, 2, 3 -% % - Montages of single combined volumes of combined timeseries using methods: 1, 2, 3, 4 -% % ------- -% % Grab filenames for bold and combined -% bold_fns = {}; -% bold_fns{1} = fullfile(options.func_dir_preproc, ['sub-' sub '_task-' task '_run-' run '_echo-1_desc-rapreproc_bold.nii']); -% bold_fns{2} = fullfile(options.func_dir_preproc, ['sub-' sub '_task-' task '_run-' run '_echo-2_desc-rapreproc_bold.nii']); -% bold_fns{3} = fullfile(options.func_dir_preproc, ['sub-' sub '_task-' task '_run-' run '_echo-3_desc-rapreproc_bold.nii']); -% bold_combined_fns{1} = fullfile(options.func_dir_me, ['sub-' sub '_task-' task '_run-' run '_desc-combinedMEt2star_bold.nii']); -% bold_combined_fns{2} = fullfile(options.func_dir_me, ['sub-' sub '_task-' task '_run-' run '_desc-combinedMEtsnr_bold.nii']); -% bold_combined_fns{3} = fullfile(options.func_dir_me, ['sub-' sub '_task-' task '_run-' run '_desc-combinedMEte_bold.nii']); -% bold_combined_fns{4} = fullfile(options.func_dir_me, ['sub-' sub '_task-' task '_run-' run '_desc-combinedMEt2starFIT_bold.nii']); -% fit_fn = fullfile(options.func_dir_me, ['sub-' sub '_task-' task '_run-' run '_desc-t2starFIT_bold.nii']); -% main_fns = [bold_fns{2}, bold_combined_fns, fit_fn]; -% % use arbitrary volume number -% volume_nr = 5; -% % Create image outputs for original multi-echo data -% bold_pngs = {}; -% for i = 1:numel(bold_fns) -% [dir_name, file_name, ext] = fileparts(bold_fns{i}); -% bold_pngs{i} = fullfile(options.func_dir_me, [file_name '.png']); -% if ~exist(bold_pngs{i}, 'file') -% [p, frm, rg, dim] = fmrwhy_util_readOrientNifti(bold_fns{i}); -% bold_img = fmrwhy_util_maskImage(double(p.nii.img(:,:,:,volume_nr)), mask_img_oriented); -% bold_montage = fmrwhy_util_createStatsOverlayMontage(bold_img, [], overlay_img, 9, 1, '', 'gray', 'off', 'max', [], [], rgb_ongray, false, bold_pngs{i}); -% end -% end -% % Create image outputs for combined multi-echo data -% combined_pngs = {}; -% for i = 1:numel(bold_combined_fns) -% combined_pngs{i} = strrep(bold_combined_fns{i}, '.nii', '.png'); -% if ~exist(combined_pngs{i}, 'file') -% [p, frm, rg, dim] = fmrwhy_util_readOrientNifti(bold_combined_fns{i}); -% combined_img = fmrwhy_util_maskImage(double(p.nii.img(:,:,:,volume_nr)), mask_img_oriented); -% combined_montage = fmrwhy_util_createStatsOverlayMontage(combined_img, [], overlay_img, 9, 1, '', 'gray', 'off', 'max', [], [], rgb_ongray, false, combined_pngs{i}); -% end -% end -% % Create image 
outputs for combined multi-echo data -% fit_png = strrep(fit_fn, '.nii', '.png'); -% if ~exist(fit_png, 'file') -% [p, frm, rg, dim] = fmrwhy_util_readOrientNifti(fit_fn); -% fit_img = fmrwhy_util_maskImage(double(p.nii.img(:,:,:,volume_nr)), mask_img_oriented); -% fit_montage = fmrwhy_util_createStatsOverlayMontage(fit_img, [], overlay_img, 9, 1, '', 'gray', 'off', 'max', [], [], rgb_ongray, false, fit_png); -% end - -% % ------- -% % STEP 4.2: tSNR outputs -% % ------- -% % Output: -% % - Montage of tSNR: middle echo and combined 1, 2, 3 -% % - Montage of percentage signal change in tsnr: combined 1, 2, 3 vs middle echo -% % - Raincloud plots of tsnr and percentage signal change in tsnr (tsnr-brain, tsnr-roi, psc,-roi) -% % ------- -% % Grab filenames for tsnr files -% tsnr_fns = {}; -% tsnr_fns{1} = fullfile(options.func_dir_me, ['sub-' sub '_task-' task '_run-' run '_echo-2_desc-rapreproc_tsnr.nii']); -% tsnr_fns{2} = fullfile(options.func_dir_me, ['sub-' sub '_task-' task '_run-' run '_desc-combinedMEt2star_tsnr.nii']); -% tsnr_fns{3} = fullfile(options.func_dir_me, ['sub-' sub '_task-' task '_run-' run '_desc-combinedMEtsnr_tsnr.nii']); -% tsnr_fns{4} = fullfile(options.func_dir_me, ['sub-' sub '_task-' task '_run-' run '_desc-combinedMEte_tsnr.nii']); -% tsnr_fns{5} = fullfile(options.func_dir_me, ['sub-' sub '_task-' task '_run-' run '_desc-combinedMEt2starFIT_tsnr.nii']); -% tsnr_fns{6} = fullfile(options.func_dir_me, ['sub-' sub '_task-' task '_run-' run '_desc-t2starFIT_tsnr.nii']); -% % Create filenames for tsnr and percdiff pngs; and add filenames for tranforms to cell array -% tsnr_pngs = {}; -% percdiff_pngs = {}; -% distr_png = fullfile(options.func_dir_me, ['sub-' sub '_task-' task '_run-' run '_desc-tsnrPercdiffRainclouds.png']); -% for i = 1:numel(tsnr_fns) -% tsnr_pngs{i} = strrep(tsnr_fns{i}, '.nii', '.png'); -% if i > 1 -% percdiff_pngs{i-1} = strrep(tsnr_fns{i}, '_tsnr.nii', '_percdiff.png'); -% end -% % transform filenames -% toTransform_fns = [toTransform_fns, {tsnr_fns{i}}]; -% saveAs_transform_fn = strrep(tsnr_fns{i}, '_desc-', '_space-MNI152_desc-'); -% saveAs_transform_fns = [saveAs_transform_fns, {saveAs_transform_fn}]; -% end -% % Call function to calculate and output all comparisons, montages, raincloud plots, etc -% fmrwhy_util_compareTSNRrt(tsnr_fns, mask_fn, roi_fns, compare_roi_txt , tsnr_pngs, percdiff_pngs, distr_png); - -% % TODO: FIX THE CODE IN fmrwhy_util_compareTSNRrt WHERE THERE ARE TESTS TO SEE IF PNG EXIST BEFORE SAVING IT - -% % ------- -% % STEP 4.3: ROI Timeseries plot outputs -% % ------- -% smooth_fns = {}; -% for i = 1:numel(main_fns) -% smooth_fns{i} = strrep(main_fns{i}, 'desc-', 'desc-s'); -% end -% if ~strcmp(task, 'rest') - -% if strcmp(task, 'motor') -% roi_fn = roi_fns{1}; -% else -% roi_fn = roi_fns{2}; -% end - -% for p = 1:numel(smooth_fns) -% functional_fn = smooth_fns{p}; -% tsnr_fn = tsnr_fns{p}; -% if p == 1 -% [dir_name, file_name, ext] = fileparts(smooth_fns{p}); -% tmp_fn = fullfile(options.func_dir_me, [file_name ext]); -% saveAs_fn = strrep(tmp_fn, '_bold.nii', '_tsplot.png'); -% else -% saveAs_fn = strrep(smooth_fns{p}, '_bold.nii', '_tsplot.png'); -% end -% task_info.TR = options.firstlevel.(task).(['run' run]).sess_params.timing_RT; -% task_info.onsets = options.firstlevel.(task).(['run' run]).plot_params.cond_onset; -% task_info.durations = options.firstlevel.(task).(['run' run]).plot_params.cond_duration; -% task_info.precision = 1; -% % TODO: FIX THE CODE IN fmrwhy_util_thePlotROI WHERE THERE ARE TESTS TO SEE IF 
PNG EXIST BEFORE SAVING IT -% % TODO: ALSO FIX CODE BELOW -% % if ~exist(saveAs_fn, 'file') -% trace_info = []; -% fmrwhy_util_thePlotROI(functional_fn, mask_fn, roi_fn, task_info, trace_info, saveAs_fn) -% % else -% % disp(['File already exists: ' saveAs_fn]) -% % end -% end -% end -% end -% end - -% % ------- -% % STEP 5: Warping -% % ------- -% % Output: -% % - tSNR images warped to MNI152 space -% % ------- -% fmrwhy_batch_normaliseWrite(toTransform_fns, transformation_fn, template_fn, saveAs_transform_fns) - -% % ------- -% % STEP 6: Delineate tSNR values per tissue type and ROI -% % ------- -% % - TSVs with tsnr values extracted per tissue mask (GM, WM, CSF, whole brain) and ROI -% for i = 1:numel(toTransform_fns) -% [p_tsnr, frm, rg, dim] = fmrwhy_util_readOrientNifti(toTransform_fns{i}); -% tsnr_img = p_tsnr.nii.img(:); -% for j = 1:4 -% vals = tsnr_img(masks_oriented.([masks_oriented.field_names{j} '_mask_I'])); -% tsnr_output_fn = strrep(toTransform_fns{i}, '_tsnr.nii', ['_' masks_oriented.field_names{j} 'tsnr.tsv']); -% temp_txt_fn = strrep(tsnr_output_fn, '.tsv', '_temp.txt'); -% data_table = array2table(vals,'VariableNames', {'tsnr'}); -% writetable(data_table, temp_txt_fn, 'Delimiter','\t'); -% [status, msg, msgID] = movefile(temp_txt_fn, tsnr_output_fn); -% end - -% for k = 1:numel(roi_fns) -% [p, frm, rg, dim] = fmrwhy_util_readOrientNifti(roi_fns{k}); -% roi_img = fmrwhy_util_createBinaryImg(p.nii.img, 0.1); -% roi_img_2D = roi_img(:); -% I_roi = find(roi_img_2D); -% masked_tsnr_img = fmrwhy_util_maskImage(p_tsnr.nii.img, roi_img); - -% for j = 1:4 -% overlap = masks_oriented.([masks_oriented.field_names{j} '_mask_2D']) & roi_img_2D; -% vals = tsnr_img(find(overlap)); -% tsnr_output_fn = strrep(toTransform_fns{i}, '_tsnr.nii', ['_' roi_desc_txt{k} masks_oriented.field_names{j} 'tsnr.tsv']); -% temp_txt_fn = strrep(tsnr_output_fn, '.tsv', '_temp.txt'); -% data_table = array2table(vals,'VariableNames', {'tsnr'}); -% writetable(data_table, temp_txt_fn, 'Delimiter','\t'); -% [status, msg, msgID] = movefile(temp_txt_fn, tsnr_output_fn); -% end -% end -% end -% toc; -% end - - - - +end \ No newline at end of file diff --git a/matlab/rtme_workflow_matlab.m b/rt-me-fmri_reproduce_matlabProcessing.m similarity index 75% rename from matlab/rtme_workflow_matlab.m rename to rt-me-fmri_reproduce_matlabProcessing.m index 59a7da2..3a5b0da 100644 --- a/matlab/rtme_workflow_matlab.m +++ b/rt-me-fmri_reproduce_matlabProcessing.m @@ -1,10 +1,11 @@ -% rtme_workflow_matlab +%% rt-me-fmri_reproduce_matlabProcessing.m -% The main script detailing the order in which all data were processed, and providing the code and instructions with which to do so. - -% Not meant to be executed fully as a standalone script; rather each step should be done individually, manually, and chronologically - -% Preceded by data preparation scripts (see relevant jupyter notebook) +% This is the main script detailing the order in which all data were processed, +% and providing the code and instructions with which to do so. +% This script is not meant to be executed fully as a standalone script, +% rather each step should be done individually, manually, and chronologically. 
+% This script is preceded by data preparation scripts (see relevant jupyter notebook) +% This script requires fMRwhy and its dependencies to be installed % ------- @@ -13,8 +14,9 @@ % ------- % ------- -% see fmrwhy_settings_RTME in this repo -settings_fn = '/Users/jheunis/Documents/PYTHON/rtme-fMRI/matlab/fmrwhy_settings_RTME.m'; +% A settings file is required with prefilled details pertaining to the dataset and analysis +% see fmrwhy_settings_rtmefMRI.m in this repo +settings_fn = '<...>/matlab/fmrwhy_settings_rtmefMRI.m'; % ------------------------------ @@ -23,6 +25,7 @@ % ------------------------------ % ------------------------------ +% Run minimal preprocessing steps and data quality processing % see fmrwhy_bids_workflowQC in fMRwhy fmrwhy_bids_workflowQC(settings_fn); @@ -33,7 +36,6 @@ % ------------------------------ % ------------------------------ - % ---------------------------------- % STEP 1 - fmrwhy_workflow_offlineME % ---------------------------------- @@ -47,7 +49,7 @@ % - Prepare template data and run FIT multi-echo combination % 1.5) Calculate tSNR for each combined timeseries % 1.6) Smooth each combined timeseries, for later analysis purposes -fmrwhy_workflow_offlineME +fmrwhy_workflow_offlineME; % ---------------------------------------- % STEP 2 - fmrwhy_workflow_offlineMEreport @@ -60,8 +62,8 @@ % - Carpet plots for ROIs (fmrwhy_util_thePlotROI) % 2.3) Normalise all tSNR images to MNI: fmrwhy_batch_normaliseWrite % 2.4) Delineate tSNR values per tissue type and ROI ==> output TSV files -fmrwhy_workflow_offlineMEreport -rtme_script_generateMEtSNRtsvFiles % (need to check if this is necessary, perhaps does stuff thats already incorporated into fmrwhy_workflow_offlineMEreport) +fmrwhy_workflow_offlineMEreport; +rtme_script_generateMEtSNRtsvFiles; % (need to check if this is necessary, perhaps does stuff thats already incorporated into fmrwhy_workflow_offlineMEreport) % --------------------------------- % STEP 3 - Subject level statistical analysis @@ -97,27 +99,13 @@ fmrwhy_script_neufepDetermineROImetrics; fmrwhy_script_neufepOfflineTCNR; - - % ------------------------------------------- -% STEP 5 - real-time Processing steps +% STEP 5 - Real-time Processing steps % ------------------------------------------- fmrwhy_script_neufepRTME; % which runs: 1, 2, 3 fmrwhy_script_rtme_initShort; %1 fmrwhy_script_rtmeShort; %2 fmrwhy_script_rtme_postprocessShort; %3 (saves many tsv files) fmrwhy_script_neufepRealtimeTCNR; % saves more tsv files -%fmrwhy_script_MEgenerateRTCNRtsvFiles % in old folder, so probably not needed, but double check to make sure. -fmrwhy_script_copytcnrFiles % check if needed (perhaps replace with jupyter notebook content) - - - -% Generates figures for methods article: these are duplicates (or close to it); decide which one is the correct one. -fmrwhy_workflow_rtmeFigures; -rtme_reproduce_methodsFigures - -% --------------------------------- -% STEP X - Group level statistical analysis -% --------------------------------- - - +fmrwhy_script_copytcnrFiles; % check if needed (perhaps replace with jupyter notebook content) +rtme_reproduce_methodsFigures; % Generates figures for methods article \ No newline at end of file diff --git a/rt-me-fmri_reproduce_methodsFigures.ipynb.invalid b/rt-me-fmri_reproduce_methodsFigures.ipynb.invalid deleted file mode 100644 index e69de29..0000000