
Add parameter space visualization & data to documentation #552

Closed
3 tasks
handwerkerd opened this issue Mar 23, 2020 · 20 comments
Labels
  • documentation: issues related to improving documentation for the project
  • enhancement: issues describing possible enhancements to the project
  • good first issue: issues that we think are good for new contributors. Equivalent to "very low" effort.

Comments

@handwerkerd
Member

handwerkerd commented Mar 23, 2020

Summary

A very frequent question from people who are new to multi-echo fMRI is what compromises one needs to make during acquisition to get multi-echo data. I collected some information from a single scanner & pulse sequence to help people understand these compromises. The goal is to add text and visualizations to the documentation that will help explain these data.

Additional Detail

  • The data are currently on this spreadsheet
  • The included figure is a draft visualization of these data for 3mm^3 voxels. The left column of plots is if you want to cover just cortex (95mm of slices), and the right column will definitely include the full brain, including cerebellum (125mm of slices). Each row has a different amount of in-slice acceleration, and each line is a different multi-slice acceleration factor. This shows how many realistic 3-echo sequences can be acquired in less than 2s, and that a 1s TR is not unreasonable.
    [Figure: MultiEchoParamPsacePlots_fixedSlices_draftfigure — draft parameter space plots for fixed slice coverage]

Next Steps

  • Add a better version of the above figure for 3mm^3 and 2.5mm^3 data to the documentation, with accompanying text. The figures can either be generated from the spreadsheet or saved as static images. Either way, the code for figure generation should be included
  • Use these data to make a rough parameter calculator so that someone can directly see the relationship between in-slice acquisition grid, acceleration, slice coverage, and TR.
  • Collect more information from different pulse sequences and scanners, such as 1mm^3 data on a 7T MRI.

@handwerkerd is planning to eventually do this, but it could also be a good first issue for a new contributor.

@handwerkerd added the enhancement, good first issue, and documentation labels on Mar 23, 2020
@emdupre
Member

emdupre commented May 7, 2020

@jsheunis has a fantastic Dash app for real-time fMRI that could be used as a framework for summarizing the parameter space from publications that @dowdlelt has been collecting.

I'd love to see this move forward, personally. It'd be great to get feedback from others here, both on the direction as well as specific next steps!

@CesarCaballeroGaudes
Contributor

This would be a fantastic idea!! First we will have to decide the fields for categorization, and ranges for the parameters that are continuous, for instance TR <= 1 sec, 1 sec < TR <= 2 sec, TR > 2 sec, and similarly for in-plane resolution or slice thickness. But @dowdlelt is the best person to speak up!!
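The binning described above can be done after data entry with pandas. A minimal sketch, assuming a hypothetical study table with a `TR` column in seconds (the column and study names are illustrative, not the real spreadsheet's schema):

```python
import pandas as pd

# Hypothetical study table; column names are illustrative only.
studies = pd.DataFrame({"study": ["A", "B", "C", "D"],
                        "TR": [0.8, 1.5, 2.0, 3.0]})

# Bin continuous TR values into the categories suggested above:
# TR <= 1 s, 1 s < TR <= 2 s, TR > 2 s (pd.cut bins are right-inclusive).
studies["TR_category"] = pd.cut(
    studies["TR"],
    bins=[0, 1, 2, float("inf")],
    labels=["TR <= 1 s", "1 s < TR <= 2 s", "TR > 2 s"],
)
print(studies["TR_category"].astype(str).tolist())
```

Because the categories are computed from the raw values, the bin edges can be changed later without touching the underlying spreadsheet.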

@dowdlelt
Collaborator

dowdlelt commented May 7, 2020

I think this is a wonderful idea. Combining what has been done (necessarily a non-systematic sampling of parameter space) with @handwerkerd's data on what is possible gives a more complete answer to the question of "what can I do if I use multi-echo?"

I assume the categorization step could be post data entry, and flexible. TRs have very rarely been sub-second (or even sub-2-second), but I am behind on data entry, and newer studies would be using more advanced methods. Total coverage (estimated as number of slices × slice thickness) would likely be a great basis for categories.

One big important step is deciding what other fields are wanted. I've added more and more since I started the process, but input from others is desirable. I immediately see a need to add phase-encode direction, for example, and slice thickness (I just have x, y, z dimensions). Having these fields up front prevents repeated reading and rereading of the papers.

@jsheunis
Contributor

jsheunis commented May 8, 2020

Hi everyone.

I started playing with the parameter space data to get a better understanding. I reproduced @handwerkerd's figures (slightly different) in a notebook that you can access on my fork here (in the examples folder). Here's a link to a Binder environment. And here's the figure from the notebook:

[Figure: me_parameter_space_3x3x3]

I'll update this notebook in the next while to add the 2.5mm^3 data, and also to include some interactive plotly graphs similar to the ones I used in the Dash app. That is, if you think this would be a useful next step?

Once we have a good understanding of what would be nice to plot, I could start with a version of a Dash app for the data above as well as @dowdlelt's current data from ME articles. Then that can be updated as new parameters or categories are added.

@handwerkerd
Member Author

Here's another visualization I put in my OHBM 2020 symposium talk. The full talk is at: https://doi.org/10.6084/m9.figshare.12649250.v1
[Figure: parameter space visualization from the OHBM 2020 talk]

@jsheunis
Contributor

Hi everyone. I finally got round to adding more interactive plots. My goal was to first use jupyter-dash (before creating a full Dash app) to prototype a few options for illustrating/browsing the parameter space data. I did this for:

  1. @handwerkerd's parameter space data linked above, for which I previously created some matplotlib figures. Here's a short gif of the first attempt (sorry for the bad quality):

[GIF: interactive_dash_plot]

  2. @dowdlelt's curated study data, using a similar design as I did for my rtfMRI-methods site. Here are some gifs:

[GIFs: three screen recordings of the study-data dashboard]

You can have a look at the respective notebooks here:

  1. https://github.com/jsheunis/tedana/blob/viz/parameter-space/examples/plot_parameter_space.ipynb
  2. https://github.com/jsheunis/tedana/blob/viz/parameter-space/examples/jupyter_dash_study_parameter_space_viz.ipynb

Or you can check them out in binder.

Note that I didn't do much to clean up the study parameter space data. I think that will need some work, perhaps to discretize some values/parameters, in order to improve the visualizations. It would be great to get some feedback on these new additions, and to hear if you have suggested plots or features to add, before we build this into a Dash app.

Also tagging @emdupre @tsalo @CesarCaballeroGaudes.

@dowdlelt
Collaborator

Little heart is insufficient, I think this is really incredible and I've been slack on revisiting it. I'm going to try and go through my spreadsheet - first: to figure out if I have all of the information columns that are useful (i.e. maybe I should add voxel volume?) and then second: to decide on what values (TR in s or ms? just has to be consistent) and what to do about missing/incomplete data. Once I have an idea of the columns it should have, I'll put them here for feedback. Then, the onerous task of filling in all of the studies. But one thing at a time.

@jsheunis
Contributor

Thanks for the feedback. I was/am a bit worried that this idea might be a bit "gadgety", but if there's real interest in taking this further, that's great.

When you look at the columns to include/exclude, it might be useful as a start to look at the ones that I dropped or included in this notebook.

For missing data in my previous methods review, I usually just gave it a value of DNR ("did not report"), which will then also show up on the bar plots.
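The DNR convention described above is easy to apply with pandas. A minimal sketch, assuming a hypothetical curated-study table (the `PhaseEncodeDir` column name is illustrative, not the real sheet's):

```python
import pandas as pd

# Hypothetical curated-study table with a missing entry.
df = pd.DataFrame({
    "study": ["A", "B", "C"],
    "PhaseEncodeDir": ["AP", None, "PA"],
})

# Replace missing values with an explicit "DNR" ("did not report")
# category so gaps show up as their own bar in the plots.
df = df.fillna("DNR")
print(df["PhaseEncodeDir"].tolist())  # → ['AP', 'DNR', 'PA']
```

Treating "did not report" as its own category (rather than dropping rows) keeps the bar plots honest about how often a parameter goes unreported.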

@handwerkerd
Member Author

@jsheunis I want to apologize for taking so long to respond to this. This is really great work. I feel like the "gadgety" aspects of this are nice because people really will want to interact with this information. We don't want to add too many dependencies, and this adds plotly, dash, & jupyter-dash. That said, we've informally decided that the documentation can have a few more dependencies than the functional code, so it should be possible to find a way forward.

Specifically for the parameter space, one thing on my to-do list is to fill in values for uncollected data. I mapped enough of the possible parameter space for it to be possible to calculate many additional combinations. It would be nice to do some of those calculations under-the-hood for interactive visualizations.

I know your work has been sitting dormant for a while waiting for our replies so I'd like to know what feedback or help you'd want next. If useful, I'd be happy to find a time to chat.

@jsheunis
Contributor

@handwerkerd no worries, we've all had a lot on our plates for the past months.

I think a few decisions need to be made in terms of the implementation. E.g. should the visualisations all be part of a standalone web-app, or rather individual visualisations embedded into the documentation? And if the latter, what would be good practice, e.g. via iframes or another method? I haven't looked into this much, but if I know the constraints I can start digging a bit.

Then the content itself should be updated to a state that you are happy with (similarly for @dowdlelt's list of studies and their parameters). I am not sure what exactly you mean by "calculations under-the-hood", but if you can outline them in more detail, I can implement it.

Lastly, I think it would be good if you/others could provide feedback on the current visualisations, specifically commenting if more/different graphs should be added or if they should be presented in a different way.

If a call is necessary to go through all of this, I'm happy to chat.

@handwerkerd
Member Author

I'm going to loop in @jbteves and @tsalo to see if they have any thoughts on what we'd want directly in the documentation vs a standalone web-app.

As for calculations under-the-hood, I tested and wrote down parameters that run for a bunch of TRs, TEs, number of echoes etc. Given all I recorded, there are many other parameter sets that we can easily derive. For example, for a given in-slice acceleration, SMS acceleration, and voxel size/grid, we know the first echo time and can calculate the echo spacing and the single shot acquisition time. With that info, we can linearly calculate the TR for different numbers of echoes and slices. We can share that info in the plots.
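The linear relationship described above can be sketched in a few lines. This is a simplified timing model, not the actual scanner math: the function name, the half-shot readout tail, and all the example numbers below are assumptions for illustration only.

```python
import math

def min_tr(n_slices, n_echoes, mb_factor, first_te, echo_spacing, shot_time):
    """Rough lower bound on TR (all times in seconds).

    Simplified model (an assumption, not exact scanner timing):
    each excitation reads out n_echoes EPI shots; the last echo is
    centered at first_te + (n_echoes - 1) * echo_spacing, and the
    readout finishes half a shot later. SMS acquires mb_factor
    slices per excitation, so TR scales linearly with excitations.
    """
    excitations = math.ceil(n_slices / mb_factor)
    time_per_excitation = first_te + (n_echoes - 1) * echo_spacing + shot_time / 2
    return excitations * time_per_excitation

# Illustrative numbers only: 42 slices of 3 mm (~125 mm coverage),
# 3 echoes, SMS factor 3, first TE 12 ms, echo spacing 28 ms,
# 25 ms single-shot readout.
print(round(min_tr(42, 3, 3, 0.012, 0.028, 0.025), 3))  # → 1.127
```

Because the model is linear in slices and echoes, an interactive plot could recompute TR on the fly as the user drags sliders for coverage or echo count, filling in combinations that were never measured directly.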

I'll need to think a bit more about detailed visualization improvements.

@jbteves
Collaborator

jbteves commented Oct 22, 2020

I have no strong feelings either way. The pros of it being integrated into our docs are obvious: they're right there and helpful. The con is that it takes work both on the part of @jsheunis and on maintainers to get it merged in. So if you're really wanting to contribute this to the docs, we can work on that, but if you're wanting to offer it as a quick solution and move on (understandably), it's probably better as a standalone app.

@jsheunis
Contributor

Sorry for my long silence (life...)

Thanks for the comments @jbteves and @handwerkerd. I propose the following:

  • I'll build the standalone app
  • Once that exists, I can test what it would practically mean to merge it with the current documentation. Then we can judge whether it ends up being too cumbersome to maintain long term or not.
  • Once the standalone app has the intended functionality, I'll tag @handwerkerd and @dowdlelt for updates to the parameter space and study data, respectively.

@jsheunis
Contributor

jsheunis commented Feb 19, 2021

Hi everyone, @jbteves, @dowdlelt, @emdupre, @handwerkerd, @tsalo

The standalone app is now running at: https://me-fmri-overview.herokuapp.com/

The repository is available here: https://github.com/jsheunis/me-fmri-overview

I borrowed the blue colour from the home page of the tedana report. The tables are a bit bulky and some styling aspects of the site and visualizations could be improved. Otherwise it's pretty much the same content and components as were demonstrated in the Jupyter notebooks previously, only now it's all together in a standalone app.

If you think it's useful, I could condense the separate pages on this app into one page (probably with tabs) which could then be embedded into an iframe in the tedana docs. The alternative easier option is just to add a link to the standalone app in the tedana docs.

Please send feedback and feel free to make suggestions or changes to the code/data. I'm happy to add anyone who's interested as a collaborator on the repo.

@emdupre mentioned this issue Jun 11, 2021
@jsheunis
Contributor

@emdupre Should I add a PR to reference this in the docs? Perhaps at the bottom of the resources page?

@dowdlelt
Collaborator

That reminds me - likely time to add a few more papers to this list. Not to mention, fill in the details on some of these.

Are folks happy with the column labels on the actual sheet? @jsheunis, how specific do those need to be? And if I change them, does that break the integration? For instance, I have # Slices, which is not an ideal variable name, so maybe N_Slices is better? Also, I'd assume no spaces are preferred?

@jsheunis
Contributor

I had to wrangle the data a bit to get it into a format I was comfortable working with, which included updating the column names. You can see the current column names that I'm working with in this CSV file.

If you change those column names, the corresponding keys/values have to be changed where they are used on the various pages, otherwise things will break.

We can probably add some code that automatically reads column names from some source file, once we have cleaned/reordered the data into a state that we are fine with maintaining.

@dowdlelt
Collaborator

Alright, that all makes sense. I'll leave column names as they are for the moment, though I may need to make some edits to make sure they all match across pages. Let me know if there are any other details you would like to see extracted from papers; otherwise I'll just leave it alone. One useful change would be comma-separating the Paper Focus column (so a study with multiple topics could be "pain, resting state"), so folks might have an opportunity to filter and see what people in their field do.
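Once the column holds comma-separated topics, pandas can split it into one row per topic for filtering. A minimal sketch with hypothetical rows (the study names are made up; "Paper Focus" is the column name discussed above):

```python
import pandas as pd

# Hypothetical rows; "Paper Focus" holds comma-separated topics.
df = pd.DataFrame({
    "study": ["Smith 2019", "Lee 2020"],
    "Paper Focus": ["pain, resting state", "task"],
})

# Split each entry into a list of topics, then explode so every
# (study, topic) pair gets its own row; strip stray whitespace.
tidy = df.assign(**{"Paper Focus": df["Paper Focus"].str.split(",")})
tidy = tidy.explode("Paper Focus")
tidy["Paper Focus"] = tidy["Paper Focus"].str.strip()
print(tidy["Paper Focus"].tolist())  # → ['pain', 'resting state', 'task']
```

With the data in this long format, a dropdown filter on a single focus area (e.g. "pain") is a one-line boolean mask.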

@emdupre
Member

emdupre commented Jun 16, 2021

@emdupre Should I add a PR to reference this in the docs? Perhaps at the bottom of the resources page?

Yes, please!!

Would also be great to have information there on how to add more papers, if that's something we're OK with opening up more broadly!

@tsalo
Member

tsalo commented Jul 12, 2021

This should have been closed by #745. I guess "addresses X" doesn't auto-link?
