Add parameter space visualization & data to documentation #552
Comments
@jsheunis has a fantastic Dash app for real-time fMRI that could be used as a framework for summarizing the parameter space from publications that @dowdlelt has been collecting. I'd love to see this move forward, personally. It'd be great to get feedback from others here, both on the direction as well as on specific next steps!
This would be a fantastic idea!! First we will have to decide the fields for categorization, and ranges for those parameters that are continuous, for instance TR <= 1 sec, 1 sec < TR <= 2 sec, TR > 2 sec, and similarly for in-plane resolution or slice thickness. But @dowdlelt is the best person to speak up!!
I think this is a wonderful idea. Combining what has been done so far (necessarily a non-systematic sampling of parameter space) with @handwerkerd's data on what is possible gives a more complete answer to the question of "what can I do if I use multi-echo?" I assume the categorization step could happen after data entry, and remain flexible. TRs have very rarely been sub-second (or even sub-2-seconds), but I am behind on data entry, and newer studies would be using more advanced methods. Total coverage (estimated as # slices * slice thickness) would likely be a great basis for categories. One big remaining question is which other fields are wanted. I've added more and more since I started the process, but input from others is desirable. I immediately see a need to add phase-encode direction, for example, and slice thickness (I currently only have the x/y/z voxel dimensions). Deciding this up front prevents reading and rereading the same papers.
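As a concrete illustration of categorizing after data entry, here is a minimal pandas sketch covering both the TR bins suggested above and the slices-times-thickness coverage estimate. The column names, bin edges, and values are hypothetical placeholders, not the actual sheet's:

```python
import pandas as pd

# Hypothetical study table; column names are illustrative, not the real sheet's.
df = pd.DataFrame({
    "TR": [0.8, 1.5, 2.0, 3.0],               # seconds
    "n_slices": [60, 48, 33, 30],
    "slice_thickness": [2.0, 2.5, 3.0, 3.5],  # mm
})

# Discretize the continuous TR values into the bins suggested above.
df["TR_bin"] = pd.cut(
    df["TR"],
    bins=[0, 1, 2, float("inf")],
    labels=["TR <= 1 s", "1 s < TR <= 2 s", "TR > 2 s"],
)

# Estimate total slab coverage as number of slices * slice thickness (mm).
df["coverage_mm"] = df["n_slices"] * df["slice_thickness"]

print(df)
```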
Hi everyone. I started playing with the parameter space data to get a better understanding. I reproduced @handwerkerd's figures (slightly differently) in a notebook that you can access on my fork here (in the examples folder). Here's a link to a Binder environment. And here's the figure from the notebook: I'll update this notebook soon to add the 2.5mm^3 data, and also to include some interactive plotly graphs similar to the ones I used in the Dash app. That is, if you think this would be a useful next step? Once we have a good understanding of what would be nice to plot, I could start on a version of a Dash app for the data above as well as @dowdlelt's current data from ME articles. That can then be updated as new parameters or categories are added.
Here's another visualization I put in my OHBM 2020 symposium talk. The full talk is at: https://doi.org/10.6084/m9.figshare.12649250.v1
Hi everyone. I finally got round to adding more interactive plots. My goal was to first use jupyter-dash (before creating a full Dash app) to prototype a few options for illustrating/browsing the parameter space data. I did this for:
You can have a look at the respective notebooks here:
Or you can check them out on Binder. Note that I didn't do much to clean up the study parameter space data. I think that will need some work, perhaps discretizing some values/parameters, in order to improve the visualizations. It would be great to get some feedback on these new additions, and to hear if you have suggestions for plots or features to add, before we build this into a Dash app. Also tagging @emdupre @tsalo @CesarCaballeroGaudes.
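For anyone curious what the jupyter-dash prototyping looks like in practice, a minimal inline app follows roughly this pattern (the data file and column name are placeholders, not the ones used in the actual notebooks):

```python
from jupyter_dash import JupyterDash
from dash import dcc, html
import pandas as pd
import plotly.express as px

# Placeholder file and column names; the real notebooks use their own data.
df = pd.read_csv("me_studies.csv")
fig = px.histogram(df, x="TR", title="TR across multi-echo studies")

app = JupyterDash(__name__)
app.layout = html.Div([dcc.Graph(figure=fig)])

# mode="inline" renders the app directly inside the notebook.
app.run_server(mode="inline")
```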
A little heart reaction is insufficient; I think this is really incredible, and I've been slack on revisiting it. I'm going to try to go through my spreadsheet: first, to figure out if I have all of the information columns that are useful (i.e. maybe I should add voxel volume?), and second, to decide on units (TR in s or ms? it just has to be consistent) and what to do about missing/incomplete data. Once I have an idea of the columns it should have, I'll post them here for feedback. Then, the onerous task of filling in all of the studies. But one thing at a time.
Thanks for the feedback. I was/am a bit worried that this idea might be a bit "gadgety", but if there's real interest in taking this further, that's great. When you look at the columns to include/exclude, it might be useful to start by looking at the ones that I dropped or included in this notebook. For missing data in my previous methods review, I usually just gave it a value of DNR ("did not report"), which will then also show up on the bar plots.
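For reference, the DNR convention is easy to apply programmatically; a one-line sketch with pandas (the file name is a placeholder):

```python
import pandas as pd

df = pd.read_csv("me_studies.csv")  # placeholder file name

# Mark missing entries explicitly so "did not report" shows up as its own
# category in the bar plots instead of being silently dropped.
df = df.fillna("DNR")

print(df["TR"].value_counts())  # DNR appears alongside reported values
```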
@jsheunis I want to apologize for taking so long to respond to this. This is really great work. I feel like the "gadgety" aspects of this are nice, because people really will want to interact with this information. We don't want to add too many dependencies, and this adds plotly, dash, and jupyter-dash. That said, we've informally decided that the documentation can carry a few more dependencies than the functional code, so it should be possible to find a way forward. Specifically for the parameter space, one thing on my to-do list is to fill in values for uncollected data. I mapped enough of the possible parameter space that many additional combinations can be calculated. It would be nice to do some of those calculations under the hood for the interactive visualizations. I know your work has been sitting dormant for a while waiting for our replies, so I'd like to know what feedback or help you'd want next. If useful, I'd be happy to find a time to chat.
@handwerkerd no worries, we've all had a lot on our plates over the past months. I think a few decisions need to be made in terms of the implementation, e.g. should the visualisations all be part of a standalone web app, or should individual visualisations be embedded into the documentation? And if the latter, what would be good practice, e.g. via iframes or another method? I haven't looked into this much, but if I know the constraints I can start digging a bit. Then the content itself should be updated to a state that you are happy with (similarly for @dowdlelt's list of studies and their parameters). I am not sure exactly what you mean by "calculations under the hood", but if you can outline them in more detail, I can implement them. Lastly, I think it would be good if you/others could provide feedback on the current visualisations, specifically commenting on whether more/different graphs should be added or whether they should be presented in a different way. If a call is necessary to go through all of this, I'm happy to chat.
I'm going to loop in @jbteves and @tsalo to see if they have any thoughts on what we'd want directly in the documentation vs. in a standalone web app. As for calculations under the hood: I tested and wrote down parameter sets that run for a bunch of TRs, TEs, numbers of echoes, etc. Given everything I recorded, there are many other parameter sets we can easily derive. For example, for a given in-plane acceleration, SMS acceleration, and voxel size/grid, we know the first echo time and can calculate the echo spacing and the single-shot acquisition time. With that info, we can linearly calculate the TR for different numbers of echoes and slices. We can share that info in the plots. I'll need to think a bit more about detailed visualization improvements.
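To make the under-the-hood idea concrete, here is a rough sketch of that kind of TR derivation. The timing model, function name, and all parameter values are illustrative assumptions, not @handwerkerd's actual recorded measurements:

```python
def min_tr_seconds(te1, echo_spacing, n_echoes, n_slices, sms_factor=1):
    """Rough lower bound on TR for a multi-echo EPI acquisition.

    te1           first echo time (s)
    echo_spacing  time between consecutive echoes (s); depends on
                  in-plane acceleration and the voxel grid
    n_echoes      number of echoes acquired per excitation
    n_slices      total number of slices
    sms_factor    simultaneous multi-slice acceleration factor

    Illustrative model only: real sequences add dead time, fat
    saturation, and other per-slice overhead.
    """
    # Time to acquire one slice's echo train, ending at the last echo.
    per_slice = te1 + echo_spacing * (n_echoes - 1)
    # SMS excites sms_factor slices at once, so TR scales linearly
    # with the number of slice groups.
    return per_slice * (n_slices / sms_factor)

# Example: 3 echoes, TE1 = 14 ms, 30 ms echo spacing, 60 slices, SMS 3.
print(round(min_tr_seconds(0.014, 0.030, 3, 60, sms_factor=3), 3))  # ~1.48 s
```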
I have no strong feelings either way. The pros of integrating it into our docs are obvious: the visualizations are right there and helpful. The con is that it takes work, both on the part of @jsheunis and of the maintainers, to get it merged in. So if you really want to contribute this to the docs, we can work on that, but if you want to offer it as a quick solution and move on (understandably), it's probably better as a standalone app.
Sorry for my long silence (life...). Thanks for the comments @jbteves and @handwerkerd. I propose the following:
Hi everyone (@jbteves, @dowdlelt, @emdupre, @handwerkerd, @tsalo). The standalone app is now running at: https://me-fmri-overview.herokuapp.com/ The repository is available here: https://github.com/jsheunis/me-fmri-overview I borrowed the blue colour from the home page of the tedana report. The tables are a bit bulky, and there are some styling aspects of the site and visualizations that could be improved. Otherwise it's pretty much the same content and components as were demonstrated in the Jupyter notebooks previously, only now it's all together in a standalone app. If you think it's useful, I could condense the separate pages of this app into one page (probably with tabs), which could then be embedded into an iframe in the tedana docs. The easier alternative is just to add a link to the standalone app in the tedana docs. Please send feedback, and feel free to make suggestions or changes to the code/data. I'm happy to add anyone who's interested as a collaborator on the repo.
@emdupre Should I open a PR to reference this in the docs? Perhaps at the bottom of the resources page?
That reminds me: it's likely time to add a few more papers to this list, not to mention fill in the details on some of them. Are folks happy with the column labels on the actual sheet? @jsheunis, how specific do those need to be? And if I change them, does that break the integration? For example, I have "# Slices", which is not an ideal variable name, so maybe "N_Slices" is better? Also, I'd assume no spaces is preferred?
I think I had to wrangle the data a bit to get it into a format that I was comfortable working with, and I think this included updating the column names. You can see the current column names that I'm working with in this CSV file. If you change those column names, the corresponding keys/values have to be changed wherever they are used on the various pages, otherwise things will break. We can probably add some code that automatically reads column names from some source file, once we have cleaned/reordered the data into a state that we are fine with maintaining.
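One lightweight way to do that decoupling is a single rename mapping between the sheet's labels and the keys the app uses. A sketch, with a hypothetical file name and mapping:

```python
import pandas as pd

# Hypothetical source file; the real CSV lives in the me-fmri-overview repo.
df = pd.read_csv("me_studies.csv")

# Map the sheet's column labels onto the stable keys the app uses, so a
# rename in the sheet only requires touching this one dictionary.
COLUMN_MAP = {
    "# Slices": "n_slices",
    "TR": "tr_s",
}
df = df.rename(columns=COLUMN_MAP)

print(list(df.columns))
```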
Alright, that all makes sense. I'll leave column names as they are at the moment, though I may need to make some edits to make sure they all match across pages. Let me know if there are any other details you would like to see extracted from papers; otherwise I'll just leave it alone. I think one useful thing would be to make the Paper Focus column comma-separated (so a study with multiple topics could be "pain, resting state"), so that folks have an opportunity to filter and see what people in their field do.
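Comma-separated topics are straightforward to filter on once the column is split; a small pandas sketch with made-up study entries:

```python
import pandas as pd

# Hypothetical example of a comma-separated "Paper Focus" column.
df = pd.DataFrame({
    "Study": ["Smith 2019", "Lee 2020"],
    "Paper Focus": ["pain, resting state", "task"],
})

# Split each entry into a list and give every focus its own row,
# so users can filter studies by a single topic.
focus = df.assign(**{"Paper Focus": df["Paper Focus"].str.split(", ")})
focus = focus.explode("Paper Focus")

print(focus[focus["Paper Focus"] == "resting state"])
```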
Yes, please!! It would also be great to have information there on how to add more papers, if that's something we're OK with opening up more broadly!
This should have been closed by #745. I guess "addresses X" doesn't auto-link?
Summary
A very frequent question from people who are new to multi-echo fMRI is what compromises one needs to make during acquisition to get multi-echo data. I collected some information from a single scanner & pulse sequence to help people understand the compromises better. The goal is to add text and visualizations to the documentation that will help explain these data.
Additional Detail
Next Steps
@handwerkerd is planning to eventually do this, but it could also be a good first issue for a new contributor.