
Viewer updated for FSL 5.0.10 #28

Open · wants to merge 3 commits into master

Conversation

TomMaullin (Collaborator) commented Oct 5, 2018

This PR aims to address any issues caused by displaying the new export from FSL 5.0.10 (see here).

pep8speaks commented

Hello @TomMaullin! Thanks for submitting the PR.

TomMaullin (Collaborator, Author) commented Oct 5, 2018

In this PR I have made the following changes:

  • A few of the tests' slice images and peak/cluster tables had changed slightly (although not much). I believe this was probably caused by the old exporter, which used to rethreshold all data at p>0.001 when running the call to cluster (see here).
  • Added a few basic tests for fsl_con_f_multiple, looking for contrast names and slice images in the output.
  • Deleted the test test_con_vec_image. This test only covers SPM output, so it shouldn't have been affected by this update. Visually, the image of the SPM contrast vector doesn't appear to have changed, but on each rerun the encoding of the image is now slightly different. I'm not sure why, but my best guess is that the tools used to create this image have been updated at some point since our last PR, so these tests are no longer stable.
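Incidentally, the unstable encoding described in the last bullet is typical of compressed image formats: PNG stores its pixel data zlib-compressed, so an updated toolchain can emit different bytes for an identical picture. A minimal stdlib-only sketch of the effect (illustrative only, not project code):

```python
import zlib

# The same pixel data compressed with two encoder settings yields different
# byte streams, even though the decoded content is identical. PNG payloads
# are zlib streams, so a tool update can change the file bytes without
# changing the picture at all.
pixels = bytes(range(256)) * 64

stream_a = zlib.compress(pixels, 1)  # fast setting
stream_b = zlib.compress(pixels, 9)  # thorough setting

assert stream_a != stream_b                                    # encodings differ
assert zlib.decompress(stream_a) == zlib.decompress(stream_b)  # content identical
```

This is why a byte-for-byte (or hexadecimal) comparison of regenerated images is fragile across tool versions, while a comparison of the decoded content is stable.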

TomMaullin (Collaborator, Author) commented

Hi @cmaumet ,

This PR is now ready for review and all remaining tests pass! Please let me know what you think of the changes I have made!

cmaumet (Member) commented Oct 5, 2018

Hi @TomMaullin, thanks for this update!
A few comments:

A few of the tests' slice images and peak/cluster tables had changed slightly (although not much). I believe this was probably caused by the old exporter, which used to rethreshold all data at p>0.001 when running the call to cluster (see here).

We should always be cautious when updating the ground truth of a test and make sure that we understand why this has to be done. I'm not convinced that the "rethreshold" is the reason here. Looking back at the original code (thanks a lot for pointing to it in your comment), the "rethreshold" was only used to compute the cluster label maps image; it was liberal and applied to the excursion set. To me this was a trick to get the cluster label maps (which should have been documented), but it does not affect the results.

Added a few basic tests for fsl_con_f_multiple, looking for contrast names and slice images in the output.

Excellent! Thanks!

Deleted the test test_con_vec_image.

Okay. This could be related to Tom's comment the other day about the contrast vector looking blurry. Can you double-check what tool is used? This might help us better understand what is happening here.

TomMaullin (Collaborator, Author) commented Oct 5, 2018

We should always be cautious when updating the ground truth of a test and make sure that we understand why this has to be done. I'm not convinced that the "rethreshold" is the reason here. Looking back at the original code (thanks a lot for pointing to it in your comment), the "rethreshold" was only used to compute the cluster label maps image; it was liberal and applied to the excursion set. To me this was a trick to get the cluster label maps (which should have been documented), but it does not affect the results.

Hmm yes, this would make sense as, for two of the tests which failed, all that had changed were the cluster labels (see clusTabExtract and peakTabExtract here).

Three other tests did fail, and I apologize because I misunderstood the cause of one of them before. This was a test which looked for a contrast name (e.g. tone contrast vs baseline) and then checked whether the slice image following the first line containing that name matched what was expected. However, the dataset that errored was fsl_gamma_basis, which contains an F contrast. As of the F contrast update, F contrasts are now named <contrast 1 name> & <contrast 3 name> &... etc. (e.g. tone contrast vs baseline &...), and therefore the test was getting confused and looking at the F contrast instead of the T contrast.
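A hedged sketch of the naming clash described above (the " & " joining scheme is taken from the comment; the helper and contrast names are hypothetical, not the viewer's actual code):

```python
# An F contrast is named by joining its component contrast names with " & ",
# so its name contains each T contrast name as a substring. A naive search
# for "the first line containing the name" can therefore match the F
# contrast line instead of the intended T contrast line.
def f_contrast_name(component_names):
    return " & ".join(component_names)

t_name = "tone contrast vs baseline"
f_name = f_contrast_name([t_name, "sweep contrast vs baseline"])

lines = [f_name, t_name]  # F contrast happens to be listed first
naive_hit = next(line for line in lines if t_name in line)
exact_hit = next(line for line in lines if line == t_name)

assert naive_hit == f_name  # substring search grabs the F contrast
assert exact_hit == t_name  # exact match finds the intended line
```

Matching the contrast name exactly (or anchoring the search) avoids the confusion.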

The only other tests which failed were fsl_thr_clustfwep05 and fsl_contrast_mask. Both of these seem to have failed on the slice images. Looking at the images, they both look reasonable, and I am having no problems reproducing the same images again and again, but I am unsure why they have changed since the last PR.

[slice image]

[slice image]

Okay. This could be related to Tom's comment the other day about the contrast vector looking blurry. Can you double-check what tool is used? This might help us better understand what is happening here.

These images were created with matplotlib by the Python FSL nidmresults viewer (see here). They are different contrast images from those Tom mentioned; his were generated by the Matlab SPM nidmresults viewer. I am unsure why they no longer give the same image on each run, but looking at them they seem correct (see below).

[contrast vector image]
[contrast vector image]

TomMaullin (Collaborator, Author) commented

Hi @cmaumet ,

I was just wondering if this is now okay to merge? I am not sure what else can be done about the two tests above, for which I could not find the cause of the changes.

cmaumet (Member) commented Oct 20, 2018

Hi @TomMaullin! Sorry for the delayed reply.

Hmm yes, this would make sense as, for two of the tests which failed, all that had changed were the cluster labels (see clusTabExtract and peakTabExtract here).

We had updated the code to make sure that cluster labels would remain the same (incf-nidash/nidmresults-fsl#134), and in fact that was the case when testing in incf-nidash/nidmresults-fsl#134. Do you have any sense of why the cluster ids are changing here? I am wondering if it could simply be that the cluster ids changed in FSL 5.0.10 compared to the version we had before. Can you check, or do you have another possible explanation?
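One way to make cluster ids comparable across FSL versions is to relabel clusters by a stable key (e.g. their peak coordinates) instead of trusting the ids emitted by the tool, whose ordering may change between releases. A hypothetical sketch of that idea (not the nidmresults-fsl code):

```python
# Relabel clusters deterministically: rank them by peak coordinate so the
# resulting ids do not depend on the order the tool happened to emit them in.
def stable_relabel(clusters):
    """Map each original cluster id to a rank based on its peak coordinate."""
    ordered = sorted(clusters, key=lambda c: c["peak_xyz"])
    return {c["id"]: rank for rank, c in enumerate(ordered, start=1)}

# The same two clusters reported with swapped ids by two tool versions:
v1 = [{"id": 1, "peak_xyz": (10, 20, 30)}, {"id": 2, "peak_xyz": (-4, 8, 12)}]
v2 = [{"id": 2, "peak_xyz": (10, 20, 30)}, {"id": 1, "peak_xyz": (-4, 8, 12)}]

# Both versions map onto the same stable ranking of the underlying clusters.
assert stable_relabel(v1) == {2: 1, 1: 2}
assert stable_relabel(v2) == {1: 1, 2: 2}
```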

The only other tests which failed were fsl_thr_clustfwep05 and fsl_contrast_mask. Both of these seem to have failed on the slice images. Looking at the images, they both look reasonable, and I am having no problems reproducing the same images again and again, but I am unsure why they have changed since the last PR.

It sounds like we do not have a strong assumption as to what the ground truth should be here. My take is that we should remove the test on the hexadecimal values of the images rather than modify the ground truth to match the current output. Does that sound reasonable?
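A relaxed check along those lines could assert only that the viewer output embeds a well-formed PNG, not a specific byte string. An illustrative stdlib-only sketch (the HTML snippet is a made-up stand-in for the viewer's actual output, and its base64 payload is just the 8-byte PNG signature):

```python
import base64
import re

# Stand-in for viewer HTML: the payload here is only the PNG file signature,
# not a full image, which is enough to demonstrate the relaxed check.
html = '<img src="data:image/png;base64,iVBORw0KGgo=">'

# Assert an embedded PNG is present, without pinning its exact bytes.
match = re.search(r'data:image/png;base64,([A-Za-z0-9+/=]+)', html)
assert match is not None

png_bytes = base64.b64decode(match.group(1))
assert png_bytes.startswith(b"\x89PNG\r\n\x1a\n")  # valid PNG signature
```

A test like this stays green when the encoder changes but still fails if the image goes missing or is corrupted.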

cmaumet (Member) commented Nov 12, 2018

@TomMaullin: Have you had time to look into this since we talked last week?

I've just checked and I can confirm that the peaks reported by FSL have changed between our custom fsl-dev build and FSL 5.0.10:

New version of nidmresults-examples (with FSL 5.0.10):

```shell
$ git branch
* master
$ git pull upstream master
Already up to date.
$ open fsl_default/cluster_zstat1.html
```

[screenshot, 2018-11-12 14:31: cluster_zstat1.html with FSL 5.0.10]

Old version of nidmresults-examples (with our custom FSL-dev):

```shell
$ git fetch upstream
$ git checkout -b fsl_dev upstream/fsl_dev
Checking out files: 100% (2891/2891), done.
Filtering content: 100% (615/615), 692.15 MiB | 10.94 MiB/s, done.
Branch 'fsl_dev' set up to track remote branch 'fsl_dev' from 'upstream'.
Switched to a new branch 'fsl_dev'
$ open fsl_default/cluster_zstat1.html
```

[screenshot, 2018-11-12 14:36: cluster_zstat1.html with custom FSL-dev]

This means that the excursion sets are slightly different and it explains both the change in the table coordinates and the change in the slice images.

Are you okay with this explanation?

If you are, I suggest that we:

  1. update the ground truth in a separate pull request to make it match our updated data from https://github.com/incf-nidash/nidmresults-examples.
  2. rebase the code in this PR and check that it matches the updated ground truth (with no further changes).

Will you have time to dedicate to this? Or should I get started with step 1?

TomMaullin (Collaborator, Author) commented

Hi @cmaumet ,

Hmm okay yes! I will do this and ping you when it's done!

TomMaullin (Collaborator, Author) commented

Hi @cmaumet ,

Just to confirm, did you mean the ground truth used in the tests on this repository or on nidmresults-fsl?

cmaumet (Member) commented Nov 19, 2018

Yes, that's right, I meant the ground truth of the viewer. Unless it also impacts the ground truth we have for nidmresults-fsl?
