questions and issues regarding training with nnUNetPlannerResEncL on BraTS2023 #2653

Open
neuronflow opened this issue Dec 23, 2024 · 2 comments
neuronflow commented Dec 23, 2024

I am trying to train nnUNet on BraTS 2023 data (1251 exams).
The idea is to train on all cases and then evaluate on a separate test set.

With the standard 3d_fullres config, everything works without trouble.
However, nnUNet recommends using the residual encoder presets, so I try:

nnUNetv2_train 1337 3d_fullres all -p nnUNetPlannerResEncL -num_gpus 1
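
For reference, the residual encoder presets are planned and preprocessed with a dedicated planner before training. A sketch of that step, assuming the dataset ID 1337 from the training command above:

nnUNetv2_plan_and_preprocess -d 1337 -pl nnUNetPlannerResEncL

This should write a plans file named nnUNetResEncUNetLPlans.json into the dataset's nnUNet_preprocessed folder.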

Notably, the training reaches a pseudo Dice of 1 for some classes and 0.999 for others (can this flavor of nnUNet really overfit the BraTS training set so well?).

After training, I tried to run inference on our test set. For this, I used:

nnUNetv2_predict -i /input/imagesTs -o /output/predictions -d 1337 -f all -c 3d_fullres -p nnUNetResEncUNetLPlans 

For this to work, I had to manually copy some .json files. Am I using the wrong command here?
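
For context, a sketch of the results-folder layout that nnUNetv2_predict reads the checkpoint and .json files from, assuming dataset ID 1337 and fold all as in the commands above (the dataset folder name is hypothetical):

nnUNet_results/Dataset1337_<name>/
    nnUNetTrainer__nnUNetResEncUNetLPlans__3d_fullres/
        dataset.json
        plans.json
        fold_all/
            checkpoint_final.pth

Normally both .json files should be placed there automatically when training starts.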

The resulting segmentations are terrible:
[image: screenshot of the resulting segmentation]

sten2lu commented Dec 30, 2024

Hi @neuronflow,

First off, it is not the intended behavior that .json files need to be copied around. Could you please tell me which .json files you had to copy manually, and where you copied them to?

Secondly, the residual encoder nnU-Net can overfit the training data much more easily than the standard model.

To get a better understanding of what might be going on, could you please do the following:

  1. Give me more details about the dataset you use for testing the model. Is it also the BraTS dataset? Can you provide performance metrics such as mean Dice and Dice per class?
  2. Send me the progress.png of your training, located inside the nnUNet_results folder.

Best regards,

Carsten

neuronflow commented Dec 30, 2024

Dear @sten2lu,

thanks for looking into this.

I am testing on the non-public BraTS validation set, which has a similar distribution to the training set. Due to the heavy over-segmentation, the mean Dice scores are very low, between 5 and 10% depending on the class.

I copied the plans.json:

[screenshot]

The progress:
[progress.png attached]
