I am trying to train nnUNet on BraTS 2023 data (1251 exams).
The idea is to train on all cases and then evaluate on a separate test set.
With the standard 3d_fullres config, everything works without trouble.
However, nnUNet advertises that I should use the residual encoders, so I try:
nnUNetv2_train 1337 3d_fullres all -p nnUNetPlannerResEncL -num_gpus 1
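Note that the ResEnc presets need their own plans to be generated beforehand; per the nnU-Net documentation this is done with a command of the following form (dataset ID as above):

nnUNetv2_plan_and_preprocess -d 1337 -pl nnUNetPlannerResEncL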
Notably, the training reaches a pseudo Dice of 1 for some classes and 0.999 for others (can this flavor of nnUNet really overfit the BraTS training set so well?).
After training, I tried to run inference on our test set. For this to work, I had to manually copy some .json files around. Am I using the wrong command here? The resulting segmentations are terrible.
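The predict call was of this general shape (a reconstructed sketch; paths are placeholders, and the plans identifier nnUNetResEncUNetLPlans is an assumption, matching what nnUNetPlannerResEncL writes by default):

nnUNetv2_predict -i /path/to/imagesTs -o /path/to/predictions -d 1337 -c 3d_fullres -f all -p nnUNetResEncUNetLPlans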
First off, it is not intended behavior that json files need to be copied around. Could you please tell me which json files you had to copy manually, and into which locations?
Secondly, the residual encoder nnU-Net can overfit the data much more easily than the standard model.
To get a better understanding of what might be going on, could you please do the following:
1. Give me more details about the dataset you are using to test the model. Is it also the BraTS dataset? Can you provide performance metrics such as mean Dice / Dice per class (a way to compute these is sketched after this list)?
2. Send me the progress.png of your training, located inside the nnUNet_results folder.
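If it helps, per-class and mean Dice can be computed with nnU-Net's own evaluation entry point; a minimal sketch with placeholder paths, assuming the v2 CLI:

nnUNetv2_evaluate_folder /path/to/gt_segmentations /path/to/predictions -djfile /path/to/dataset.json -pfile /path/to/plans.json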
I am testing on the non-public BraTS validation set, which has a similar distribution to the training set. Due to the heavy over-segmentation, the mean Dice scores are very low, between 5 and 10% depending on the class.