Hello, thank you for your contributions in the field of semi-supervised contrastive learning. I have gained a lot from your work!
However, during the reproduction process, I encountered some issues. When I attempted to apply the original model to the polyp dataset (kvasir-seg), I found that the performance was not ideal, with a dice score of only around 23. The log files also indicated that the dataloader filtered out many images with sizes not meeting the default trainsize.
Initially, I suspected the issue might be with the filters in data.py, so I commented out the filtering operations. This improved the training dice value to around 33, but it's clearly not reflective of the true capabilities of the model.
Next, I tried another polyp dataset, colon, and trained the original model. The best dice during training was over 85, which is quite impressive.
I then switched to the kvasir-instrument dataset mentioned in the original paper, using the original model parameters for training. However, the results were not satisfactory, similar to kvasir-seg, with a dice score of only around 20 during training.
What's strange is that the image sizes in all three datasets differ from the trainsize. Should I modify the default value of trainsize to try to improve the experimental results?
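For context on the filtering question above, one common alternative to dropping size-mismatched samples is to resize every image/mask pair to the training size at load time. The sketch below is hypothetical (the function name, the trainsize value of 352, and the file-loading details are assumptions, not the repo's actual data.py API):

```python
# Hypothetical sketch: resize every image/mask pair to a common
# training size instead of filtering out size-mismatched samples.
from PIL import Image

TRAINSIZE = 352  # assumed default; check the repo's actual value


def load_pair(image_path, gt_path, trainsize=TRAINSIZE):
    """Load an image and its mask, resized so no sample is dropped."""
    image = Image.open(image_path).convert("RGB")
    gt = Image.open(gt_path).convert("L")
    # Bilinear interpolation for images; nearest-neighbor for masks
    # so the label values stay binary after resizing.
    image = image.resize((trainsize, trainsize), Image.BILINEAR)
    gt = gt.resize((trainsize, trainsize), Image.NEAREST)
    return image, gt
```

With this approach, changing the default trainsize would only change the resize target rather than how many samples survive filtering.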
This process has left me quite perplexed. I look forward to your response and greatly appreciate your assistance.
I get the same error on Ubuntu but correct results on Windows; I think it comes from the PyTorch version. Anyway, I commented out the sorting in the object loader:

    # self.images = sorted(self.images)
    # self.gts = sorted(self.gts)

and moved it into image_loader:

    train_images = sorted(train_images)
    train_gts = sorted(train_gts)

Then the problem was solved, but I'm not sure the data augmentation still works correctly.
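The OS-dependent behavior described above is consistent with unsorted directory listings: os.listdir returns entries in arbitrary, platform-dependent order, so if only one of the two lists is sorted, image/mask pairs can misalign. A minimal, self-contained sketch of the fix (the function name and file extensions are illustrative, not the repo's actual API):

```python
import os


def paired_file_lists(image_dir, gt_dir):
    """List images and ground-truth masks in a deterministic, matching order.

    os.listdir gives no ordering guarantee, so both lists must be sorted
    the same way or the i-th image may be paired with the wrong mask.
    """
    images = sorted(
        os.path.join(image_dir, f)
        for f in os.listdir(image_dir)
        if f.lower().endswith((".jpg", ".png"))
    )
    gts = sorted(
        os.path.join(gt_dir, f)
        for f in os.listdir(gt_dir)
        if f.lower().endswith((".jpg", ".png"))
    )
    # Catch silent mismatches early rather than training on wrong pairs.
    assert len(images) == len(gts), "image/mask count mismatch"
    return images, gts
```

Sorting at listing time (rather than after augmentation-related shuffling) keeps pairs aligned while leaving any later joint shuffling of (image, mask) tuples untouched, which is why moving the sort should not by itself break data augmentation.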