accuracy is much lower #2
Hi @luhc15, I'm also trying to use this code on the VOC dataset. Could you please share your test code? Thanks a lot!
Hi guys. I've never evaluated my converted model. One thing I found is that the mean values in demo.py are incorrect: they are slightly different from the ones given by the authors. For a precise evaluation, please refer to the original Matlab code in the link below and Section 5.3 "PASCAL VOC 2012" in the paper. Have you already tried multi-scale testing?
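For reference, here is a minimal sketch of the Caffe-style preprocessing being discussed. The BGR mean values below are an assumption (they are the means commonly used by related Caffe segmentation models), so they should be checked against demo.py and the authors' original Matlab code before trusting any evaluation numbers:

```python
import numpy as np

# Hypothetical BGR mean values -- verify against the authors' original
# Matlab evaluation code; a small mismatch here shifts every prediction.
MEAN_BGR = np.array([104.008, 116.669, 122.675], dtype=np.float32)

def preprocess(image_rgb):
    """Convert an HxWx3 uint8 RGB image to a Caffe-style input:
    BGR channel order, float32, per-channel mean subtracted, CHW layout."""
    image = image_rgb[..., ::-1].astype(np.float32)  # RGB -> BGR
    image -= MEAN_BGR                                # subtract channel means
    return image.transpose(2, 0, 1)                  # HWC -> CHW for PyTorch
```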
@Littlebelly, you can refer to
@kazuto1011 I use the VOC2007 val set as my test dataset. I tried multi-scale testing but got a worse result; maybe I should check the test details.
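For comparison, this is a sketch of multi-scale test-time inference in the style described in the paper: run the model at several scales (plus horizontal flips), resize the logits back, and average the softmax probabilities. The scale set and the `align_corners` choice are assumptions and may differ from the authors' setup:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def multiscale_inference(model, image, scales=(0.5, 0.75, 1.0, 1.25, 1.5, 1.75)):
    """Average class probabilities over rescaled and flipped inputs.
    `model` maps a 1x3xHxW tensor to 1xCxhxw logits. The scale set here
    is assumed; verify against Section 5.3 of the paper."""
    _, _, H, W = image.shape
    prob_sum = 0
    for s in scales:
        x = F.interpolate(image, scale_factor=s, mode="bilinear",
                          align_corners=False)
        for flip in (False, True):
            inp = torch.flip(x, dims=[3]) if flip else x
            logits = model(inp)
            if flip:  # un-flip the prediction before accumulating
                logits = torch.flip(logits, dims=[3])
            logits = F.interpolate(logits, size=(H, W), mode="bilinear",
                                   align_corners=False)
            prob_sum = prob_sum + F.softmax(logits, dim=1)
    return prob_sum.argmax(dim=1)  # 1xHxW label map
```

If multi-scale makes results worse, the per-scale predictions themselves are worth inspecting; averaging only helps when each single-scale output is already sensible.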
@luhc15 @kazuto1011 Looking forward to your reply! |
Thank you for reporting the results! I believe no layers are skipped; instead I suspect slight differences between the models, such as the interpolation method. Maybe we should compare the intermediate values of Caffe and PyTorch.
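One way to do that comparison is to register forward hooks on the PyTorch side and diff the captured activations against the corresponding Caffe blobs layer by layer. A sketch, assuming nothing about this repo's module names (list the real ones with `model.named_modules()`):

```python
import torch

def capture_activations(model, input_tensor, layer_names):
    """Run one forward pass and return {layer_name: output tensor} for the
    named submodules, so PyTorch activations can be compared against the
    corresponding Caffe blobs. Layer names are hypothetical placeholders."""
    acts, handles = {}, []
    modules = dict(model.named_modules())
    for name in layer_names:
        def hook(_module, _inputs, output, name=name):
            acts[name] = output.detach()
        handles.append(modules[name].register_forward_hook(hook))
    with torch.no_grad():
        model(input_tensor)
    for h in handles:  # clean up so later passes are unaffected
        h.remove()
    return acts
```

The first layer whose activations diverge beyond float tolerance points at the mismatched operation (interpolation modes and padding are the usual suspects in Caffe-to-PyTorch conversions).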
I used your code to generate gray-scale label images and evaluated them with the original Matlab scripts from the Caffe version. I also tried the test data on the PASCAL VOC server; it reached 80.77 mIoU, which is much lower than PSPNet's performance on the leaderboard.
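For anyone without the Matlab scripts, the standard VOC mean IoU can be reproduced from the predicted and ground-truth label maps with a confusion matrix; this sketch assumes 21 classes and the usual 255 ignore label for PASCAL VOC:

```python
import numpy as np

def mean_iou(preds, gts, n_classes=21, ignore_index=255):
    """Accumulate a confusion matrix over (prediction, ground-truth)
    label-map pairs and return per-class IoU plus its mean over the
    classes that appear. 21 classes and ignore label 255 match VOC."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    for pred, gt in zip(preds, gts):
        mask = gt != ignore_index  # drop void pixels
        cm += np.bincount(
            n_classes * gt[mask].astype(int) + pred[mask].astype(int),
            minlength=n_classes ** 2,
        ).reshape(n_classes, n_classes)
    inter = np.diag(cm)
    union = cm.sum(axis=0) + cm.sum(axis=1) - inter
    iou = inter / np.maximum(union, 1)
    return iou, iou[union > 0].mean()
```

Running this locally on the val set should match the Matlab scripts up to rounding; if it does not, the mismatch is in the predicted label images themselves.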
Thanks for your excellent work, but I also found the accuracy problem. I tested some real-world images with the VOC model; the result of the converted PyTorch model is slightly worse than the original Caffe model for almost all images. The test code and the input image resolution are the same.
When I convert the voc101 model to the PyTorch version and test on the VOC2012 val.txt, the mean IoU is 79.6%, much lower than the 85.41% reported by the authors. Are there any other details I have ignored?