Bad results (not bad now) #12
Is scipy's misc.imresize(img, self.input_shape) really exactly the same as Matlab's imresize(img, [new_rows new_cols], 'bilinear')? |
You could save the intermediate activations from Keras and compare them with the Caffe. There are a few lines for this in utils.py I forget but they may have used a sliding window for evaluation as opposed to our rescaling which distorts the aspect ratio. |
Looking into this further, there are minor differences between the Python and Matlab image resizing. I did some experiments, and even for a toy example nothing equals Matlab's default (matlab_imresize_AA).
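The likely culprit is anti-aliasing: Matlab's imresize applies an anti-aliasing filter when downscaling by default, while plain bilinear interpolation only point-samples four neighbours. A toy numpy sketch illustrating the difference (the `bilinear_resize` helper is hypothetical, using the common pixel-center convention `src = (dst + 0.5) * scale - 0.5`):

```python
import numpy as np

def bilinear_resize(img, new_h, new_w):
    """Point-sampled bilinear resize of a 2-D array, no anti-aliasing.
    Uses the pixel-center mapping src = (dst + 0.5) * scale - 0.5."""
    h, w = img.shape
    out = np.empty((new_h, new_w))
    for i in range(new_h):
        for j in range(new_w):
            y = (i + 0.5) * h / new_h - 0.5
            x = (j + 0.5) * w / new_w - 0.5
            y0, x0 = int(np.floor(y)), int(np.floor(x))
            dy, dx = y - y0, x - x0
            # clamp neighbour indices to the image border
            ya, yb = np.clip(y0, 0, h - 1), np.clip(y0 + 1, 0, h - 1)
            xa, xb = np.clip(x0, 0, w - 1), np.clip(x0 + 1, 0, w - 1)
            out[i, j] = ((1 - dy) * (1 - dx) * img[ya, xa]
                         + (1 - dy) * dx * img[ya, xb]
                         + dy * (1 - dx) * img[yb, xa]
                         + dy * dx * img[yb, xb])
    return out

# 4x4 image with a single bright corner pixel, downscaled to 1x1:
toy = np.zeros((4, 4))
toy[0, 0] = 16.0
point_sampled = bilinear_resize(toy, 1, 1)[0, 0]  # samples only the 4 center pixels -> 0.0
anti_aliased = toy.mean()                         # a box/AA filter sees every pixel -> 1.0
```

Point-sampled bilinear completely misses the corner pixel here while an anti-aliased downscale averages it in, so pixel values (and downstream activations) drift between the two pipelines even for identical target sizes.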
@hujh14 you mentioned somewhere that you already checked the activations. Do you remember up to which point? |
@hujh14 Unfortunately, I cannot run the original code on an 8 GB GTX 1080, not even with batch size 1, due to insufficient memory. Did you manage to compile the original code with CuDNN support? Or do you have 12 GB cards? |
@jmtatsch I have the same card as yours, and it works well; it takes about 3.5 GB of memory. |
@Vladkryvoruchko are you sure it works with the Cityscapes model? It is much larger than both others... Which CUDA and CuDNN versions? |
@jmtatsch . Oh... I wrote about ADE :) |
Without flipped evaluation
Just 2.2 % missing :) |
Okay, with flipped evaluation:
Still 1.7 % missing. Will look into multi-scale evaluation. |
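For reference, flipped evaluation here means averaging the network output over the image and its horizontal mirror at test time (not a training augmentation). A minimal sketch, where `predict_fn` is a hypothetical stand-in for the network forward pass:

```python
import numpy as np

def flipped_eval(img, predict_fn):
    """Average class scores over the image and its left-right mirror.
    `predict_fn` maps an (H, W, C) image to (H, W, K) class scores."""
    scores = predict_fn(img)
    mirrored = predict_fn(img[:, ::-1])[:, ::-1]  # flip the prediction back before averaging
    return 0.5 * (scores + mirrored)
```

Averaging the two views smooths out left/right asymmetries the network has learned, which is where the small test-time gain typically comes from.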
@jmtatsch, would you mind explaining what you mean by sliced prediction? I am working on the same problem :( |
@leinxx by sliced prediction I mean cutting the image into 4x2 overlapping 713x713 slices, forwarding them through the network, and reassembling the 2048x1024 predictions from them. Please let us know if you fix further issues and get closer to the published results... |
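The 4x2 sliced prediction described above can be sketched as follows (a simplified illustration, not the repo's actual code; `predict_fn` stands in for the network, and overlapping regions are averaged):

```python
import numpy as np

def sliced_predict(img, predict_fn, crop=713, rows=2, cols=4):
    """Cut img (H, W, C) into rows x cols overlapping crop x crop tiles,
    forward each through predict_fn, and average overlapping predictions."""
    H, W = img.shape[:2]
    k = predict_fn(img[:crop, :crop]).shape[-1]   # number of classes
    acc = np.zeros((H, W, k))
    cnt = np.zeros((H, W, 1))
    ys = np.linspace(0, H - crop, rows).astype(int)  # e.g. [0, 311] for H=1024
    xs = np.linspace(0, W - crop, cols).astype(int)  # e.g. [0, 445, 890, 1335] for W=2048
    for y in ys:
        for x in xs:
            acc[y:y + crop, x:x + crop] += predict_fn(img[y:y + crop, x:x + crop])
            cnt[y:y + crop, x:x + crop] += 1
    return acc / cnt
```

Each tile is seen at the network's native 713x713 resolution, so nothing gets squashed, which is presumably why this beats rescaling the whole 2048x1024 frame and distorting the aspect ratio.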
@jmtatsch thanks a lot. I have learned much from your replication. Would you explain why sliced prediction improves the result so much? And what is flipped evaluation (do you mean flipping the image during training for data augmentation)? Could you provide more details about your training settings, e.g. batch size, epochs, etc.? |
@wtliao Unfortunately, I did not (yet) train these weights myself. |
@jmtatsch Thanks for these posts! They were really helpful! 👍 |
Hi @jmtatsch, I found in your code that the kernel size and stride of the pyramid pooling module are set to (10xlevel, 10xlevel). That is the right size for VOC and ADE20K with input size (473, 473). However, when using input size (713, 713) as in the Cityscapes case, the size obtained by (10xlevel, 10xlevel) is not identical to the original code. |
@scenarios Good catch, I will fix this in #30 and reevaluate. |
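For reference, the feature map entering the pyramid pooling module in PSPNet has side (input - 1)/8 + 1 (60 for 473, 90 for 713), and each level pools it into bins x bins regions, so the kernel should be derived from the feature size rather than hard-coded. A sketch of the arithmetic (assuming the standard bin counts {1, 2, 3, 6}):

```python
def ppm_pool_sizes(input_size, bins=(1, 2, 3, 6)):
    """Pooling kernel (= stride) per pyramid level for a square input.
    The feature map entering the PPM has side (input_size - 1) / 8 + 1."""
    feat = (input_size - 1) // 8 + 1
    return [feat // b for b in bins]

ppm_pool_sizes(473)  # [60, 30, 20, 10]; the 10xlevel shortcut happens to reproduce these
ppm_pool_sizes(713)  # [90, 45, 30, 15]; the 10xlevel shortcut does not
```

Deriving the kernels from the feature-map side makes the module correct for any input resolution, which is the fix the hard-coded values were missing.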
Hi, what results do you get after the latest changes? Are they similar to the paper, or still worse? |
@jmtatsch Did you train PSPNet on any dataset? |
No, but some training code was merged recently, so someone should have. |
Hi. Thanks for your excellent work. Could you please tell me how you get the evaluation results? Are you using the scripts in the Cityscapes repo? Thank you! |
Hello, I would also like to know how to get the evaluation results. I tried using the Cityscapes scripts, but I am getting this error when using seg_read images as my input: Traceback (most recent call last):
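For what the Cityscapes scripts ultimately compute: the headline number is class-averaged IoU over a confusion matrix. A minimal numpy sketch of that metric (assuming integer label maps and 255 as the ignore label, as in Cityscapes; this is an illustration, not the official script):

```python
import numpy as np

def mean_iou(pred, gt, n_classes, ignore=255):
    """Class-averaged intersection-over-union from integer label maps."""
    pred, gt = np.asarray(pred).ravel(), np.asarray(gt).ravel()
    keep = gt != ignore                        # drop ignored pixels
    cm = np.bincount(n_classes * gt[keep] + pred[keep],
                     minlength=n_classes ** 2).reshape(n_classes, n_classes)
    inter = np.diag(cm).astype(float)          # true positives per class
    union = cm.sum(0) + cm.sum(1) - inter      # TP + FP + FN
    iou = np.where(union > 0, inter / np.maximum(union, 1), np.nan)
    return float(np.nanmean(iou))              # classes absent from gt and pred are skipped
```

Comparing a number computed like this against the official script output is a quick sanity check that the label IDs and the ignore handling match before chasing accuracy gaps elsewhere.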
Although the converted weights produce plausible predictions,
they are not yet up to the published results of the PSPNet paper.
Current results on cityscapes validation set:
Accuracy of the published code on several validation/testing sets according to the author:
So we are still missing 79.70 - 62.60 = 17.10 % IoU
Does anyone have an idea where we lose that accuracy?