Different image resolutions train/evaluate #50
Comments
Could you do something like this: at train time, use random crops; at test/predict time, run the model on the full-resolution images? Would something like this work?
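The intuition behind training small and testing large is that convolution weights are independent of the input's spatial size: the same kernel slides over whatever extent it is given, so only the output size changes. A minimal NumPy sketch (a toy `conv2d` helper, unrelated to this repo's code) illustrates this:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def conv2d(image, kernel):
    """'Valid' 2D cross-correlation; works for any input larger than the kernel."""
    windows = sliding_window_view(image, kernel.shape)  # (H-2, W-2, 3, 3) for a 3x3 kernel
    return np.einsum('ijkl,kl->ij', windows, kernel)

kernel = np.random.default_rng(0).normal(size=(3, 3))

small = np.zeros((256, 256))
large = np.zeros((512, 512))

# Same weights, different input sizes: the output simply scales with the input.
print(conv2d(small, kernel).shape)  # (254, 254)
print(conv2d(large, kernel).shape)  # (510, 510)
```

So any layer stack built purely from such size-agnostic operations can be trained at 256x256 and evaluated at 512x512 without retraining; the mismatch error below suggests some component in the model is not size-agnostic.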
Hello, I have a similar question, and your answer about patches doesn't seem to fit what's described in the paper (and also in this video: https://www.youtube.com/watch?v=nkNjjjRxHkU). The authors clearly state that because the model is fully convolutional (and even the latent conditioning part is FC), it can be applied "out of the box" to full-resolution images while being trained on small (256, 256) crops. We naively tried loading your pretrained (256, 256) model and applying it to larger (512, 512) images, which unfortunately doesn't work: we get a size-mismatch error in the forward pass of the Sampler.
Do you think we can fix this and modify your pretrained model so it can be applied to larger images? Thank you
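Without seeing this repo's Sampler internals, a common cause of this kind of size mismatch is a component whose shape was fixed at training resolution, for example a learned positional grid that is added to the feature map. The sketch below is purely illustrative (the array names, channel count, and grid sizes are assumptions, not this codebase's internals), and shows both the failure and a common workaround:

```python
import numpy as np

# Hypothetical positional grid learned for a 16x16 latent map
# (256x256 input with 16x downsampling). At 512x512 the feature
# map becomes 32x32, so adding the grid raises a shape mismatch.
pos = np.random.default_rng(1).normal(size=(16, 16, 8))  # (H, W, C)

feat_small = np.zeros((16, 16, 8))
feat_large = np.zeros((32, 32, 8))

_ = feat_small + pos          # fine at training resolution
try:
    _ = feat_large + pos      # fails at the larger resolution
except ValueError as e:
    print("mismatch:", e)

# A common workaround: resize the learned grid to the new spatial size
# (nearest-neighbour via repeat here for simplicity; frameworks
# usually interpolate bilinearly).
pos_resized = pos.repeat(2, axis=0).repeat(2, axis=1)  # (32, 32, 8)
_ = feat_large + pos_resized  # now broadcasts cleanly
```

If the mismatch instead comes from a flatten-plus-dense layer, no resizing trick applies and that layer would need to be replaced by a convolutional equivalent.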
Hi, in the meantime, I think I have found a solution to this issue. Should I open a merge request to integrate it? Best,
Hi, thank you very much for your answer.
Hi, if a pull request is not possible or takes too long, I would be really interested in having your solution. Thank you
Hi,
thank you for the nice repo!
I have a question regarding the image dimensions. In a talk about the paper, I heard that it is possible to train on smaller crops (256x256) and then use larger image resolutions at test time (such as the entire UK/US dataset). How does this work in practice? How would I train the model on smaller images and then produce large maps once trained?
Thank you! :)
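The train-on-crops / predict-full-size workflow asked about above can be sketched as a data pipeline like the following (a toy NumPy sketch; `random_crop`, the tile size, and the batch size are hypothetical, not taken from this repo):

```python
import numpy as np

rng = np.random.default_rng(0)
full_map = rng.normal(size=(2048, 2048, 3))  # stand-in for one large map tile

def random_crop(img, size=256):
    """Sample a random size x size training crop from a larger tile."""
    y = rng.integers(0, img.shape[0] - size + 1)
    x = rng.integers(0, img.shape[1] - size + 1)
    return img[y:y + size, x:x + size]

# Training: feed batches of small random crops.
batch = np.stack([random_crop(full_map) for _ in range(8)])
print(batch.shape)  # (8, 256, 256, 3)

# Inference: a fully convolutional model would take full_map directly
# (or, if memory is a concern, overlapping tiles stitched back together).
```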