Different image resolutions train/evaluate #50

Open
matzech opened this issue Mar 11, 2023 · 5 comments
Labels
enhancement New feature or request

Comments

matzech commented Mar 11, 2023

Hi,

thank you for the nice repo!

I have a question regarding the image dimensions. In a talk about the paper, I heard that it is possible to train on smaller crops (256x256) and then, at test time, use larger image resolutions (such as the entire UK/US domain). What does this look like in practice? How would I train the model on smaller images and then produce large maps once it is trained?

Thank you! :)

matzech added the enhancement label on Mar 11, 2023
peterdudfield (Contributor) commented:
Could you do something like:

Train

  • use smaller images and validate the model on these smaller images as well

Test / Predict

  • Split the large image into smaller tiles of the same size as used in training
  • Run each tile through the model
  • Stitch the results back together

Would something like this work?
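The tiling idea above can be sketched in a few lines. This is a minimal example, not the repo's code: `model` is a hypothetical callable mapping a (tile, tile) array to a (tile, tile) array, whereas DGMR's real inputs also carry batch, time, and channel dimensions.

```python
import numpy as np

def predict_tiled(model, image, tile=256):
    """Split `image` (H, W) into non-overlapping tile x tile patches,
    run each through `model`, and stitch the outputs back together.
    Assumes H and W are exact multiples of `tile` and that `model`
    maps a (tile, tile) array to a (tile, tile) array."""
    h, w = image.shape
    out = np.empty_like(image)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            out[y:y + tile, x:x + tile] = model(image[y:y + tile, x:x + tile])
    return out
```

One caveat with non-overlapping tiles is that the stitched output can show visible seams at tile borders; running overlapping tiles and blending the overlaps is a common mitigation.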


Cirrusfloccus31 commented Feb 26, 2024

Hello,

I have a similar question, and your answer about patches doesn't seem to match what is described in the paper.

In the paper (and also in this video: https://www.youtube.com/watch?v=nkNjjjRxHkU), the authors clearly state that because the model is fully convolutional (even the latent conditioning part is fully convolutional), it can be applied "out of the box" to full-resolution images while being trained on small (256, 256) crops.
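To illustrate why a fully convolutional model is size-agnostic in principle, here is a hand-rolled "valid" convolution in NumPy (not the repo's code): the kernel weights do not depend on the input's spatial size, so the same layer runs on any resolution and only the output size changes.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 2D 'valid' convolution: slide `kernel` over `image`
    with no padding, summing elementwise products at each position."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# The same 3x3 kernel applies unchanged to 32x32 or 64x64 inputs
# (and, by the same logic, to 256x256 or 512x512 ones).
k = np.ones((3, 3))
print(conv2d_valid(np.zeros((32, 32)), k).shape)  # (30, 30)
print(conv2d_valid(np.zeros((64, 64)), k).shape)  # (62, 62)
```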

We naively tried to load your pretrained (256, 256) model and apply it to larger images of size (512, 512), which unfortunately doesn't work: we get an error in the forward pass of the Sampler because of a size mismatch.

Traceback (most recent call last):
  File "/home/germainh/dgmr/openclimatefix/bin/inference.py", line 19, in <module>
    out = inference(x)
  File "/home/germainh/dgmr/openclimatefix/bin/inference.py", line 15, in inference
    out = model(input) 
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/germainh/dgmr/openclimatefix/dgmr/dgmr.py", line 121, in forward
    x = self.generator(x)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/germainh/dgmr/openclimatefix/dgmr/generators.py", line 221, in forward
    x = self.sampler(conditioning_states, latent_dim)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/germainh/dgmr/openclimatefix/dgmr/generators.py", line 162, in forward
    hidden_states = self.convGRU1(hidden_states, init_states[3])
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/germainh/dgmr/openclimatefix/dgmr/layers/ConvGRU.py", line 97, in forward
    output, hidden_state = self.cell(x[step], hidden_state)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/germainh/dgmr/openclimatefix/dgmr/layers/ConvGRU.py", line 61, in forward
    xh = torch.cat([x, prev_state], dim=1)
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 8 but got size 16 for tensor number 1 in the list.
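For what it's worth, the error is consistent with the convolutional conditioning states scaling with the input while the sampled latent keeps its training-time spatial size. The shape arithmetic can be sketched in NumPy; the downsample factor of 32 and the channel counts below are illustrative assumptions, not the repo's exact values.

```python
import numpy as np

def conditioning_state(input_hw, downsample=32):
    # A conv stack downsamples by a fixed factor, so the state's
    # spatial size tracks the input size.
    h, w = input_hw
    return np.zeros((1, 384, h // downsample, w // downsample))

def fixed_latent(train_hw=(256, 256), downsample=32):
    # The latent is sampled at a size derived from the training
    # configuration, independent of the input fed at test time.
    h, w = train_hw
    return np.zeros((1, 768, h // downsample, w // downsample))

state = conditioning_state((512, 512))   # spatial size 16 x 16
latent = fixed_latent()                  # spatial size 8 x 8
# Concatenating along channels requires matching spatial sizes,
# hence the "Expected size 8 but got size 16" error above:
try:
    np.concatenate([state, latent], axis=1)
except ValueError:
    print("shape mismatch:", state.shape[2:], "vs", latent.shape[2:])
```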

Do you think the problem can be fixed so that your pretrained model can be applied to larger images?

Thank you


matzech commented Feb 26, 2024

Hi,

in the meantime, I think I have found a solution to this issue. Should I open a pull request to integrate it?

Best,
Matthias

Cirrusfloccus31 commented:
Hi,

Thank you very much for your answer.
We would be very interested in your solution!
A pull request would be great if possible.

Thank you

Cirrusfloccus31 commented:
Hi,

If a pull request is not possible or would take too long, we would still be very interested in your solution.

Thank you
