The DAT model can be very heavy, even on a 3090, when a lot of images need to be upscaled. Is there any chance you could implement multi-GPU support so that a second card can be used?
I previously used PyTorch Lightning to implement multi-GPU training, rather than the raw torch multi-GPU API you mentioned. The challenge lies in synchronizing the degradation model across devices. That said, I also only use a single 24 GB GPU to train the DAT model; you need to decrease the batch size so that it fits in memory.
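For inference, one simple alternative to writing true multi-GPU code is to partition the image list and run one single-GPU process per card. Below is a minimal sketch of the partitioning step only; the per-GPU launch (e.g. pinning each process with CUDA_VISIBLE_DEVICES) and the DAT inference script itself are assumptions, not part of this repository:

```python
def split_across_gpus(image_paths, num_gpus):
    """Round-robin partition of image paths so each GPU gets its own batch.

    Each returned bucket would then be fed to a separate single-GPU
    inference process (hypothetically pinned to one card via the
    CUDA_VISIBLE_DEVICES environment variable).
    """
    buckets = [[] for _ in range(num_gpus)]
    for i, path in enumerate(image_paths):
        buckets[i % num_gpus].append(path)
    return buckets

# Example: 5 images spread across 2 GPUs
print(split_across_gpus(["a.png", "b.png", "c.png", "d.png", "e.png"], 2))
# → [['a.png', 'c.png', 'e.png'], ['b.png', 'd.png']]
```

This keeps both cards busy without any synchronization between them, which sidesteps the degradation-model sync issue entirely for inference (training is a different story).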
I have no clue how to use torch multi-GPU myself.
Thanks.