As similarly noted in /issues/410, I've noticed that for multi-GPU runs, only one GPU seems to be processing at a time, even though the memory is split between all of them. In my runs, splitting a 3000x2000 pixel image across 4 GPUs, the active GPU changes only a handful of times over the whole run.
How feasible would it be to use a swap space in system memory (or even the pagefile) for multiple 'virtual' GPUs? The single active 'virtual GPU' would have its VRAM contents loaded in from swap while its layers are being updated, then swapped back out to make room for the next 'virtual' GPU's memory.
A single high-VRAM GPU plus extra Optane/NVMe/RAM is much easier to get than multiple high-VRAM cards. Is there anything, technically speaking, about neural-style that would prevent this?
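To make the idea concrete, here is a minimal pure-Python sketch of the swap scheme described above. It is only an illustration, not anything from neural-style itself: a fixed "VRAM" slot holds at most one virtual GPU's layer weights at a time, while the rest live in a host-side swap dict. All names (`VramSwapper`, `activate`, etc.) are hypothetical.

```python
class VramSwapper:
    """Keeps one virtual GPU's weights 'resident' at a time; the rest
    sit in host memory (the stand-in for system RAM / pagefile swap)."""

    def __init__(self):
        self.host_swap = {}   # virtual-GPU id -> weights, in host memory
        self.resident = None  # id of the virtual GPU currently 'in VRAM'
        self.vram = None      # the single VRAM slot

    def register(self, vgpu_id, weights):
        self.host_swap[vgpu_id] = weights

    def activate(self, vgpu_id):
        """Swap the requested virtual GPU's weights into the VRAM slot."""
        if self.resident == vgpu_id:
            return self.vram                     # already resident
        if self.resident is not None:
            self.host_swap[self.resident] = self.vram  # evict to host swap
        self.vram = self.host_swap.pop(vgpu_id)  # page in the new occupant
        self.resident = vgpu_id
        return self.vram


swapper = VramSwapper()
for i in range(4):
    swapper.register(i, [float(i)] * 3)  # fake per-layer weights

for i in range(4):
    w = swapper.activate(i)  # only one virtual GPU resident at a time
    print(i, w[0])
```

In a real implementation the expensive part would be the host-to-device copies themselves, which is why pinned memory and fast storage (Optane/NVMe) matter, but the bookkeeping would look roughly like this.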