Understanding Multidevice Strategy #81
@gateway The multidevice strategy simply splits the model's layers across the different devices. The layer order in the model dictates which layers go on which device, based on your specified `-multidevice_strategy` indices.
Edit: This may help: https://www.reddit.com/r/deepdream/comments/dnsg65/multigpu_strategy/
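To make that concrete, here is a minimal sketch of what an index-based layer split looks like in PyTorch. The `ModelParallel` class and the exact split semantics are illustrative assumptions, not neural-style-pt's actual code:

```python
# Minimal sketch of a "-multidevice_strategy"-style split (illustrative,
# not neural-style-pt's implementation). Layers before the first split
# index stay on the first device, and so on; activations are moved
# between devices whenever they cross a split point.
import torch
import torch.nn as nn

class ModelParallel(nn.Module):
    def __init__(self, layers, devices, split_indices):
        super().__init__()
        assert len(devices) == len(split_indices) + 1
        self.layers = nn.ModuleList(layers)
        self.layer_devices = []
        d = 0
        for i, layer in enumerate(self.layers):
            if d < len(split_indices) and i >= split_indices[d]:
                d += 1  # crossed a split point: continue on the next device
            layer.to(devices[d])
            self.layer_devices.append(devices[d])

    def forward(self, x):
        for layer, dev in zip(self.layers, self.layer_devices):
            x = layer(x.to(dev))  # .to() is a no-op if x is already on dev
        return x

# Example: 5 layers split as [0-1] -> dev0, [2-3] -> dev1, [4] -> dev2.
layers = [nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
          nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(),
          nn.Conv2d(8, 3, 3, padding=1)]
devices = ([f"cuda:{i}" for i in range(3)]
           if torch.cuda.device_count() >= 3 else ["cpu"] * 3)
model = ModelParallel(layers, devices, split_indices=[2, 4])
print(model(torch.randn(1, 3, 64, 64)).shape)
```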
Was trying this on two Google A100s in their cloud; the devices list and the parameters I used are below. It's not clear to me what I am doing wrong here. Could you please help? I ran some other code to validate that these CUDA devices could be detected by PyTorch.
(parameters and nvidia-smi table listing the two A100 devices omitted)
@IridiumMaster The …
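For reference, a typical multi-GPU invocation of neural-style-pt looks like this (the image names, size, and split point are placeholders):

```sh
# Split the network across GPUs 0 and 1: layers before index 6 run on
# cuda:0 and the rest on cuda:1 (the split point 6 is illustrative).
python neural_style.py -content_image content.jpg -style_image style.jpg \
    -gpu 0,1 -multidevice_strategy 6 -backend cudnn -image_size 1536
```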
Thanks kindly, that worked very well for me.
@ProGamerGov In neural-style-pt/examples/scripts/starry_stanford_bigger.sh you aren't using the multiple-GPU setting for the lower-resolution images. Is that because there is no benefit (in speed or memory) from splitting the layers over multiple GPUs at those lower resolutions? I'm just trying to get an understanding of when multiple GPUs would be best used.
@robertgoacher Using multiple GPUs can be a bit slower than using a single GPU. There's also a small increase in memory usage from using multiple GPUs, I think.
@ProGamerGov Thank you so much for your reply; I really appreciate it. I think I understand this now, but please correct me if I'm wrong. So you need the multiple-GPU strategy for high-resolution style transfers because an individual GPU doesn't normally have enough memory to do the inference? If you have a GPU with lots of memory (for example, an NVIDIA A100 with 40GB) you might be able to complete a high-resolution render on that GPU alone, without the multiple-GPU strategy? But if you do need multiple GPUs, you can split the processing (and therefore the memory usage) across them, at the cost of some speed and a small increase in total memory usage?
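If you want to see that overhead directly, one option (a sketch; the actual iteration is elided) is to read back PyTorch's peak-memory counters for each device:

```python
# Compare per-GPU peak memory for a run (sketch): reset the counters,
# run one style-transfer iteration, then print each device's peak.
import torch

for i in range(torch.cuda.device_count()):
    torch.cuda.reset_peak_memory_stats(i)

# ... run one iteration of the transfer here ...

for i in range(torch.cuda.device_count()):
    peak_mib = torch.cuda.max_memory_allocated(i) / 2**20
    print(f"cuda:{i} peak allocation: {peak_mib:.0f} MiB")
```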
I have been trying to figure out how to max out both of the GPUs in my system. GPU 0 has the most memory.
I'm trying to understand -multidevice_strategy: how many layers are there? It's not very clear to me what would be best for 2 GPUs, one with more memory than the other, or at least a starting-off point.
I have just tried the value of 20 and this was the result…
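One rough starting point for two mismatched cards is to split the layer list in proportion to each card's total memory. In this sketch, NUM_LAYERS = 37 is an assumption (the length of torchvision's VGG-19 `features`); neural-style-pt inserts content/style loss modules into the network, so print the model to get the real count:

```python
# Rough "-multidevice_strategy" starting point for mismatched GPUs
# (a sketch, not a tuned recipe): give each card a share of the layers
# proportional to its total memory. Assumes at least two CUDA devices.
import torch

NUM_LAYERS = 37  # assumption: torchvision VGG-19 'features' length

mems = [torch.cuda.get_device_properties(i).total_memory
        for i in range(torch.cuda.device_count())]
split = round(NUM_LAYERS * mems[0] / sum(mems))
print(f"Try: -gpu 0,1 -multidevice_strategy {split}")
```

Keep in mind that the early layers hold the largest activations (full spatial resolution), so the first device fills up faster than its layer count suggests; if GPU 0 runs out of memory, nudge the split below this estimate.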