[docs] MPS #28016
Conversation
Very nice - thanks for reworking and tidying up!
Note: Most of the strategies introduced in the [single GPU section](perf_train_gpu_one) (such as mixed precision training or gradient accumulation) and [multi-GPU section](perf_train_gpu_many) are generic and apply to training models in general, so make sure to have a look at them before diving into this section.

<Tip warning={true}>

Some PyTorch operations are not implemented in MPS yet and will throw an error. To avoid this, you should set the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU kernels instead (you'll still see a `UserWarning`).
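As a side note, here is a minimal sketch of what enabling the fallback looks like in practice (assuming a PyTorch build with MPS support; the variable has to be set before `torch` is imported):

```python
import os

# Must be set before importing torch so that operations missing an MPS kernel
# fall back to the CPU implementation instead of raising an error.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import torch

device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
x = torch.ones(3, device=device)
print(x.device)
```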
Is there a way to have the `Trainer` use the CPU entirely and ignore the MPS backend?
I think you can set `use_cpu=True` here, but cc'ing @pacman100 who'll know more about it 🙂
use_cpu: bool = field(
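For reference, a minimal sketch of forcing CPU-only training through `TrainingArguments` (assuming a transformers version where `use_cpu` is available; the output directory is a placeholder):

```python
from transformers import TrainingArguments

# use_cpu=True tells the Trainer to ignore any available accelerator (MPS or CUDA)
# and run everything on the CPU.
args = TrainingArguments(
    output_dir="out",  # placeholder output directory
    use_cpu=True,
)

# The args are then passed to the Trainer as usual:
# trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
```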
2. Distributed setups `gloo` and `nccl` are not working with the `mps` device. This means that currently only a single GPU of `mps` device type can be used.
Is this no longer the case?
I believe it's still true; I didn't see `mps` among the supported backends for `torch.distributed` (included in the second-to-last paragraph of the new doc).
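A quick sketch of how to check this locally (the listed backends are the ones PyTorch ships; none of them target the `mps` device):

```python
import torch
import torch.distributed as dist

# The built-in distributed backends (gloo, nccl, mpi) don't support the mps device,
# so on Apple silicon training is limited to a single `mps` device.
if dist.is_available():
    print("gloo available:", dist.is_gloo_available())
    print("nccl available:", dist.is_nccl_available())
    print("mpi available:", dist.is_mpi_available())

# Typical single-device selection on a Mac
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
print(device)
```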
* mps docs
* toctree
As part of a larger effort to clean up the `Trainer` API docs in #27986, this PR moves the Trainer for accelerated PyTorch training on Mac section to the currently empty Training on Specialized Hardware page. Other updates include rewriting it a bit so it doesn't sound like it's copied directly from the blog post, and removing the link to the paywalled article for setup 🙂