integrate new differentiable fp8 conversion funcs into Float8NoCompileLinear #1496
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/1496
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit 49373f1 with merge base d42a382.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
integrate new differentiable fp8 conversion funcs into Float8NoCompileLinear

ghstack-source-id: 8f945be34787c3a15fa886826c8261bc1116539c
ghstack-comment-id: 2569853141
Pull Request resolved: pytorch#1496
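The differentiable conversion funcs named in the title are not shown in this thread. As a rough orientation (a minimal sketch, not the PR's actual torchao code), a differentiable fp8 conversion can be written as a `torch.autograd.Function` that quantizes in `forward` and passes the gradient straight through in `backward`. The class name and the fake-quant round-trip below are assumptions made for the example:

```python
import torch

class ToFP8RoundTrip(torch.autograd.Function):
    """Minimal sketch of a differentiable fp8 conversion.

    Assumption: the real funcs likely keep tensors in fp8 for the matmul
    kernels; here we round-trip back to the input dtype so the example
    runs standalone.
    """

    @staticmethod
    def forward(ctx, tensor: torch.Tensor) -> torch.Tensor:
        # Quantize to float8 (e4m3) and back, emulating the precision loss.
        return tensor.to(torch.float8_e4m3fn).to(tensor.dtype)

    @staticmethod
    def backward(ctx, grad_output: torch.Tensor) -> torch.Tensor:
        # Straight-through: the gradient passes the cast unchanged,
        # which is what makes the conversion "differentiable".
        return grad_output

x = torch.randn(4, 8, requires_grad=True)
ToFP8RoundTrip.apply(x).sum().backward()
assert torch.equal(x.grad, torch.ones_like(x))
```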
```python
        return grad_input, grad_weight, None, None, None


class matmul_with_args_in_fp8(torch.autograd.Function):
```
delete this?
Done
No description provided.
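Since the PR carries no description, here is a hedged sketch of the shape of the `matmul_with_args_in_fp8` function reviewed above. A `torch.autograd.Function.backward` must return one gradient per `forward` argument, which is why the diff ends with `return grad_input, grad_weight, None, None, None`: two tensor gradients plus `None` for each of three non-tensor arguments. The argument names (`config`, `linear_mm_config`, `kernel_algo`) and the fp8 round-trip are placeholders, not the PR's actual signature:

```python
import torch

class matmul_with_args_in_fp8(torch.autograd.Function):
    """Sketch of a five-argument fp8 matmul; the fp8 round-trip stands
    in for the real conversion kernels integrated by this PR."""

    @staticmethod
    def forward(ctx, input, weight, config, linear_mm_config, kernel_algo):
        # Hypothetical non-tensor args would select scaling/kernel behavior;
        # they are unused in this sketch.
        input_fp8 = input.to(torch.float8_e4m3fn).to(input.dtype)
        weight_fp8 = weight.to(torch.float8_e4m3fn).to(weight.dtype)
        ctx.save_for_backward(input_fp8, weight_fp8)
        return input_fp8 @ weight_fp8.t()

    @staticmethod
    def backward(ctx, grad_output):
        input_fp8, weight_fp8 = ctx.saved_tensors
        grad_input = grad_output @ weight_fp8      # (B, out) @ (out, in) -> (B, in)
        grad_weight = grad_output.t() @ input_fp8  # (out, B) @ (B, in)  -> (out, in)
        # One entry per forward argument: tensors get gradients,
        # the three non-tensor args get None.
        return grad_input, grad_weight, None, None, None
```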