fix bug in tl.store mask for kernel _to_fp8_row_major_t_and_non_t #1516
Conversation
Dr. CI: See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/1516. As of commit 96ee5ee with merge base f86fda9: ❌ 1 new failure. (This comment was automatically generated by Dr. CI and updates every 15 minutes.)
Looks like there is a gap in test coverage. Is it doable to add a test to this PR that fails before the fix and passes after it?
Ah, thanks for the reminder. I just updated the e2e training test with a new test case using large inputs, which I've confirmed fails without this change and passes with it.
Discovered this bug while prototyping torchtitan integration with float8nocompile in pytorch/torchtitan#778; the symptom was that grads went to NaN during training.
The issue was a mismatch between the output offsets used to write the transposed tensor and the offsets used to construct the output mask: the write offsets were correct, but the row/col offsets in the mask were accidentally swapped.
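For illustration, here is a minimal sketch of the failure mode. This is not the actual `_to_fp8_row_major_t_and_non_t` kernel; the kernel name, block sizes, and the plain-transpose setup are assumptions made for the example. The point it demonstrates is that when a kernel stores a transposed tile, the `tl.store` mask must be built from the same swapped row/col offsets as the store pointers.

```python
import torch
import triton
import triton.language as tl


@triton.jit
def _transpose_kernel(
    in_ptr, out_ptr,
    M, N,                      # input is (M, N); output is (N, M)
    stride_im, stride_in,      # input strides
    stride_on, stride_om,      # output strides
    BLOCK_M: tl.constexpr, BLOCK_N: tl.constexpr,
):
    pid_m = tl.program_id(0)
    pid_n = tl.program_id(1)
    rows = pid_m * BLOCK_M + tl.arange(0, BLOCK_M)  # input row offsets
    cols = pid_n * BLOCK_N + tl.arange(0, BLOCK_N)  # input col offsets

    # Load a (BLOCK_M, BLOCK_N) tile of the input, masked against (M, N).
    in_offs = rows[:, None] * stride_im + cols[None, :] * stride_in
    in_mask = (rows[:, None] < M) & (cols[None, :] < N)
    tile = tl.load(in_ptr + in_offs, mask=in_mask, other=0.0)

    # Store the transposed tile: output element (col, row) <- input (row, col).
    # Both the pointer arithmetic AND the mask use cols for the output's first
    # dim and rows for the second. The class of bug fixed in this PR is a mask
    # with the bounds swapped, e.g. (cols[:, None] < M) & (rows[None, :] < N),
    # which bounds-checks the store against the wrong dimensions.
    out_offs = cols[:, None] * stride_on + rows[None, :] * stride_om
    out_mask = (cols[:, None] < N) & (rows[None, :] < M)
    tl.store(out_ptr + out_offs, tl.trans(tile), mask=out_mask)
```

A swapped store mask only misbehaves on partial tiles at the tensor boundary, where it can drop valid elements or write out of bounds; that is consistent with why small, block-aligned inputs passed and the large-input test case above was needed to reproduce the bug. A hypothetical harness in the same spirit, with shapes chosen so they are not multiples of the block size:

```python
def transpose(x: torch.Tensor) -> torch.Tensor:
    M, N = x.shape
    out = torch.empty(N, M, device=x.device, dtype=x.dtype)
    BLOCK_M, BLOCK_N = 64, 64
    grid = (triton.cdiv(M, BLOCK_M), triton.cdiv(N, BLOCK_N))
    _transpose_kernel[grid](
        x, out, M, N,
        x.stride(0), x.stride(1),
        out.stride(0), out.stride(1),
        BLOCK_M=BLOCK_M, BLOCK_N=BLOCK_N,
    )
    return out


# 1000 and 1500 are not multiples of 64, so boundary tiles exercise the mask;
# this comparison fails with the swapped mask and passes with the correct one.
x = torch.randn(1000, 1500, device="cuda")
torch.testing.assert_close(transpose(x), x.t().contiguous())
```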