[train] fix param tensor references (#906)
Remove the `.detach()` call when storing the tensor which doesn't have the consteval trace. This makes the tensor in our runtime and its analogous tensor in torch share the same reference, so any changes to the underlying data are reflected in both our runtime and torch. Before this change, the tensors were not referencing the same object, but they still shared the data; that is why the optimizer on CPU (torch) worked even before. However, if we changed the underlying data of our tensor (via `our_tensor.data = new_data`), as happens when running the optimizer on device, the change would not be reflected in the original tensor in torch. Add assertions to make sure we are sharing the same tensor with torch.

Also fixes an issue with registering hooks for multiple gradients: the lambda was capturing the gradient id by reference instead of by value.
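A minimal PyTorch sketch of the distinction (the names `param`, `detached`, and `shared` are illustrative, not the project's actual code): in-place mutation stays visible through a detached copy because the storage is shared, but rebinding `.data`, as a device-side optimizer step does, only propagates through a shared reference.

```python
import torch

# Illustrative sketch: why storing a detached tensor breaks .data rebinding.
param = torch.nn.Parameter(torch.ones(2, 2))

detached = param.detach()  # new tensor object, initially same storage
shared = param             # same tensor object, same reference

# In-place updates (e.g. a CPU optimizer step) are visible through both,
# since the underlying storage is still shared.
param.data.add_(1.0)
assert torch.equal(detached, param)

# Rebinding .data (e.g. a device-side optimizer step) swaps the storage;
# only the shared reference observes the new data.
param.data = torch.zeros(2, 2)
assert shared is param
assert shared.data_ptr() == param.data_ptr()
assert detached.data_ptr() != param.data_ptr()
```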
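The hook-registration bug is the classic capture-by-reference pitfall. A hedged Python analogue (the `grads` dict and the loop here are hypothetical, not the project's actual hook code): every closure resolves the loop variable late, so all hooks would see the last gradient id unless the id is bound by value.

```python
import torch

params = [torch.nn.Parameter(torch.randn(3)) for _ in range(3)]
grads = {}

# Buggy pattern: the lambda captures grad_id by reference, so every hook
# resolves it to the final loop value when backward() eventually fires:
#   p.register_hook(lambda g: grads.update({grad_id: g}))
#
# Fixed pattern: bind the id by value, here via a default argument.
for grad_id, p in enumerate(params):
    p.register_hook(lambda g, gid=grad_id: grads.update({gid: g}))

loss = sum((p * p).sum() for p in params)
loss.backward()
assert sorted(grads) == [0, 1, 2]  # each hook stored under its own id
```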
Showing 2 changed files with 29 additions and 20 deletions.