In the original code there were places that prevented a user from running the code on a CPU. In the new code I have added the simple Torch device-selection pattern described in the PyTorch migration guide:
```python
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
```
With this addition in each A2CU and A3CU class instantiation, a user no longer has to pass any device information if they do not want to or do not know how to. The code auto-detects the available device and runs as expected.
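As a rough illustration of the pattern (the constructor signature and parameter names below are assumptions, not the exact API of this repository), it looks roughly like this inside a class initializer:

```python
import torch

class A2CU:
    # Hypothetical sketch: the real constructor in the repo may take
    # additional arguments (checkpoints, batch size, etc.).
    def __init__(self, device=None):
        # If the caller did not specify a device, pick CUDA when it is
        # available and fall back to CPU otherwise.
        if device is None:
            device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
        self.device = torch.device(device)

# Usage: a user can now instantiate the class without knowing about devices.
# metric = A2CU()          # auto-detects CUDA or CPU
# metric = A2CU("cuda:1")  # passing an explicit device still works
```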
I also made a few edits to the README to spell out the install dependencies more explicitly, since SentencePiece is required by the T5 tokenizer you use. In addition, I fixed the sample Python snippets in the README so they run as written (they were missing some commas).
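For reference, this is the kind of call that pulls in the SentencePiece requirement (the checkpoint name here is only an example, not necessarily the one this repository uses):

```python
# Loading a T5 tokenizer from Hugging Face transformers needs the
# `sentencepiece` package installed (pip install sentencepiece);
# without it, from_pretrained fails with an import error.
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")  # example checkpoint
print(tokenizer.tokenize("summary content unit evaluation"))
```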
Important: I did not change any of the logic in your solution. I only made things simpler for users who want to download and run the code without needing to know the CPU/GPU specifics of Torch.