
[core][compiled graphs] Support arbitrary torch.dtypes when passing through shared memory #48957

Open · wants to merge 2 commits into master

Conversation

@stephanie-wang (Contributor) commented Nov 27, 2024

Why are these changes needed?

Some torch.dtypes don't have a numpy equivalent, but we use numpy to store tensor data zero-copy in the object store. To support these tensors, we first view the tensor with a common dtype (uint8) and then view that as a numpy array. During deserialization, we view the array back as the original dtype.

Closes #48141.
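
A minimal sketch of the round trip this enables (the helper names below are illustrative, not Ray's actual internal API):

```python
import numpy as np
import torch

def _serialize(tensor: torch.Tensor):
    # bfloat16 (and a few other torch dtypes) have no numpy equivalent,
    # so reinterpret the tensor's bytes as uint8 before converting.
    return tensor.view(torch.uint8).numpy(), tensor.dtype

def _deserialize(arr: np.ndarray, dtype: torch.dtype) -> torch.Tensor:
    # View the raw bytes back as the original dtype; no copy is made.
    return torch.from_numpy(arr).view(dtype)

t = torch.randn(4, dtype=torch.bfloat16)
arr, dtype = _serialize(t)
assert torch.equal(_deserialize(arr, dtype), t)
```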

# Numpy does not have an equivalent dtype for all torch dtypes, so
# instead of casting directly to numpy, we first use a view with a
# common dtype and then view as numpy array.
return (tensor.view(torch.uint8).numpy(), tensor.dtype)
Member


What will happen if the tensor contains a value greater than 255?

Contributor


I think it will be broken down into multiple uint8 values -- the underlying bytes don't change.
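
For illustration (not from the PR itself), a value that doesn't fit in a single byte is simply spread across multiple uint8 entries, and viewing back recovers it exactly:

```python
import torch

t = torch.tensor([300], dtype=torch.int16)
as_bytes = t.view(torch.uint8)         # two bytes per element, e.g. [44, 1] on little-endian
restored = as_bytes.view(torch.int16)  # tensor([300], dtype=torch.int16)
assert torch.equal(restored, t)
```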

Successfully merging this pull request may close these issues.

[core][compiled graphs] Support all torch.dtypes for tensors sent through shared memory channels