Some of the most popular models provide weights in bfloat16, which unfortunately cannot be loaded on CPU because `Matmul::eval_cpu` only supports float32.

I know CPU support is not a priority, but it would be great if my code could run on platforms other than mac arm64, even if very slowly.
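One common workaround until bfloat16 matmul lands on CPU is to upcast the weights to float32 after loading. This is lossless, because bfloat16 is by design just the top 16 bits of an IEEE-754 float32: the upcast is a 16-bit left shift that zero-pads the mantissa. A minimal sketch of that bit-level relationship in plain Python (illustrative only, not MLX API):

```python
import struct

def bf16_bits_to_f32(bits: int) -> float:
    """Upcast a raw bfloat16 bit pattern to a Python float.

    bfloat16 keeps the sign, the full 8-bit exponent, and the top
    7 mantissa bits of float32, so shifting left by 16 recovers an
    exact float32 value.
    """
    return struct.unpack("<f", struct.pack("<I", (bits & 0xFFFF) << 16))[0]

# 0x3F80 is 1.0 in bfloat16 (same leading bits as float32 0x3F800000)
print(bf16_bits_to_f32(0x3F80))  # 1.0
print(bf16_bits_to_f32(0x4000))  # 2.0
print(bf16_bits_to_f32(0xBF80))  # -1.0
```

In practice this means a framework can always fall back to a float32 CPU matmul for bfloat16 inputs by casting on load (e.g. with an `astype`-style conversion), at the cost of doubling the memory for the weights.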
Maybe this is also interesting to look at: https://github.com/microsoft/BitNet
Are there plans for supporting integer tensors in tensordot/matmul?
We're not opposed to having integer support for matmul, but it's not an active priority at the moment.