
[Feature] Matmul for CPU #1511

Open
zcbenz opened this issue Oct 22, 2024 · 3 comments
Labels
enhancement New feature or request

Comments

zcbenz (Contributor) commented Oct 22, 2024

Some of the most popular models provide their weights in bfloat16, which unfortunately cannot be loaded on the CPU because Matmul::eval_cpu only supports float32.

I know CPU support is not a priority, but it would be great if my code could run on platforms other than mac arm64, even if very slowly.
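A minimal sketch of the limitation and a cast-to-float32 workaround, assuming the MLX Python API (the shapes, error handling, and workaround are illustrative, not part of the report):

```python
import mlx.core as mx

# Weights shipped in bfloat16, as in the models mentioned above
# (sizes here are arbitrary).
w = mx.random.normal((512, 512)).astype(mx.bfloat16)
x = mx.random.normal((1, 512)).astype(mx.bfloat16)

try:
    # At the time of this report the CPU matmul kernel only handled
    # float32, so forcing evaluation on the CPU stream raises an error.
    y = mx.matmul(x, w, stream=mx.cpu)
    mx.eval(y)
except Exception as e:
    print("bfloat16 matmul on CPU failed:", e)

# Workaround: cast to float32 first (at a memory cost for the weights).
y = mx.matmul(x.astype(mx.float32), w.astype(mx.float32), stream=mx.cpu)
mx.eval(y)
```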

awni added the enhancement (New feature or request) label Oct 22, 2024
thegodone commented

Maybe this would also be interesting to look at: https://github.com/microsoft/BitNet

polvalente commented

Are there plans for supporting integer tensors in tensordot/matmul?

awni (Member) commented Nov 25, 2024

We're not opposed to having integer support for matmul, but it's not an active priority at the moment.
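In the meantime, a possible workaround for integer inputs (a sketch, not an MLX recommendation) is to cast to float32 before the matmul and cast the result back, which is exact only while every intermediate value fits in float32's 24-bit integer range:

```python
import mlx.core as mx

a = mx.array([[1, 2], [3, 4]], dtype=mx.int32)
b = mx.array([[5, 6], [7, 8]], dtype=mx.int32)

# Cast to float32 for the matmul, then cast the result back to int32.
# Exact as long as all products and sums stay within |value| <= 2**24.
c = mx.matmul(a.astype(mx.float32), b.astype(mx.float32)).astype(mx.int32)
print(c)  # [[19, 22], [43, 50]]
```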
