musa: enable building fat binaries, enable unified memory, and disable Flash Attention on QY1 (MTT S80) #9526
Merged
Conversation
ngxson reviewed Sep 18, 2024
yeahdongcn force-pushed the upstream_master branch 2 times, most recently from 98bf998 to abbc1f2 on September 18, 2024 at 09:54
slaren reviewed Sep 18, 2024 (twice)
yeahdongcn force-pushed the upstream_master branch from abbc1f2 to df79623 on September 19, 2024 at 01:21
slaren approved these changes Sep 19, 2024
slaren reviewed Sep 20, 2024
yeahdongcn force-pushed the upstream_master branch from df79623 to de1a9b9 on September 22, 2024 at 05:12
yeahdongcn changed the title from "musa: enable building fat binaries, enable VMM support, and disable Flash Attention on QY1 (MTT S80)" to "musa: enable building fat binaries, enable unified memory, and disable Flash Attention on QY1 (MTT S80)" on Sep 22, 2024
* Signed-off-by: Xiaodong Ye <[email protected]>
* …_mat_batched_cublas Signed-off-by: Xiaodong Ye <[email protected]>
* Signed-off-by: Xiaodong Ye <[email protected]>
* Signed-off-by: Xiaodong Ye <[email protected]>
yeahdongcn force-pushed the upstream_master branch from de1a9b9 to 0fb0b4e on September 22, 2024 at 11:50
slaren approved these changes Sep 22, 2024
dsx1986 pushed a commit to dsx1986/llama.cpp that referenced this pull request on Oct 29, 2024:
…e Flash Attention on QY1 (MTT S80) (ggerganov#9526)
* mtgpu: add mp_21 support
* mtgpu: disable flash attention on qy1 (MTT S80); disable q3_k and mul_mat_batched_cublas
* mtgpu: enable unified memory
* mtgpu: map cublasOperation_t to mublasOperation_t (sync code to latest)
Each commit Signed-off-by: Xiaodong Ye <[email protected]>
arthw pushed a commit to arthw/llama.cpp that referenced this pull request on Nov 15, 2024 (same commit message as above)
arthw pushed a commit to arthw/llama.cpp that referenced this pull request on Nov 18, 2024 (same commit message as above)
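One of the cherry-picked commits maps cuBLAS identifiers onto their muBLAS equivalents so the shared GPU code path builds under MUSA. A minimal sketch of that idea, assuming the MUSA SDK exposes `mublasOperation_t` and `MUBLAS_OP_*` (the header and macro names here are assumptions, not the PR's exact diff):

```cpp
// Hedged sketch: alias cuBLAS enums to their muBLAS counterparts so code
// written against cuBLAS compiles unchanged in the MUSA build.
// Header and macro names are assumptions, not the PR's literal diff.
#include <mublas.h>  // muBLAS header from the MUSA SDK (assumed name)

#define cublasOperation_t mublasOperation_t
#define CUBLAS_OP_N       MUBLAS_OP_N  // no transpose
#define CUBLAS_OP_T       MUBLAS_OP_T  // transpose
#define CUBLAS_OP_C       MUBLAS_OP_C  // conjugate transpose
```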
This PR enables building fat binaries for both QY1 (mp_21, MTT S80) and QY2 (mp_22, MTT S4000), and enables unified memory (#8035). However, Flash Attention is explicitly disabled on QY1 due to a known issue when compiling it for that architecture.
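A minimal sketch of the QY1 guard described above, assuming a compile-time check on the MUSA architecture macro; `FLASH_ATTN_AVAILABLE` and the `__MUSA_ARCH__` threshold follow ggml's CUDA backend conventions and should be read as assumptions rather than the PR's exact diff:

```cpp
// Sketch: compile Flash Attention everywhere except QY1 (mp_21, i.e. arch 210),
// where a known compiler issue breaks the kernels. Assumed macro names.
#if !(defined(GGML_USE_MUSA) && defined(__MUSA_ARCH__) && __MUSA_ARCH__ <= 210)
#define FLASH_ATTN_AVAILABLE
#endif

// Call sites can then fall back to the regular attention path on QY1:
#ifdef FLASH_ATTN_AVAILABLE
//  ... launch Flash Attention kernels ...
#else
//  ... use the non-fused attention path ...
#endif
```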
Testing done
* `make GGML_MUSA=1` -> passed
* ran the tinyllama model on MTT S80 and MTT S4000 (with and without `GGML_CUDA_ENABLE_UNIFIED_MEMORY=1`) -> passed
* `./tests/test-backend-ops` -> passed (full log)
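The unified-memory run above relies on the `GGML_CUDA_ENABLE_UNIFIED_MEMORY=1` environment variable introduced in #8035. A rough sketch of how such an allocator switch can work, mirroring ggml's CUDA allocator (the function name and details are assumptions), with the CUDA runtime calls translated to MUSA when building with `GGML_MUSA=1`:

```cpp
#include <cstdlib>
#include <cuda_runtime.h>  // aliased to the MUSA runtime in the MUSA build

// Allocate a device buffer, switching to managed (unified) memory when the
// GGML_CUDA_ENABLE_UNIFIED_MEMORY environment variable is set.
static cudaError_t device_buffer_alloc(void ** ptr, size_t size) {
    if (std::getenv("GGML_CUDA_ENABLE_UNIFIED_MEMORY") != nullptr) {
        return cudaMallocManaged(ptr, size);  // pageable between host and device
    }
    return cudaMalloc(ptr, size);             // plain device memory
}
```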