SnapKV_Cache support added #34710

Open · wants to merge 5 commits into base: main
Conversation

shirinyamani

What does this PR do?

This PR adds an implementation of the SnapKV cache (from the SnapKV paper) to the cache_utils.py file. It also modifies the FlashAttention-2 path in the Llama modeling code to reflect the initialization of the SnapKV cache.
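For readers, a minimal sketch (not the code in this PR) of what a SnapKV-style cache built on the existing Cache API could look like, assuming DynamicCache exposes per-layer key_cache/value_cache lists and the usual update(key_states, value_states, layer_idx, cache_kwargs) signature; window_size, max_capacity, and the key–key scoring proxy are illustrative choices (the paper scores with the observation window's query states, which update() does not receive):

```python
import torch
from transformers.cache_utils import DynamicCache


class SnapKVCache(DynamicCache):
    """Keeps the most recent `window_size` tokens plus the prefix positions that the
    observation window attends to most, up to `max_capacity` entries per layer."""

    def __init__(self, window_size: int = 32, max_capacity: int = 1024):
        super().__init__()
        self.window_size = window_size
        self.max_capacity = max_capacity

    def _compress(self, keys, values):
        # keys/values: [batch, num_kv_heads, seq_len, head_dim]
        seq_len = keys.shape[-2]
        if seq_len <= self.max_capacity:
            return keys, values
        window_k = keys[..., -self.window_size :, :]
        prefix_k = keys[..., : -self.window_size, :]
        prefix_v = values[..., : -self.window_size, :]
        # Score each prefix position by how strongly the observation window attends to it.
        # (A key-key similarity proxy is used here because the queries are not available.)
        scores = window_k @ prefix_k.transpose(-1, -2) / (prefix_k.shape[-1] ** 0.5)
        scores = scores.softmax(dim=-1).sum(dim=-2)  # [batch, heads, prefix_len]
        keep = self.max_capacity - self.window_size
        top_idx = scores.topk(keep, dim=-1).indices.sort(dim=-1).values
        gather_idx = top_idx.unsqueeze(-1).expand(-1, -1, -1, keys.shape[-1])
        new_k = torch.cat([prefix_k.gather(-2, gather_idx), window_k], dim=-2)
        new_v = torch.cat([prefix_v.gather(-2, gather_idx), values[..., -self.window_size :, :]], dim=-2)
        return new_k, new_v

    def update(self, key_states, value_states, layer_idx, cache_kwargs=None):
        keys, values = super().update(key_states, value_states, layer_idx, cache_kwargs)
        # Compress only at prefill, i.e. when more than one token is written at once;
        # tokens generated during decoding are simply appended.
        if key_states.shape[-2] > 1:
            keys, values = self._compress(keys, values)
            self.key_cache[layer_idx] = keys
            self.value_cache[layer_idx] = values
        return keys, values
```

Note that pruning entries also changes what get_seq_length() reports, so position tracking during decoding would need extra care in a real implementation.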

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a Github issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@LysandreJik (Member)

Pinging the cache gang @zucchini-nlp @ArthurZucker @gante

@ArthurZucker (Collaborator) left a comment

Hey! Super interested in having new KV cache implementations, but we need to make them compatible with the current API! The KV length is contained in the cache_positions, which is what should be used! No modeling changes should be required!

@shirinyamani (Author)

@ArthurZucker Thanks a lot for your response! So do you think that if I remove the modeling changes, the SnapKV class is fine to be added to cache_utils.py? I couldn't find the cache_positions you are referring to in the files (cache_utils and the Llama modeling file)!

@ArthurZucker (Collaborator)

Yes!

cache_kwargs = {"sin": sin, "cos": cos, "cache_position": cache_position}

Cache positions are here. Overall, we should not need modeling changes!
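For illustration only, a minimal sketch (not code from this PR) of how a custom cache could pick up the KV length from the cache_position that the attention layers already pass in via cache_kwargs, without any modeling changes; the class name and the kv_length attribute are made up here:

```python
from transformers.cache_utils import DynamicCache


class LengthAwareCache(DynamicCache):
    def update(self, key_states, value_states, layer_idx, cache_kwargs=None):
        cache_position = (cache_kwargs or {}).get("cache_position")
        if cache_position is not None:
            # cache_position holds the absolute indices of the incoming tokens, so the
            # total KV length after this update is its last entry + 1. A SnapKV-style
            # cache could use this to decide when to run its prefill-time compression.
            self.kv_length = int(cache_position[-1]) + 1
        return super().update(key_states, value_states, layer_idx, cache_kwargs)
```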

@ArthurZucker (Collaborator)

But with #35235 you can maybe find a better way to support this by creating a new attn_implementation and adding it to the mapping? 🤗
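A hedged sketch of that suggestion, assuming the refactor in #35235 exposes a name-to-function mapping (ALL_ATTENTION_FUNCTIONS) that the attn_implementation argument can resolve and an eager_attention_forward helper in modeling_llama, as in recent transformers releases; the wrapper name, the past_key_value kwarg, and the observe() hook are hypothetical:

```python
from transformers.modeling_utils import ALL_ATTENTION_FUNCTIONS
from transformers.models.llama.modeling_llama import eager_attention_forward


def snapkv_attention_forward(module, query, key, value, attention_mask, **kwargs):
    # Run the stock eager attention, then hand the attention weights to a SnapKV-style
    # cache (via a hypothetical observe() hook) so it can score and prune KV entries
    # without any changes to the modeling files.
    attn_output, attn_weights = eager_attention_forward(
        module, query, key, value, attention_mask, **kwargs
    )
    cache = kwargs.get("past_key_value")  # hypothetical: depends on what kwargs carry
    if cache is not None and hasattr(cache, "observe"):
        cache.observe(attn_weights, module.layer_idx)
    return attn_output, attn_weights


ALL_ATTENTION_FUNCTIONS["snapkv"] = snapkv_attention_forward
# model = AutoModelForCausalLM.from_pretrained(checkpoint, attn_implementation="snapkv")
```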
