Cache: dynamic cache with cross attention and UMT5 Cache support #28185

Conversation
@@ -240,41 +248,71 @@ def compute_bias(self, query_length, key_length, device=None):
        values = values.permute([2, 0, 1]).unsqueeze(0)  # shape (1, num_heads, query_length, key_length)
        return values

    def _prepare_key_values(
This abstraction does not look particularly useful here on its own. However, for models with multiple attention implementations it pays off: all attention implementations can share it! (e.g. in Bart the benefits are clear)
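The diff above only shows the signature of the new helper. As a rough idea of what a shared key/value preparation step in an encoder-decoder attention layer has to handle, here is a minimal standalone sketch; the function name, argument layout and tensor shapes are assumptions for illustration, not the PR's actual code:

```python
from typing import Optional, Tuple

import torch


def prepare_key_values(
    key_states: torch.Tensor,
    value_states: torch.Tensor,
    past_key: Optional[torch.Tensor],
    past_value: Optional[torch.Tensor],
    is_cross_attention: bool,
) -> Tuple[torch.Tensor, torch.Tensor]:
    """Merge freshly projected key/value states with cached ones.

    Tensors are assumed to be shaped (batch, num_heads, seq_len, head_dim).
    """
    if past_key is not None and past_value is not None:
        if is_cross_attention:
            # Cross-attention keys/values come from the encoder output and never
            # change during decoding, so the cached ones are simply reused.
            return past_key, past_value
        # Decoder self-attention: append the new step to the cache.
        key_states = torch.cat([past_key, key_states], dim=2)
        value_states = torch.cat([past_value, value_states], dim=2)
    return key_states, value_states


# Example: one new decoding step appended to a 5-token self-attention cache.
new_k, new_v = torch.rand(1, 8, 1, 64), torch.rand(1, 8, 1, 64)
cached_k, cached_v = torch.rand(1, 8, 5, 64), torch.rand(1, 8, 5, 64)
k, v = prepare_key_values(new_k, new_v, cached_k, cached_v, is_cross_attention=False)
assert k.shape == (1, 8, 6, 64)
```

Sharing one such helper across eager, SDPA, and flash-attention code paths is exactly the kind of reuse the comment alludes to for Bart-like models.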
@@ -481,6 +501,7 @@ class UMT5PreTrainedModel(PreTrainedModel):
    supports_gradient_checkpointing = True
    _no_split_modules = ["UMT5Block"]
    _keep_in_fp32_modules = ["wo"]
    _supports_cache_class = True
This enables the `test_new_cache_format` test -> converting back and forth between the new cache format and the legacy cache with cross attention is now tested.
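For context, the legacy `past_key_values` format of encoder-decoder models stores one 4-tuple per decoder layer (self-attention key/value followed by cross-attention key/value). A toy round-trip like the one below captures what the new-cache-format test checks; the class and method names here are illustrative, not the actual `Cache` API:

```python
from typing import List, Tuple

import torch


class ToyCrossAttentionCache:
    """Illustrative stand-in for a dynamic cache that also stores cross-attention states."""

    def __init__(self) -> None:
        self.self_attention: List[Tuple[torch.Tensor, torch.Tensor]] = []
        self.cross_attention: List[Tuple[torch.Tensor, torch.Tensor]] = []

    @classmethod
    def from_legacy_cache(cls, legacy) -> "ToyCrossAttentionCache":
        # Legacy format: (self_k, self_v, cross_k, cross_v) per decoder layer.
        cache = cls()
        for self_k, self_v, cross_k, cross_v in legacy:
            cache.self_attention.append((self_k, self_v))
            cache.cross_attention.append((cross_k, cross_v))
        return cache

    def to_legacy_cache(self):
        return tuple(
            (sk, sv, ck, cv)
            for (sk, sv), (ck, cv) in zip(self.self_attention, self.cross_attention)
        )


# Round-trip check, mirroring the kind of equivalence the test asserts.
legacy = tuple(
    (torch.rand(1, 8, 3, 64), torch.rand(1, 8, 3, 64), torch.rand(1, 8, 12, 64), torch.rand(1, 8, 12, 64))
    for _ in range(2)
)
roundtrip = ToyCrossAttentionCache.from_legacy_cache(legacy).to_legacy_cache()
assert all(
    torch.equal(a, b)
    for layer_a, layer_b in zip(roundtrip, legacy)
    for a, b in zip(layer_a, layer_b)
)
```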
@@ -560,6 +560,27 @@ def test_training_gradient_checkpointing_use_reentrant_false(self):
@require_sentencepiece
@require_tokenizers
class Umt5IntegrationTest(unittest.TestCase):
    def test_generation(self):
Ensures there is no regression from `main`. I've double-checked that we get the same values in `main`. I've also checked the results with and without cache, in both `main` and this PR.
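The kind of check described here can be reproduced with the public API along these lines (a sketch only; the checkpoint name and prompt are placeholders, not necessarily what the integration test uses):

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Placeholder checkpoint for illustration.
model_name = "google/umt5-small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

inputs = tokenizer("A <extra_id_0> walks into a bar.", return_tensors="pt")

with torch.no_grad():
    with_cache = model.generate(**inputs, max_new_tokens=20, do_sample=False, use_cache=True)
    without_cache = model.generate(**inputs, max_new_tokens=20, do_sample=False, use_cache=False)

# Greedy decoding should produce identical tokens regardless of caching.
assert torch.equal(with_cache, without_cache)
```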
            # If we are about to go beyond the maximum cache length, we need to crop the input attention mask.
            if (
                max_cache_length is not None
                and decoder_attention_mask is not None
                and cache_length + decoder_input_ids.shape[1] > max_cache_length
            ):
                decoder_attention_mask = decoder_attention_mask[:, -max_cache_length:]
Logic copied from Llama + sink cache -> this makes the model ready for caches like the sink cache.
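For reference, the `cache_length` and `max_cache_length` values used in the crop above come from the cache object earlier in `prepare_inputs_for_generation`. A condensed sketch of that Llama-era logic follows; the method names follow the `Cache` API at the time of the PR and may have changed since:

```python
def get_cache_lengths(past_key_values):
    """Return (cache_length, max_cache_length) for either cache format."""
    if hasattr(past_key_values, "get_seq_length"):
        # New Cache objects: bounded caches (e.g. a sink cache) report a maximum length.
        cache_length = past_key_values.get_seq_length()
        max_cache_length = past_key_values.get_max_length()
    else:
        # Legacy tuple cache: length is the key tensor's sequence dimension; no upper bound.
        cache_length = past_key_values[0][0].shape[2]
        max_cache_length = None
    return cache_length, max_cache_length
```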
#27931 will shake things up 👿
What does this PR do?

#28065 was becoming messy due to all the Bart "copied from" dependencies, so this PR is a tiny version of it.

This PR:
1. Adds `DynamicCacheWithCrossAttention`, which expands `DynamicCache` [the cache object equivalent to the previous `past_key_values` input/output] with the ability to hold a cross-attention cache. This design was intentional: most LLMs (and now even multimodal models) tend to be decoder-only, so this separation will keep the cache class for decoder-only models simpler. It also enables us to be more strict -- in "Cache: Bart and related architectures support Cache objects" (#28065) I've caught an unintended cache deletion in Whisper thanks to the increased specificity! A minimal sketch of the idea is shown after this list.
2. Adds `Cache` support to `modeling_umt5.py`, which serves as a way to test whether `DynamicCacheWithCrossAttention` is equivalent to the previous cache. These changes are the equivalent of the modeling changes in "Generate: New `Cache` abstraction and Attention Sinks support" (#26681), but for encoder-decoder models.
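To illustrate the separation described in item 1, here is a minimal sketch of a dynamic cache extended with cross-attention storage; attribute and method names are illustrative and not necessarily the PR's exact API:

```python
from typing import List

import torch


class DynamicCacheSketch:
    """Grows a per-layer self-attention key/value cache one decoding step at a time."""

    def __init__(self) -> None:
        self.key_cache: List[torch.Tensor] = []
        self.value_cache: List[torch.Tensor] = []

    def update(self, key_states: torch.Tensor, value_states: torch.Tensor, layer_idx: int):
        if layer_idx == len(self.key_cache):
            # First step for this layer: start the cache.
            self.key_cache.append(key_states)
            self.value_cache.append(value_states)
        else:
            # Later steps: concatenate along the sequence dimension.
            self.key_cache[layer_idx] = torch.cat([self.key_cache[layer_idx], key_states], dim=-2)
            self.value_cache[layer_idx] = torch.cat([self.value_cache[layer_idx], value_states], dim=-2)
        return self.key_cache[layer_idx], self.value_cache[layer_idx]


class DynamicCacheWithCrossAttentionSketch(DynamicCacheSketch):
    """Adds a cross-attention cache that is written once (from the encoder output) and then reused."""

    def __init__(self) -> None:
        super().__init__()
        self.cross_attention_key_cache: List[torch.Tensor] = []
        self.cross_attention_value_cache: List[torch.Tensor] = []

    def update_cross_attention(self, key_states: torch.Tensor, value_states: torch.Tensor, layer_idx: int):
        # Cross-attention keys/values depend only on the encoder output, so they are stored once per layer.
        if layer_idx == len(self.cross_attention_key_cache):
            self.cross_attention_key_cache.append(key_states)
            self.cross_attention_value_cache.append(value_states)
        return self.cross_attention_key_cache[layer_idx], self.cross_attention_value_cache[layer_idx]
```

Keeping the cross-attention storage in a subclass leaves the decoder-only cache untouched, which is the stricter separation the description argues for.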
Local tests run:
- `RUN_SLOW=1 py.test tests/models/umt5/test_modeling_umt5.py -vv` [Note: adds a test to ensure we keep the same results as in `main`]