improve bart
patrickvonplaten committed Oct 10, 2023
1 parent 045f183 · commit 5a82297
Showing 1 changed file with 0 additions and 1 deletion.
src/transformers/models/bart/modeling_bart.py (0 additions, 1 deletion)
@@ -420,7 +420,6 @@ def forward(
 
         return attn_output, attn_weights, past_key_value
 
-    # Copied from transformers.models.llama.modeling_llama.LlamaFlashAttention2._flash_attention_forward
     def _flash_attention_forward(
         self, query_states, key_states, value_states, padding_mask, query_length, causal=True, dropout=0.0, softmax_scale=None
     ):
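
For context: in transformers, a `# Copied from ...` marker tells `utils/check_copies.py` (run through `make fix-copies`) to keep the annotated function in sync with its source, so dropping the marker lets BART's `_flash_attention_forward` diverge from the Llama version. The code below is a hypothetical, simplified sketch of what a helper with this signature typically does, assuming the flash-attn 2 package is installed; it skips the padded (`padding_mask is not None`) path, which the real helper handles separately with flash-attn's variable-length kernels.

# Hypothetical sketch, not the transformers implementation.
# Assumes flash-attn 2 and query/key/value tensors already laid out as
# (batch, seq_len, num_heads, head_dim) with no padding in the batch.
from flash_attn import flash_attn_func


def _flash_attention_forward(
    query_states, key_states, value_states, padding_mask, query_length, causal=True, dropout=0.0, softmax_scale=None
):
    if padding_mask is not None:
        # The real helper un-pads the batch (using query_length) and dispatches
        # to flash-attn's variable-length kernel; that path is omitted here.
        raise NotImplementedError("padding path omitted in this sketch")
    # Fused attention: softmax(Q K^T * scale) V computed without materializing
    # the full attention matrix.
    return flash_attn_func(
        query_states,
        key_states,
        value_states,
        dropout_p=dropout,
        softmax_scale=softmax_scale,
        causal=causal,
    )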
