Clearer error for SDPA when explicitly requested (huggingface#28006)
* clearer error for sdpa

* better message
fxmarty authored and AjayP13 committed Jan 22, 2024
1 parent 3dcd758 commit fe33b10
Showing 1 changed file with 3 additions and 2 deletions.
5 changes: 3 additions & 2 deletions src/transformers/modeling_utils.py
@@ -1540,8 +1540,9 @@ def _check_and_enable_sdpa(cls, config, hard_check_only: bool = False) -> Pretra
         if hard_check_only:
             if not cls._supports_sdpa:
                 raise ValueError(
-                    f"{cls.__name__} does not support an attention implementation through torch.nn.functional.scaled_dot_product_attention yet. Please open an issue on GitHub to "
-                    "request support for this architecture: https://github.com/huggingface/transformers/issues/new"
+                    f"{cls.__name__} does not support an attention implementation through torch.nn.functional.scaled_dot_product_attention yet."
+                    " Please request the support for this architecture: https://github.com/huggingface/transformers/issues/28005. If you believe"
+                    ' this error is a bug, please open an issue in Transformers GitHub repository and load your model with the argument `attn_implementation="eager"` meanwhile. Example: `model = AutoModel.from_pretrained("openai/whisper-tiny", attn_implementation="eager")`'
                 )
         if not is_torch_sdpa_available():
             raise ImportError(
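The change above only sharpens the message raised when SDPA is explicitly requested on a model class that does not support it. A minimal, self-contained sketch of that hard-check pattern (using a hypothetical `FakeModel` class and a free function, not the actual `transformers` code, which implements this as a classmethod on `PreTrainedModel`):

```python
# Sketch of the gating logic this commit touches, under the assumption that a
# model class advertises SDPA support via a `_supports_sdpa` class attribute.

class FakeModel:
    """Hypothetical model class that does NOT support SDPA."""
    _supports_sdpa = False


def check_and_enable_sdpa(cls, hard_check_only: bool = False):
    # When the user explicitly asked for SDPA (hard check), an unsupported
    # class is a hard error with an actionable message; otherwise the caller
    # can silently fall back to another attention implementation.
    if hard_check_only and not cls._supports_sdpa:
        raise ValueError(
            f"{cls.__name__} does not support an attention implementation through "
            "torch.nn.functional.scaled_dot_product_attention yet. "
            'Load the model with attn_implementation="eager" meanwhile.'
        )
    return cls


try:
    check_and_enable_sdpa(FakeModel, hard_check_only=True)
except ValueError as err:
    print(f"caught: {err}")
```

The design point of the commit is exactly this split: the `hard_check_only` path is reached only when the user passed `attn_implementation="sdpa"` themselves, so the error can tell them how to work around it (`attn_implementation="eager"`) rather than only pointing at a generic issue tracker.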
