Whisper Beam Search doesn't work #33445
Comments
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed, please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.
Hey @Nik-Kras, indeed that's an important issue to solve! We are working on multiple fixes to the Transformers Whisper integration (see #34135, #34111 and #33512). This issue requires a bug on the […]
System Info

Who can help?
@ylacombe @eustlb

Information

Tasks
An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)

Reproduction
`sequences_scores` in the Whisper beam search output #32970 (it allows to output `sequence_score`)

Expected behavior
Together with @ylacombe, we identified that after Pull Request #30984, Whisper Beam Search generation doesn't work as intended. See the more detailed discussion in Pull Request #32970.
The code above must return 5 unique hypotheses, due to the core principle of Beam Search: selecting the `num_beams` best tokens in a top-k fashion. Instead, we are getting the single highest-probability result repeated. See below for how Beam Search used to work in version v4.25.1 and how it works now.

transformers v4.25.1

transformers v4.44.1 + My Fix from #32970
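To make the expected behavior concrete, here is a toy, self-contained beam search, not the Transformers implementation; `toy_model` and its probabilities are invented purely for illustration. It shows that keeping the `num_beams` best expansions at every step yields `num_beams` distinct hypotheses by construction:

```python
import heapq
import math

def beam_search(next_logprobs, num_beams, max_len):
    """Toy beam search: at each step, expand every beam with every token
    and keep the num_beams highest-scoring candidates overall."""
    beams = [((), 0.0)]  # (sequence, cumulative log-probability)
    for _ in range(max_len):
        candidates = [
            (seq + (tok,), score + lp)
            for seq, score in beams
            for tok, lp in next_logprobs(seq).items()
        ]
        # Top-k selection over all expansions -- the core principle
        # referred to above: keep the num_beams best continuations.
        beams = heapq.nlargest(num_beams, candidates, key=lambda c: c[1])
    return beams

def toy_model(seq):
    # Invented, context-free token distribution, for illustration only.
    return {"a": math.log(0.5), "b": math.log(0.3), "c": math.log(0.2)}

hyps = beam_search(toy_model, num_beams=5, max_len=3)
sequences = [seq for seq, _ in hyps]
# Five distinct hypotheses, not five copies of the most probable one.
assert len(set(sequences)) == 5
```

With `num_beams=5`, the five returned sequences are all different, which is exactly what the reproduction above expects from Whisper and no longer gets.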
@ylacombe has found the bug in the `_expand_variables_for_generation` function. The function artificially expands the batch size to `num_return_sequences`, which causes an issue when this expanded batch size is passed to `GenerationMixin.generate`. Specifically, if `batch_size=5` and `num_return_sequences > 1`, the model generates `batch_size * num_beams` beams but retains only the most probable beam for each element of the original batch.

Impact
This bug makes the `num_return_sequences` parameter unusable for both short-form and long-form generation. Users expecting multiple return sequences will only receive the most probable sequence, which may not meet the intended use case.

cc @eustlb
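The failure mode described above can be sketched as follows. This is a simulation of the reported behavior, not the actual Transformers code; `buggy_generate` and `most_probable_beam` are hypothetical stand-ins for the expand-then-generate path:

```python
def most_probable_beam(features):
    # Deterministic stand-in for "run beam search, keep only the top beam".
    return f"best-hypothesis-for-{features}"

def buggy_generate(batch, num_beams, num_return_sequences):
    # Mimics the reported expansion: every input is repeated
    # num_return_sequences times up front...
    expanded = [x for x in batch for _ in range(num_return_sequences)]
    # ...but each expanded copy then keeps only its single most probable
    # beam, so num_beams has no effect on what is returned.
    return [most_probable_beam(x) for x in expanded]

out = buggy_generate(["utt0"], num_beams=5, num_return_sequences=5)
# Five identical sequences instead of the five distinct hypotheses
# a user would expect from num_return_sequences=5.
assert len(out) == 5
assert len(set(out)) == 1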