Fix continue_final_message for image-text-to-text chat templates #34236
Conversation
LGTM, thanks! Maybe also cc @Rocketknight1
Yes, LGTM too!
Cool! Can we have a small test please? 🤗
Added one test for the llava processor :). I could add one for every VLM processor that uses a chat template, but as they all use the same underlying …
Thanks 😉
(huggingface#34236)
* fix continue_final_message for vlms
* Add one test for vlms continue_final_message chat template
What does this PR do?
The `content` field for an image-text-to-text model is a list, which is not currently taken into account when `continue_final_message` is set to `True` in `tokenization_utils_base`. Split from the image-text-to-text PR.
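For context, a minimal illustration of the two message formats (a sketch, not transformers code): text-only chats store `content` as a plain string, while image-text-to-text chats store it as a list of typed chunks, so string operations on the final message's content fail.

```python
# Text-only chat message: "content" is a plain string.
text_chat = [{"role": "assistant", "content": "The image shows a cat"}]

# Image-text-to-text chat message: "content" is a list of typed chunks.
vlm_chat = [
    {"role": "user", "content": [{"type": "image"},
                                 {"type": "text", "text": "Describe this"}]},
    {"role": "assistant",
     "content": [{"type": "text", "text": "The image shows a cat"}]},
]

print(type(text_chat[-1]["content"]).__name__)  # str
print(type(vlm_chat[-1]["content"]).__name__)   # list
```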
Reproduce error:
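The reproduction snippet did not survive extraction. As a self-contained analogy (not the actual transformers code), the sketch below shows why trimming the rendered prompt fails when the final `content` is a list, and how extracting the text chunk first makes the trim work; the prompt string and chunk extraction are illustrative assumptions.

```python
# Rendered prompt for a llava-style chat (illustrative string only).
rendered = "USER: <image>\nDescribe this ASSISTANT: The image shows a cat</s>"

# Final message content in the image-text-to-text format: a list, not a str.
final_content = [{"type": "text", "text": "The image shows a cat"}]

# Failure mode, by analogy: treating the list content as a string fails.
try:
    rendered.rindex(final_content)  # type: ignore[arg-type]
except TypeError:
    print("string search on a list raises TypeError")

# Fix idea: pull the text out of the last text-typed chunk first, then trim
# everything after it so generation continues the final message.
text = next(c["text"] for c in reversed(final_content) if c.get("type") == "text")
trimmed = rendered[: rendered.rindex(text) + len(text)]
print(trimmed)  # prompt now ends mid-message, with no trailing "</s>"
```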
@zucchini-nlp @ArthurZucker