apply_chat_template method not working correctly for llama 3 tokenizer #33091
Comments
cc @Rocketknight1, our chat template expert
cc @yonigottesman, who wrote the original PR at #30650, do you have any idea what the issue here could be? If you don't have time to investigate this right now, let me know and I'll take over.
Related to this and #1620.
Got it - I wouldn't make a workaround in the template itself, because you'll need to remove the workaround again once the underlying issue is fixed.
For anyone struggling with the same issue at the moment, I created a temporary workaround for my use case.
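The snippet itself is not included in the thread, but a hypothetical workaround along these lines would build the mask from the tokenizer's offset mapping instead of `char_to_token`; `mask_from_offsets` and its arguments are illustrative names, not from the original comment, and it assumes the character spans of the assistant turns in the rendered chat string are already known.

```python
def mask_from_offsets(offsets, assistant_spans):
    """Return a 0/1 mask per token: 1 if the token's character span
    falls entirely inside one of the assistant character spans."""
    mask = []
    for start, end in offsets:
        inside = any(s <= start and end <= e for s, e in assistant_spans)
        mask.append(1 if inside else 0)
    return mask

# Example: 5 tokens with known offsets, assistant span covering chars 6-20.
print(mask_from_offsets(
    [(0, 3), (3, 6), (6, 11), (11, 16), (16, 20)],
    [(6, 20)],
))  # -> [0, 0, 1, 1, 1]
```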
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed, please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.
Closing as the issue was fixed by #1640!
Hi @ArthurZucker, could you point to the commit/PR in which it was fixed?
Oops, sorry, it's a fix in tokenizers.
Updated huggingface/tokenizers#1640 |
System Info
transformers version: 4.44.1

Who can help?
@ArthurZucker
I noticed that apply_chat_template on the PreTrainedTokenizerBase class does not work correctly when return_assistant_tokens_mask=True. We would expect to get back, for each example, a mask with 1 for every token that is part of an assistant message and 0 otherwise. This works for the Llama 2 tokenizer, for example. I am sharing a minimal example to reproduce this issue.
Looking deeper into the apply_chat_template method, the issue seems to be related to the char_to_token method of the tokenizers.Encoding class, and could stem from the fact that the Llama 3 tokenizer was trained with tiktoken as opposed to sentencepiece.
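For context, a minimal sketch of the char_to_token lookup that the masking code relies on, assuming a fast (Rust-backed) tokenizer; the checkpoint name is only an example. char_to_token maps a character position in the input string to the index of the token covering it, and the bug manifests as this lookup failing for positions inside the assistant turns.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
encoding = tokenizer("Hello, how can I help?")
# Which token covers the character at string index 7?
print(encoding.char_to_token(7))
```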
Reproduction
Executing the steps that apply_chat_template uses to build the assistant mask shows that the char_to_token method of the tokenizers.Encoding class is not working correctly here.
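A minimal sketch of such a reproduction, assuming access to the meta-llama/Meta-Llama-3-8B-Instruct checkpoint; the stripped-down template below is a stand-in that only wraps assistant turns in the {% generation %} tags that return_assistant_tokens_mask requires.

```python
from transformers import AutoTokenizer

# Only the assistant turns are wrapped in {% generation %} ...
# {% endgeneration %}, which the masking code keys on.
template = (
    "{% for message in messages %}"
    "{% if message['role'] == 'assistant' %}"
    "{% generation %}{{ message['content'] }}{% endgeneration %}"
    "{% else %}{{ message['content'] }}{% endif %}"
    "{% endfor %}"
)

messages = [
    {"role": "user", "content": "Hi!"},
    {"role": "assistant", "content": "Hello, how can I help?"},
]

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
out = tokenizer.apply_chat_template(
    messages,
    chat_template=template,
    return_assistant_tokens_mask=True,
    return_dict=True,
)
# Expected: 1s over the assistant turn; observed with the Llama 3
# tokenizer: all zeros.
print(out["assistant_masks"])
```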
Expected behavior
If we assume that the tokenized chat is 10 tokens long and the assistant tokens occupy positions 4-6 and 8-9 (1-indexed), the expected output looks like this:
[0, 0, 0, 1, 1, 1, 0, 1, 1, 0]
The actual output for the Llama 3 tokenizer is always all zeros:
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]