Tokenizer does not split text according to newly added input tokens #35447
jiongjiongli added a commit to jiongjiongli/transformers that referenced this issue on Dec 29, 2024: "…g to newly added input tokens. The root reason is that the Trie.split method didn't ignore a partial match that should be removed."
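The root-cause note can be illustrated with a toy trie. This is a minimal sketch, not the actual transformers Trie: during a split, a run of characters that matches a prefix of a token but never reaches an end-of-token marker is a partial match and must be discarded, falling back to the last completed match (or to advancing one character).

```python
# Toy sketch (NOT the transformers implementation): split text on a set of
# added tokens stored in a trie, discarding partial matches that never complete.

class Trie:
    def __init__(self, tokens):
        self.root = {}
        for tok in tokens:
            node = self.root
            for ch in tok:
                node = node.setdefault(ch, {})
            node[""] = True  # end-of-token marker

    def split(self, text):
        cuts = [0]  # boundaries between pieces
        i = 0
        while i < len(text):
            node, j = self.root, i
            end = None  # position after the last COMPLETED match
            while j < len(text) and text[j] in node:
                node = node[text[j]]
                j += 1
                if "" in node:
                    end = j  # a full token ends here
            if end is not None:
                cuts += [i, end]
                i = end
            else:
                i += 1  # partial match only: ignore it, advance one char
        cuts.append(len(text))
        return [text[a:b] for a, b in zip(cuts, cuts[1:]) if a < b]

print(Trie(["e"]).split("read"))    # -> ['r', 'e', 'ad']
print(Trie(["eat"]).split("read"))  # -> ['read'] (partial match discarded)
```

The key design point is tracking `end` separately from `j`: the walk may continue past a completed token while probing for a longer one, but if the longer probe fails, the split falls back to `end` rather than treating the dead-end prefix as a match.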
jiongjiongli changed the title from "LlamaTokenizer does not split text according to newly added input tokens" to "Tokenizer does not split text according to newly added input tokens" on Dec 29, 2024.
jiongjiongli added further commits to jiongjiongli/transformers that referenced this issue (Dec 29 and 31, 2024, and Jan 1, 2025): "…newly added input tokens. The root reason is that the Trie.split method didn't ignore a partial match that should be removed. Add test case to token split."
System Info
transformers version: 4.47.1

Who can help?
@ArthurZucker and @itazap
Information
Tasks
examples folder (such as GLUE/SQuAD, ...)

Reproduction
Repro Steps:
Repro Code:
Expected behavior
The tokenizer should split "read" into ['r', 'e', 'ad'] since "e" is now a token.
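The expected behavior can be demonstrated without transformers installed. The following is a hedged, self-contained sketch of the splitting the reporter expects: `split_on_added_tokens` is a hypothetical helper (not a transformers API) that cuts the input on every occurrence of an added token, preferring the longest match.

```python
# Hypothetical helper (not part of transformers): split `text` so that every
# occurrence of an added token becomes its own piece, longest match first.

def split_on_added_tokens(text, added_tokens):
    tokens = sorted(added_tokens, key=len, reverse=True)  # longest-match-first
    pieces, start, i = [], 0, 0
    while i < len(text):
        for tok in tokens:
            if text.startswith(tok, i):
                if i > start:
                    pieces.append(text[start:i])  # flush text before the match
                pieces.append(tok)
                i += len(tok)
                start = i
                break
        else:
            i += 1  # no added token starts here
    if start < len(text):
        pieces.append(text[start:])  # trailing remainder
    return pieces

print(split_on_added_tokens("read", {"e"}))  # -> ['r', 'e', 'ad']
```

With `"e"` registered as an added token, `"read"` splits into `['r', 'e', 'ad']`, matching the expected behavior above; the bug reported here is that the real tokenizer fails to produce this split after `add_tokens`.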