
Llama-3.2 offset-mapping needs fixing #1688

Open
kyrawilson opened this issue Nov 26, 2024 · 1 comment

Comments

kyrawilson commented Nov 26, 2024

This is very similar to the issues reported in #1553 and #1517, but for the newest Llama models the offset mappings are also incorrect.

The expected output (using Gemma-2-2b-Instruct as an example) should look similar to this:

>>> from transformers import AutoTokenizer
>>> tok_gemma = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
>>> tok_gemma(['This is an example.'], return_offsets_mapping=True)
{'input_ids': [[2, 1596, 603, 671, 3287, 235265]], 'attention_mask': [[1, 1, 1, 1, 1, 1]], 'offset_mapping': [[(0, 0), (0, 4), (4, 7), (7, 10), (10, 18), (18, 19)]]} 
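
For reference, each offset is a character span into the original string, so slicing the input with the offsets should recover every token's surface text. A quick check using the Gemma offsets above (the leading (0, 0) corresponds to the BOS token):

>>> text = 'This is an example.'
>>> [text[s:e] for s, e in tok_gemma([text], return_offsets_mapping=True)['offset_mapping'][0]]
['', 'This', ' is', ' an', ' example', '.']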

The current output looks like this (key differences are in 'offset_mapping'):

>>> tok_llama = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B-Instruct") 
>>> tok_llama(['This is an example.'], return_offsets_mapping=True)
{'input_ids': [[128000, 2028, 374, 459, 3187, 13]], 'attention_mask': [[1, 1, 1, 1, 1, 1]], 'offset_mapping': [[(0, 0), (0, 0), (4, 4), (7, 7), (10, 10), (18, 18)]]}
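
With the Llama-3.2 offsets above, every span after the first has start == end, so the same check recovers nothing but empty strings:

>>> text = 'This is an example.'
>>> [text[s:e] for s, e in tok_llama([text], return_offsets_mapping=True)['offset_mapping'][0]]
['', '', '', '', '', '']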

ArthurZucker (Collaborator) commented

Hey! Sorry, but I cannot reproduce; here is what I get:

 {'input_ids': [[128000, 2028, 374, 459, 3187, 13]], 'attention_mask': [[1, 1, 1, 1, 1, 1]], 'offset_mapping': [[(0, 0), (0, 4), (4, 7), (7, 10), (10, 18), (18, 19)]]}
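
Since the two environments disagree, it may be worth comparing the installed transformers and tokenizers versions on both sides. A generic sanity check (not something suggested in the thread itself):

>>> import transformers, tokenizers
>>> transformers.__version__, tokenizers.__version__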
