build_labels includes masked image tokens? #46

Open
RealNicolasBourbaki opened this issue Jun 22, 2023 · 3 comments

RealNicolasBourbaki commented Jun 22, 2023

Hi Authors,

in these lines, the function build_labels masks all label positions up to the sequence length of the image embeddings. What difference would it make if one just used the caption?

To be more specific, the code currently builds labels whose first part (with the same sequence length as the image) is all set to -100, while the second part holds the actual text label ids. Why do we need all the -100s? Why couldn't we just use the text label ids?

Thanks a lot!
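For readers without the repository open: a minimal pure-Python sketch of what the question describes (function and variable names here are illustrative assumptions, not the repository's actual code):

```python
IGNORE_INDEX = -100  # label value that torch's cross-entropy loss ignores by default

def build_labels_sketch(image_seq_len, caption_ids):
    """Return labels aligned with [image embeddings | caption tokens]:
    one IGNORE_INDEX per image position, then the caption token ids."""
    return [IGNORE_INDEX] * image_seq_len + list(caption_ids)

labels = build_labels_sketch(4, [101, 2023, 2003, 102])
print(labels)  # [-100, -100, -100, -100, 101, 2023, 2003, 102]
```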

@RealNicolasBourbaki RealNicolasBourbaki changed the title build_labels include image tokens? build_labels includes masked image tokens? Jun 22, 2023
Collaborator

CoEich commented Aug 3, 2023

Hi,

positions with index -100 get ignored by default when using torch cross entropy loss. We don't want a loss on these positions because the model is supposed to predict the text.

Best,

Constantin
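Concretely, torch.nn.CrossEntropyLoss has ignore_index=-100 by default, so masked positions contribute nothing to the loss. A minimal pure-Python sketch of that behaviour (this re-implements the averaging for illustration; it is not the repository's code):

```python
import math

IGNORE_INDEX = -100

def cross_entropy(logits, labels, ignore_index=IGNORE_INDEX):
    """Mean negative log-likelihood over non-ignored positions,
    mimicking torch's default ignore_index handling."""
    total, count = 0.0, 0
    for row, label in zip(logits, labels):
        if label == ignore_index:
            continue  # image positions carry no loss
        log_norm = math.log(sum(math.exp(v) for v in row))
        total += -(row[label] - log_norm)
        count += 1
    return total / count

logits = [[2.0, 0.0], [0.0, 2.0], [2.0, 0.0]]
labels = [-100, 1, 0]  # first (image) position is masked

# Masking the image prefix gives the same loss as dropping it entirely:
loss_masked = cross_entropy(logits, labels)
loss_text_only = cross_entropy(logits[1:], labels[1:])
print(math.isclose(loss_masked, loss_text_only))  # True
```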

RealNicolasBourbaki (Author) replied:

> positions with index -100 get ignored by default when using torch cross entropy loss. We don't want a loss on these positions because the model is supposed to predict the text.

Yeah I get it. But why mask at all? Why not just cut that part out and only use the text that you are predicting?

Collaborator

CoEich commented Aug 4, 2023

I guess you are right, one could also truncate the logits and labels from the left by the number of image positions.

At least we need some logic to mask the right-padding: since the amount of right-padding varies per batch item, we cannot simply cut that part out without breaking up the batch, which would be awkward. Using the same function to mask the image positions seems like the cleanest solution to me.

In the end it is a matter of taste :-)
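The padding point above can be sketched as follows: each batch item has a different true caption length, so positions past it are masked with -100 rather than cut, keeping the batch rectangular (names and the pad id are illustrative assumptions):

```python
IGNORE_INDEX = -100
PAD_ID = 0  # assumed pad token id for illustration

def mask_padding(label_rows, pad_id=PAD_ID):
    """Replace pad positions with IGNORE_INDEX so they carry no loss,
    without changing each row's length (the batch stays rectangular)."""
    return [[IGNORE_INDEX if tok == pad_id else tok for tok in row]
            for row in label_rows]

batch = [
    [101, 2023, 102, 0, 0],    # short caption, two pad positions
    [101, 2023, 2003, 102, 0], # longer caption, one pad position
]
print(mask_padding(batch))
# [[101, 2023, 102, -100, -100], [101, 2023, 2003, 102, -100]]
```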
