Open-LLaMA-3B results are much worse than reported in this repo #68
Comments
It seems that the anli_* and truthfulqa_mc results are similar, but the rest are about 20% worse. I'm wondering whether the results reported in this repo for hellaswag and ARC_* were obtained with few-shot = 0 or not.
Everything reported here is zero-shot. Did you turn off the fast tokenizer when evaluating? There is a bug in the recent release of the transformers library which causes the auto-converted tokenizer to output different tokens than the original tokenizer. Therefore, when evaluating OpenLLaMA, you need to turn off the fast tokenizer.
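For concreteness, here is a minimal sketch of loading the model with the fast tokenizer turned off, assuming the openlm-research/open_llama_3b checkpoint on the Hugging Face Hub (the checkpoint name and prompt are illustrative):

```python
# Sketch: load OpenLLaMA with the original (slow) tokenizer.
# Assumes the openlm-research/open_llama_3b checkpoint on the Hugging Face Hub.
from transformers import AutoTokenizer, LlamaForCausalLM

model_path = "openlm-research/open_llama_3b"

# use_fast=False selects the original SentencePiece tokenizer instead of the
# auto-converted fast tokenizer affected by the bug discussed above.
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
model = LlamaForCausalLM.from_pretrained(model_path)

inputs = tokenizer("Q: What is the capital of France?\nA:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```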
Is that bug still there? I thought I read somewhere that it got fixed.
@buzzCraft It got fixed in the main branch of transformers, but there hasn't been a release with that fix yet.
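A quick way to check whether an installed transformers version still carries the bug is to compare the two tokenizers directly (a sketch; the checkpoint name and sample sentence are assumptions):

```python
# Sketch: detect the fast-tokenizer bug by comparing token ids.
# Assumes the openlm-research/open_llama_3b checkpoint on the Hugging Face Hub.
from transformers import AutoTokenizer

model_path = "openlm-research/open_llama_3b"
fast = AutoTokenizer.from_pretrained(model_path, use_fast=True)
slow = AutoTokenizer.from_pretrained(model_path, use_fast=False)

text = "The quick brown fox jumps over the lazy dog."
fast_ids = fast(text)["input_ids"]
slow_ids = slow(text)["input_ids"]

# On an affected release the two id sequences differ; once the fix ships
# (or with transformers installed from the main branch), they should match.
print("fast:", fast_ids)
print("slow:", slow_ids)
print("match:", fast_ids == slow_ids)
```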
@young-geng OK, since we are on the bleeding edge of the LLM field, I usually go with the dev branch. I also want to thank you and the team for the amazing work you have done. ❤️