Benchmarks - Add LLaMA-2 Models #668
Conversation
Please use `python3 setup.py lint` to check the format, and run `python3 setup.py format` to format the code.
@abuccts, can I get access to the unit test logs?
Codecov Report

Attention: Patch coverage is

Additional details and impacted files:

@@            Coverage Diff             @@
##             main     #668      +/-   ##
==========================================
+ Coverage   85.58%   85.61%   +0.03%
==========================================
  Files          98       99       +1
  Lines        7046     7165     +119
==========================================
+ Hits         6030     6134     +104
- Misses       1016     1031      +15

Flags with carried forward coverage won't be shown.
LGTM, thanks! Please fix the UT failures with PyTorch 1.10. And since the CUDA tests are running on K80, which is a very old GPU, we can skip the "cuda-init-test" and just make sure "cpu-unit-test" can pass.
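The suggestion above, skipping GPU-dependent tests so the CPU-only suite can still pass, can be sketched with standard-library `unittest` skip markers. The `cuda_available` helper below is a hypothetical stand-in for a real check such as `torch.cuda.is_available()`; it is hard-coded to `False` here so the example runs on a CPU-only machine:

```python
import unittest


def cuda_available() -> bool:
    # Hypothetical stand-in for a real CUDA availability check
    # (e.g. torch.cuda.is_available()); hard-coded False so this
    # example runs anywhere.
    return False


class BenchmarkTests(unittest.TestCase):
    @unittest.skipUnless(cuda_available(), "CUDA not available; skipping GPU init test")
    def test_cuda_init(self):
        # Would exercise GPU initialization on a CUDA machine.
        self.assertTrue(cuda_available())

    def test_cpu_unit(self):
        # CPU-only logic always runs, regardless of the GPU.
        self.assertEqual(sum(range(5)), 10)
```

With this pattern, a CI runner on an old or absent GPU reports the CUDA test as skipped rather than failed, while the CPU unit test still gates the build.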
tokenizers Rust/cargo issue: huggingface/tokenizers#1691
Added a LLaMA benchmark covering training and inference, in accordance with the existing PyTorch model implementations such as GPT-2, LSTM, etc.
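The shape of such a model benchmark can be sketched as follows. This is a simplified, hypothetical version of the pattern the existing PyTorch model benchmarks follow, not the PR's actual classes: a base class times repeated training and inference steps, and a model-specific subclass plugs in the per-step work (here replaced by cheap placeholder computations so the sketch is self-contained):

```python
import statistics
import time


class ModelBenchmark:
    """Hypothetical base class: subclasses implement one training step
    and one inference step; the base class times repeated invocations
    and reports the mean step duration in seconds."""

    def __init__(self, num_steps: int = 10) -> None:
        self.num_steps = num_steps

    def _train_step(self) -> None:
        raise NotImplementedError

    def _inference_step(self) -> None:
        raise NotImplementedError

    def _time(self, step) -> float:
        durations = []
        for _ in range(self.num_steps):
            start = time.perf_counter()
            step()
            durations.append(time.perf_counter() - start)
        return statistics.mean(durations)

    def run(self) -> dict:
        return {
            "train_step_time": self._time(self._train_step),
            "inference_step_time": self._time(self._inference_step),
        }


class LlamaBenchmark(ModelBenchmark):
    """Stand-in for the new LLaMA benchmark; the real implementation
    would run the model's forward/backward pass in these hooks."""

    def _train_step(self) -> None:
        sum(i * i for i in range(1000))  # placeholder for forward + backward

    def _inference_step(self) -> None:
        sum(range(1000))  # placeholder for forward only
```

Keeping the timing harness in the base class is what lets a new model (LLaMA, GPT-2, LSTM, ...) be added by overriding only the two step hooks.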