testing of auto-eval with Llama 3.2 successful
Signed-off-by: aasavari <[email protected]>
adkakne committed Sep 26, 2024
1 parent f266c3c commit f134058
Showing 2 changed files with 2 additions and 2 deletions.
evals/metrics/auto_eval/README.md (2 changes: 1 addition & 1 deletion)

```diff
@@ -13,7 +13,7 @@ AutoEval can run in 3 evaluation modes -
    - To launch HF endpoint on Gaudi2, please follow the 2-step instructions here - [tgi-gaudi](https://github.com/huggingface/tgi-gaudi).
    - Pass your endpoint url as `model_name` argument.
 2. `evaluation_mode="openai"` uses openai backend.
-   - Please set your `OPEN_API_KEY` and your choice of model as `model_name` argument.
+   - Please set your `openai_key` and your choice of model as `model_name` argument.
 3. `evaluation_mode="local"` uses your local hardware.
    - Set `hf_token` argument and set your favourite open-source model in `model_name` argument.
    - GPU usage will be prioritized after checking its availability. If GPU is unavailable, the model will run on CPU.
```
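For orientation, here is a minimal sketch of how the three modes described in this README hunk might be selected. Only the argument names (`evaluation_mode`, `model_name`, `openai_key`, `hf_token`) and the import path come from this commit; the full `AutoEvaluate` constructor signature and the specific model names are illustrative assumptions.

```python
# Hypothetical usage sketch: argument names come from the README excerpt above;
# the complete AutoEvaluate signature is an assumption, not part of this commit.
from evals.metrics.auto_eval import AutoEvaluate

# 1. Remote HF endpoint (e.g. tgi-gaudi on Gaudi2): pass the endpoint URL as model_name.
evaluator = AutoEvaluate(
    evaluation_mode="endpoint",
    model_name="http://localhost:8008",
)

# 2. OpenAI backend: supply openai_key and an OpenAI model as model_name.
evaluator = AutoEvaluate(
    evaluation_mode="openai",
    model_name="gpt-4o-mini",          # illustrative model choice
    openai_key="sk-...",               # assumption: key passed as an argument, per the README wording
)

# 3. Local hardware: hf_token plus an open-source model id; GPU is preferred when available.
evaluator = AutoEvaluate(
    evaluation_mode="local",
    model_name="meta-llama/Llama-3.2-3B-Instruct",  # a Llama 3.2 model, matching the commit title
    hf_token="hf_...",
)
```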
tests/test_auto_eval.py (2 changes: 1 addition & 1 deletion)

```diff
@@ -7,7 +7,7 @@
 import os
 import unittest
 
-from evals.evaluation.auto_eval import AutoEvaluate
+from evals.metrics.auto_eval import AutoEvaluate
 
 host_ip = os.getenv("host_ip", "localhost")
 port = os.getenv("port", "8008")
```
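The test reads `host_ip` and `port` from the environment; only those two lines appear in this hunk, so how the test combines them is an assumption. Given the README's endpoint mode passes a URL as `model_name`, a plausible continuation is:

```python
# Hypothetical continuation of the test setup; everything below the env reads
# is an assumption, since the rest of the test file is not shown in this diff.
import os

host_ip = os.getenv("host_ip", "localhost")
port = os.getenv("port", "8008")

# With evaluation_mode="endpoint", the README above says the endpoint URL is
# passed as model_name, so the test would plausibly build it like this:
endpoint_url = f"http://{host_ip}:{port}"
```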
