update install method (#24)
chensuyue authored May 31, 2024
1 parent 4bdf490 commit b563b38
Showing 1 changed file with 11 additions and 1 deletion.
README.md
@@ -2,11 +2,21 @@
Evaluation, benchmark, and scorecard, targeting performance on throughput and latency, accuracy on popular evaluation harnesses, safety, and hallucination.

## Installation

- Install from PyPI

```bash
pip install opea-eval
```

- Build from Source

```bash
git clone https://github.com/opea-project/GenAIEval
cd GenAIEval
pip install -e .
```
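
Either way, a quick sanity check is to ask pip for the installed distribution's metadata (using the package name from the commands above):

```bash
pip show opea-eval
```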

## Evaluation
### lm-evaluation-harness
For evaluating models on text-generation tasks, we follow [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/) and provide both command-line and function-call usage. It implements over 60 standard academic benchmarks for LLMs, with hundreds of [subtasks and variants](https://github.com/EleutherAI/lm-evaluation-harness/tree/v0.4.2/lm_eval/tasks), such as `ARC`, `HellaSwag`, `MMLU`, `TruthfulQA`, `Winogrande`, and `GSM8K`.
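
For instance, a minimal sketch of the command-line style via the upstream `lm_eval` CLI (assuming lm-evaluation-harness v0.4.x is installed; the model name and task list below are placeholders, and GenAIEval's own wrapper may expose different arguments):

```bash
# Run two harness tasks against a Hugging Face model (placeholder model name)
lm_eval --model hf \
    --model_args pretrained=EleutherAI/gpt-j-6b \
    --tasks hellaswag,arc_easy \
    --device cuda:0 \
    --batch_size 8
```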
