forked from EleutherAI/lm-evaluation-harness
feat(dependencies): Include evaluate metrics in lm-evaluation-harness for offline usage

Metrics taken from https://github.com/opendatahub-io/hf-evaluate/tree/odh-2.22

Showing 225 changed files with 17,546 additions and 0 deletions.
---
title: Accuracy
emoji: 🤗
colorFrom: blue
colorTo: red
sdk: gradio
sdk_version: 3.19.1
app_file: app.py
pinned: false
tags:
- evaluate
- metric
description: >-
  Accuracy is the proportion of correct predictions among the total number of cases processed. It can be computed with:
  Accuracy = (TP + TN) / (TP + TN + FP + FN)
  Where:
  TP: True positive
  TN: True negative
  FP: False positive
  FN: False negative
---

# Metric Card for Accuracy
## Metric Description

Accuracy is the proportion of correct predictions among the total number of cases processed. It can be computed with:

    Accuracy = (TP + TN) / (TP + TN + FP + FN)

Where:
- TP: True positive
- TN: True negative
- FP: False positive
- FN: False negative
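The formula can be checked in a few lines of plain Python; the binary labels below are made-up examples chosen only to illustrate it:

```python
# Hypothetical binary labels, invented for illustration.
references  = [1, 1, 0, 0, 1, 0]
predictions = [1, 0, 0, 1, 1, 0]

tp = sum(1 for r, p in zip(references, predictions) if r == 1 and p == 1)
tn = sum(1 for r, p in zip(references, predictions) if r == 0 and p == 0)
fp = sum(1 for r, p in zip(references, predictions) if r == 0 and p == 1)
fn = sum(1 for r, p in zip(references, predictions) if r == 1 and p == 0)

# (TP + TN) / (TP + TN + FP + FN) equals the plain fraction of matching labels.
accuracy = (tp + tn) / (tp + tn + fp + fn)
fraction_correct = sum(r == p for r, p in zip(references, predictions)) / len(references)
```

Both expressions give the same value, since TP + TN is exactly the number of correct predictions.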
## How to Use

At minimum, this metric requires predictions and references as inputs.

```python
>>> accuracy_metric = evaluate.load("accuracy")
>>> results = accuracy_metric.compute(references=[0, 1], predictions=[0, 1])
>>> print(results)
{'accuracy': 1.0}
```
### Inputs
- **predictions** (`list` of `int`): Predicted labels.
- **references** (`list` of `int`): Ground truth labels.
- **normalize** (`boolean`): If set to `False`, returns the number of correctly classified samples. Otherwise, returns the fraction of correctly classified samples. Defaults to `True`.
- **sample_weight** (`list` of `float`): Sample weights. Defaults to `None`.
### Output Values
- **accuracy** (`float` or `int`): Accuracy score. Minimum possible value is 0. Maximum possible value is 1.0 if `normalize` is set to `True`, or the number of examples input if `normalize` is set to `False`. A higher score means higher accuracy.

Output Example(s):
```python
{'accuracy': 1.0}
```

This metric outputs a dictionary containing the accuracy score.
#### Values from Popular Papers

Top-1 or top-5 accuracy is often used to report performance on supervised classification tasks such as image classification (e.g. on [ImageNet](https://paperswithcode.com/sota/image-classification-on-imagenet)) or sentiment analysis (e.g. on [IMDB](https://paperswithcode.com/sota/text-classification-on-imdb)).
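Top-k accuracy can be sketched in a few lines. The class scores below are invented toy values, and `top_k_accuracy` is a hypothetical helper, not part of the `evaluate` API:

```python
import heapq

def top_k_accuracy(scores, labels, k):
    # A prediction counts as correct when the true label is among the
    # k highest-scoring classes for that sample.
    hits = 0
    for row, y in zip(scores, labels):
        top_k = heapq.nlargest(k, range(len(row)), key=row.__getitem__)
        hits += y in top_k
    return hits / len(labels)

scores = [  # one row of class scores per sample (toy values)
    [0.1, 0.7, 0.2],
    [0.5, 0.3, 0.2],
    [0.2, 0.3, 0.5],
]
labels = [1, 1, 0]
```

With these toy scores, top-1 accuracy is 1/3 while top-2 accuracy rises to 2/3, since the second sample's true label is the second-highest-scoring class.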
### Examples

Example 1: A simple example
```python
>>> accuracy_metric = evaluate.load("accuracy")
>>> results = accuracy_metric.compute(references=[0, 1, 2, 0, 1, 2], predictions=[0, 1, 1, 2, 1, 0])
>>> print(results)
{'accuracy': 0.5}
```

Example 2: The same as Example 1, except with `normalize` set to `False`.
```python
>>> accuracy_metric = evaluate.load("accuracy")
>>> results = accuracy_metric.compute(references=[0, 1, 2, 0, 1, 2], predictions=[0, 1, 1, 2, 1, 0], normalize=False)
>>> print(results)
{'accuracy': 3.0}
```

Example 3: The same as Example 1, except with `sample_weight` set.
```python
>>> accuracy_metric = evaluate.load("accuracy")
>>> results = accuracy_metric.compute(references=[0, 1, 2, 0, 1, 2], predictions=[0, 1, 1, 2, 1, 0], sample_weight=[0.5, 2, 0.7, 0.5, 9, 0.4])
>>> print(results)
{'accuracy': 0.8778625954198473}
```
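Example 3's weighted result can be reproduced by hand, as a sketch of what `sklearn.metrics.accuracy_score` does with `sample_weight` (the weighted fraction of correct predictions):

```python
references  = [0, 1, 2, 0, 1, 2]
predictions = [0, 1, 1, 2, 1, 0]
weights     = [0.5, 2, 0.7, 0.5, 9, 0.4]

# Sum the weights of the correctly classified samples (positions 0, 1, 4),
# then divide by the total weight: 11.5 / 13.1.
correct_weight = sum(w for r, p, w in zip(references, predictions, weights) if r == p)
weighted_accuracy = correct_weight / sum(weights)
```

This matches the 0.8778625954198473 reported above; a large weight on a correct sample (here 9 on the fifth) pulls the weighted accuracy up even though only half the predictions are right.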
## Limitations and Bias

This metric can be easily misleading, especially in the case of unbalanced classes. For example, a high accuracy might be because a model is doing well, but if the data is unbalanced, it might also be because the model is only accurately labeling the high-frequency class. In such cases, a more detailed analysis of the model's behavior, or the use of a different metric entirely, is necessary to determine how well the model is actually performing.
## Citation(s)

```bibtex
@article{scikit-learn,
  title={Scikit-learn: Machine Learning in {P}ython},
  author={Pedregosa, F. and Varoquaux, G. and Gramfort, A. and Michel, V.
    and Thirion, B. and Grisel, O. and Blondel, M. and Prettenhofer, P.
    and Weiss, R. and Dubourg, V. and Vanderplas, J. and Passos, A. and
    Cournapeau, D. and Brucher, M. and Perrot, M. and Duchesnay, E.},
  journal={Journal of Machine Learning Research},
  volume={12},
  pages={2825--2830},
  year={2011}
}
```

## Further References
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Accuracy metric."""

import datasets
from sklearn.metrics import accuracy_score

import evaluate
_DESCRIPTION = """
Accuracy is the proportion of correct predictions among the total number of cases processed. It can be computed with:
Accuracy = (TP + TN) / (TP + TN + FP + FN)
 Where:
TP: True positive
TN: True negative
FP: False positive
FN: False negative
"""
_KWARGS_DESCRIPTION = """
Args:
    predictions (`list` of `int`): Predicted labels.
    references (`list` of `int`): Ground truth labels.
    normalize (`boolean`): If set to `False`, returns the number of correctly classified samples. Otherwise, returns the fraction of correctly classified samples. Defaults to `True`.
    sample_weight (`list` of `float`): Sample weights. Defaults to `None`.

Returns:
    accuracy (`float` or `int`): Accuracy score. Minimum possible value is 0. Maximum possible value is 1.0 if `normalize` is set to `True`, or the number of examples input if `normalize` is set to `False`. A higher score means higher accuracy.

Examples:

    Example 1: A simple example
        >>> accuracy_metric = evaluate.load("accuracy")
        >>> results = accuracy_metric.compute(references=[0, 1, 2, 0, 1, 2], predictions=[0, 1, 1, 2, 1, 0])
        >>> print(results)
        {'accuracy': 0.5}

    Example 2: The same as Example 1, except with `normalize` set to `False`.
        >>> accuracy_metric = evaluate.load("accuracy")
        >>> results = accuracy_metric.compute(references=[0, 1, 2, 0, 1, 2], predictions=[0, 1, 1, 2, 1, 0], normalize=False)
        >>> print(results)
        {'accuracy': 3.0}

    Example 3: The same as Example 1, except with `sample_weight` set.
        >>> accuracy_metric = evaluate.load("accuracy")
        >>> results = accuracy_metric.compute(references=[0, 1, 2, 0, 1, 2], predictions=[0, 1, 1, 2, 1, 0], sample_weight=[0.5, 2, 0.7, 0.5, 9, 0.4])
        >>> print(results)
        {'accuracy': 0.8778625954198473}
"""
_CITATION = """
@article{scikit-learn,
  title={Scikit-learn: Machine Learning in {P}ython},
  author={Pedregosa, F. and Varoquaux, G. and Gramfort, A. and Michel, V.
    and Thirion, B. and Grisel, O. and Blondel, M. and Prettenhofer, P.
    and Weiss, R. and Dubourg, V. and Vanderplas, J. and Passos, A. and
    Cournapeau, D. and Brucher, M. and Perrot, M. and Duchesnay, E.},
  journal={Journal of Machine Learning Research},
  volume={12},
  pages={2825--2830},
  year={2011}
}
"""
@evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
class Accuracy(evaluate.Metric):
    def _info(self):
        return evaluate.MetricInfo(
            description=_DESCRIPTION,
            citation=_CITATION,
            inputs_description=_KWARGS_DESCRIPTION,
            features=datasets.Features(
                {
                    "predictions": datasets.Sequence(datasets.Value("int32")),
                    "references": datasets.Sequence(datasets.Value("int32")),
                }
                if self.config_name == "multilabel"
                else {
                    "predictions": datasets.Value("int32"),
                    "references": datasets.Value("int32"),
                }
            ),
            reference_urls=["https://scikit-learn.org/stable/modules/generated/sklearn.metrics.accuracy_score.html"],
        )

    def _compute(self, predictions, references, normalize=True, sample_weight=None):
        return {
            "accuracy": float(
                accuracy_score(references, predictions, normalize=normalize, sample_weight=sample_weight)
            )
        }
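The `multilabel` branch in `_info` above feeds sequences of labels to `accuracy_score`, which then counts a sample as correct only when its whole label vector matches exactly (subset accuracy). A pure-Python sketch of that behavior, with made-up label vectors:

```python
# Hypothetical multilabel data: one 3-element label vector per sample.
predictions = [[1, 0, 1], [0, 1, 0], [1, 1, 0]]
references  = [[1, 0, 1], [0, 1, 1], [1, 1, 0]]

# Exact-match (subset) accuracy: the second sample differs in one label,
# so only 2 of the 3 samples count as correct.
exact_matches = sum(1 for p, r in zip(predictions, references) if p == r)
multilabel_accuracy = exact_matches / len(references)
```

Note that per-label partial credit is not given; a single wrong label makes the whole sample wrong.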
import evaluate
from evaluate.utils import launch_gradio_widget


module = evaluate.load("accuracy")
launch_gradio_widget(module)
git+https://github.com/huggingface/evaluate@{COMMIT_PLACEHOLDER}
scikit-learn
---
title: BERT Score
emoji: 🤗
colorFrom: blue
colorTo: red
sdk: gradio
sdk_version: 3.19.1
app_file: app.py
pinned: false
tags:
- evaluate
- metric
description: >-
  BERTScore leverages the pre-trained contextual embeddings from BERT and matches words in candidate and reference
  sentences by cosine similarity.
  It has been shown to correlate with human judgment on sentence-level and system-level evaluation.
  Moreover, BERTScore computes precision, recall, and F1 measure, which can be useful for evaluating different language
  generation tasks.
  See the project's README at https://github.com/Tiiiger/bert_score#readme for more information.
---

# Metric Card for BERT Score
## Metric description

BERTScore is an automatic evaluation metric for text generation that computes a similarity score for each token in the candidate sentence with each token in the reference sentence. It leverages the pre-trained contextual embeddings from [BERT](https://huggingface.co/bert-base-uncased) models and matches words in candidate and reference sentences by cosine similarity.

Moreover, BERTScore computes precision, recall, and F1 measure, which can be useful for evaluating different language generation tasks.
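The greedy cosine matching behind those scores can be sketched with toy vectors; the 2-d "embeddings" below are invented for illustration and stand in for real BERT token embeddings:

```python
import numpy as np

cand = np.array([[1.0, 0.0], [0.0, 1.0]])   # candidate token embeddings (toy)
ref  = np.array([[1.0, 0.0], [0.6, 0.8]])   # reference token embeddings (toy)

def unit(m):
    # Normalize rows so dot products become cosine similarities.
    return m / np.linalg.norm(m, axis=1, keepdims=True)

sim = unit(cand) @ unit(ref).T   # pairwise cosine similarity matrix

# Each candidate token is greedily matched to its most similar reference
# token (precision), and each reference token to its most similar candidate
# token (recall); F1 is their harmonic mean.
precision = sim.max(axis=1).mean()
recall = sim.max(axis=0).mean()
f1 = 2 * precision * recall / (precision + recall)
```

Real BERTScore adds model selection, optional idf weighting, and baseline rescaling on top of this core matching step.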
## How to use

BERTScore takes 3 mandatory arguments: `predictions` (a list of strings of candidate sentences), `references` (a list of strings or list of lists of strings of reference sentences) and either `lang` (a string of two letters indicating the language of the sentences, in [ISO 639-1 format](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes)) or `model_type` (a string specifying which model to use, according to the BERT specification). The default behavior of the metric is to use the suggested model for the target language when `lang` is specified, and otherwise to use the `model_type` indicated.

```python
from evaluate import load
bertscore = load("bertscore")
predictions = ["hello there", "general kenobi"]
references = ["hello there", "general kenobi"]
results = bertscore.compute(predictions=predictions, references=references, lang="en")
```
BERTScore also accepts multiple optional arguments:

- `num_layers` (int): The layer of representation to use. The default is the number of layers tuned on WMT16 correlation data, which depends on the `model_type` used.
- `verbose` (bool): Turn on intermediate status updates. The default value is `False`.
- `idf` (bool or dict): Use idf weighting; can also be a precomputed idf_dict.
- `device` (str): The device on which the contextual embedding model will be allocated. If this argument is `None`, the model lives on `cuda:0` if cuda is available.
- `nthreads` (int): Number of threads used for computation. The default value is `4`.
- `rescale_with_baseline` (bool): Rescale BERTScore with the pre-computed baseline. The default value is `False`. `lang` needs to be specified when `rescale_with_baseline` is `True`.
- `batch_size` (int): BERTScore processing batch size.
- `baseline_path` (str): Customized baseline file.
- `use_fast_tokenizer` (bool): `use_fast` parameter passed to the HF tokenizer. The default value is `False`.
## Output values

BERTScore outputs a dictionary with the following values:

- `precision`: The [precision](https://huggingface.co/metrics/precision) for each sentence from the `predictions` + `references` lists, which ranges from 0.0 to 1.0.
- `recall`: The [recall](https://huggingface.co/metrics/recall) for each sentence from the `predictions` + `references` lists, which ranges from 0.0 to 1.0.
- `f1`: The [F1 score](https://huggingface.co/metrics/f1) for each sentence from the `predictions` + `references` lists, which ranges from 0.0 to 1.0.
- `hashcode`: The hashcode of the library.
### Values from popular papers

The [original BERTScore paper](https://openreview.net/pdf?id=SkeHuCVFDr) reported average model selection accuracies (Hits@1) on WMT18 hybrid systems for different language pairs, which ranged from 0.004 for `en<->tr` to 0.824 for `en<->de`.

For more recent model performance, see the [metric leaderboard](https://paperswithcode.com/paper/bertscore-evaluating-text-generation-with).
## Examples

Maximal values with the `distilbert-base-uncased` model:

```python
from evaluate import load
bertscore = load("bertscore")
predictions = ["hello world", "general kenobi"]
references = ["hello world", "general kenobi"]
results = bertscore.compute(predictions=predictions, references=references, model_type="distilbert-base-uncased")
print(results)
{'precision': [1.0, 1.0], 'recall': [1.0, 1.0], 'f1': [1.0, 1.0], 'hashcode': 'distilbert-base-uncased_L5_no-idf_version=0.3.10(hug_trans=4.10.3)'}
```

Partial match with the `distilbert-base-uncased` model:

```python
from evaluate import load
bertscore = load("bertscore")
predictions = ["hello world", "general kenobi"]
references = ["goodnight moon", "the sun is shining"]
results = bertscore.compute(predictions=predictions, references=references, model_type="distilbert-base-uncased")
print(results)
{'precision': [0.7380737066268921, 0.5584042072296143], 'recall': [0.7380737066268921, 0.5889028906822205], 'f1': [0.7380737066268921, 0.5732481479644775], 'hashcode': 'bert-base-uncased_L5_no-idf_version=0.3.10(hug_trans=4.10.3)'}
```
## Limitations and bias

The [original BERTScore paper](https://openreview.net/pdf?id=SkeHuCVFDr) showed that BERTScore correlates well with human judgment on sentence-level and system-level evaluation, but this depends on the model and language pair selected.

Furthermore, not all languages are supported by the metric; see the [BERTScore supported language list](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages) for more information.

Finally, calculating the BERTScore metric involves downloading the BERT model that is used to compute the score: the default model for `en`, `roberta-large`, takes over 1.4GB of storage space, and downloading it can take a significant amount of time depending on the speed of your internet connection. If this is an issue, choose a smaller model; for instance, `distilbert-base-uncased` is 268MB. A full list of compatible models can be found [here](https://docs.google.com/spreadsheets/d/1RKOVpselB98Nnh_EOC4A2BYn8_201tmPODpNWu4w7xI/edit#gid=0).
## Citation

```bibtex
@inproceedings{bert-score,
  title={BERTScore: Evaluating Text Generation with BERT},
  author={Tianyi Zhang* and Varsha Kishore* and Felix Wu* and Kilian Q. Weinberger and Yoav Artzi},
  booktitle={International Conference on Learning Representations},
  year={2020},
  url={https://openreview.net/forum?id=SkeHuCVFDr}
}
```

## Further References

- [BERTScore Project README](https://github.com/Tiiiger/bert_score#readme)
- [BERTScore ICLR 2020 Poster Presentation](https://iclr.cc/virtual_2020/poster_SkeHuCVFDr.html)
import evaluate
from evaluate.utils import launch_gradio_widget


module = evaluate.load("bertscore")
launch_gradio_widget(module)