🚨🚨🚨Deprecate evaluation_strategy to eval_strategy🚨🚨🚨 #30190

Merged · 8 commits · Apr 18, 2024 · Changes from 7 commits
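The whole PR is one rename, applied across the docs: the `evaluation_strategy` keyword of `TrainingArguments` becomes `eval_strategy`. A minimal before/after sketch, reusing the `test_trainer` example that appears in the diff below:

```py
>>> from transformers import TrainingArguments

>>> # Deprecated spelling (what the docs said before this PR):
>>> training_args = TrainingArguments(output_dir="test_trainer", evaluation_strategy="epoch")

>>> # New spelling (what the docs say after this PR); the "no"/"steps"/"epoch" values are unchanged:
>>> training_args = TrainingArguments(output_dir="test_trainer", eval_strategy="epoch")
```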
docs/source/de/training.md (2 additions, 2 deletions)
@@ -128,12 +128,12 @@ Call [`~evaluate.compute`] on `metric` to calculate the accuracy of your predictions:
... return metric.compute(predictions=predictions, references=labels)
```

- If you'd like to monitor your evaluation metrics during fine-tuning, specify the `evaluation_strategy` parameter in your training arguments to report the evaluation metric at the end of each epoch:
+ If you'd like to monitor your evaluation metrics during fine-tuning, specify the `eval_strategy` parameter in your training arguments to report the evaluation metric at the end of each epoch:

```py
>>> from transformers import TrainingArguments, Trainer

- >>> training_args = TrainingArguments(output_dir="test_trainer", evaluation_strategy="epoch")
+ >>> training_args = TrainingArguments(output_dir="test_trainer", eval_strategy="epoch")
```

### Trainer
docs/source/en/model_memory_anatomy.md (1 addition, 1 deletion)
@@ -145,7 +145,7 @@ arguments:
```py
default_args = {
"output_dir": "tmp",
"evaluation_strategy": "steps",
"eval_strategy": "steps",
"num_train_epochs": 1,
"log_level": "error",
"report_to": "none",
docs/source/en/tasks/asr.md (1 addition, 1 deletion)
@@ -270,7 +270,7 @@ At this point, only three steps remain:
... gradient_checkpointing=True,
... fp16=True,
... group_by_length=True,
- ... evaluation_strategy="steps",
+ ... eval_strategy="steps",
... per_device_eval_batch_size=8,
... save_steps=1000,
... eval_steps=1000,
docs/source/en/tasks/audio_classification.md (1 addition, 1 deletion)
@@ -221,7 +221,7 @@ At this point, only three steps remain:
```py
>>> training_args = TrainingArguments(
... output_dir="my_awesome_mind_model",
- ... evaluation_strategy="epoch",
+ ... eval_strategy="epoch",
... save_strategy="epoch",
... learning_rate=3e-5,
... per_device_train_batch_size=32,
docs/source/en/tasks/document_question_answering.md (1 addition, 1 deletion)
@@ -399,7 +399,7 @@ In this case the `output_dir` will also be the name of the repo where your model
... num_train_epochs=20,
... save_steps=200,
... logging_steps=50,
- ... evaluation_strategy="steps",
+ ... eval_strategy="steps",
... learning_rate=5e-5,
... save_total_limit=2,
... remove_unused_columns=False,
docs/source/en/tasks/image_captioning.md (1 addition, 1 deletion)
@@ -196,7 +196,7 @@ training_args = TrainingArguments(
per_device_eval_batch_size=32,
gradient_accumulation_steps=2,
save_total_limit=3,
- evaluation_strategy="steps",
+ eval_strategy="steps",
eval_steps=50,
save_strategy="steps",
save_steps=50,
docs/source/en/tasks/image_classification.md (1 addition, 1 deletion)
@@ -302,7 +302,7 @@ At this point, only three steps remain:
>>> training_args = TrainingArguments(
... output_dir="my_awesome_food_model",
... remove_unused_columns=False,
- ... evaluation_strategy="epoch",
+ ... eval_strategy="epoch",
... save_strategy="epoch",
... learning_rate=5e-5,
... per_device_train_batch_size=16,
@@ -112,7 +112,7 @@ training_args = TrainingArguments(
fp16=True,
logging_dir=f"{repo_name}/logs",
logging_strategy="epoch",
- evaluation_strategy="epoch",
+ eval_strategy="epoch",
save_strategy="epoch",
load_best_model_at_end=True,
metric_for_best_model="accuracy",
docs/source/en/tasks/language_modeling.md (1 addition, 1 deletion)
@@ -249,7 +249,7 @@ At this point, only three steps remain:
```py
>>> training_args = TrainingArguments(
... output_dir="my_awesome_eli5_clm-model",
- ... evaluation_strategy="epoch",
+ ... eval_strategy="epoch",
... learning_rate=2e-5,
... weight_decay=0.01,
... push_to_hub=True,
docs/source/en/tasks/masked_language_modeling.md (1 addition, 1 deletion)
@@ -238,7 +238,7 @@ At this point, only three steps remain:
```py
>>> training_args = TrainingArguments(
... output_dir="my_awesome_eli5_mlm_model",
- ... evaluation_strategy="epoch",
+ ... eval_strategy="epoch",
... learning_rate=2e-5,
... num_train_epochs=3,
... weight_decay=0.01,
docs/source/en/tasks/multiple_choice.md (1 addition, 1 deletion)
@@ -265,7 +265,7 @@ At this point, only three steps remain:
```py
>>> training_args = TrainingArguments(
... output_dir="my_awesome_swag_model",
- ... evaluation_strategy="epoch",
+ ... eval_strategy="epoch",
... save_strategy="epoch",
... load_best_model_at_end=True,
... learning_rate=5e-5,
docs/source/en/tasks/question_answering.md (1 addition, 1 deletion)
@@ -218,7 +218,7 @@ At this point, only three steps remain:
```py
>>> training_args = TrainingArguments(
... output_dir="my_awesome_qa_model",
- ... evaluation_strategy="epoch",
+ ... eval_strategy="epoch",
... learning_rate=2e-5,
... per_device_train_batch_size=16,
... per_device_eval_batch_size=16,
docs/source/en/tasks/semantic_segmentation.md (1 addition, 1 deletion)
@@ -535,7 +535,7 @@ At this point, only three steps remain:
... per_device_train_batch_size=2,
... per_device_eval_batch_size=2,
... save_total_limit=3,
- ... evaluation_strategy="steps",
+ ... eval_strategy="steps",
... save_strategy="steps",
... save_steps=20,
... eval_steps=20,
docs/source/en/tasks/sequence_classification.md (1 addition, 1 deletion)
@@ -187,7 +187,7 @@ At this point, only three steps remain:
... per_device_eval_batch_size=16,
... num_train_epochs=2,
... weight_decay=0.01,
- ... evaluation_strategy="epoch",
+ ... eval_strategy="epoch",
... save_strategy="epoch",
... load_best_model_at_end=True,
... push_to_hub=True,
docs/source/en/tasks/summarization.md (1 addition, 1 deletion)
@@ -202,7 +202,7 @@ At this point, only three steps remain:
```py
>>> training_args = Seq2SeqTrainingArguments(
... output_dir="my_awesome_billsum_model",
- ... evaluation_strategy="epoch",
+ ... eval_strategy="epoch",
... learning_rate=2e-5,
... per_device_train_batch_size=16,
... per_device_eval_batch_size=16,
docs/source/en/tasks/text-to-speech.md (1 addition, 1 deletion)
@@ -477,7 +477,7 @@ only look at the loss:
... max_steps=4000,
... gradient_checkpointing=True,
... fp16=True,
- ... evaluation_strategy="steps",
+ ... eval_strategy="steps",
... per_device_eval_batch_size=2,
... save_steps=1000,
... eval_steps=1000,
docs/source/en/tasks/token_classification.md (1 addition, 1 deletion)
@@ -290,7 +290,7 @@ At this point, only three steps remain:
... per_device_eval_batch_size=16,
... num_train_epochs=2,
... weight_decay=0.01,
- ... evaluation_strategy="epoch",
+ ... eval_strategy="epoch",
... save_strategy="epoch",
... load_best_model_at_end=True,
... push_to_hub=True,
docs/source/en/tasks/translation.md (1 addition, 1 deletion)
@@ -209,7 +209,7 @@ At this point, only three steps remain:
```py
>>> training_args = Seq2SeqTrainingArguments(
... output_dir="my_awesome_opus_books_model",
- ... evaluation_strategy="epoch",
+ ... eval_strategy="epoch",
... learning_rate=2e-5,
... per_device_train_batch_size=16,
... per_device_eval_batch_size=16,
docs/source/en/tasks/video_classification.md (1 addition, 1 deletion)
@@ -354,7 +354,7 @@ Most of the training arguments are self-explanatory, but one that is quite important
>>> args = TrainingArguments(
... new_model_name,
... remove_unused_columns=False,
- ... evaluation_strategy="epoch",
+ ... eval_strategy="epoch",
... save_strategy="epoch",
... learning_rate=5e-5,
... per_device_train_batch_size=batch_size,
docs/source/en/trainer.md (1 addition, 1 deletion)
@@ -62,7 +62,7 @@ training_args = TrainingArguments(
per_device_eval_batch_size=16,
num_train_epochs=2,
weight_decay=0.01,
- evaluation_strategy="epoch",
+ eval_strategy="epoch",
save_strategy="epoch",
load_best_model_at_end=True,
push_to_hub=True,
docs/source/en/training.md (2 additions, 2 deletions)
@@ -128,12 +128,12 @@ Call [`~evaluate.compute`] on `metric` to calculate the accuracy of your predictions:
... return metric.compute(predictions=predictions, references=labels)
```

- If you'd like to monitor your evaluation metrics during fine-tuning, specify the `evaluation_strategy` parameter in your training arguments to report the evaluation metric at the end of each epoch:
+ If you'd like to monitor your evaluation metrics during fine-tuning, specify the `eval_strategy` parameter in your training arguments to report the evaluation metric at the end of each epoch:

```py
>>> from transformers import TrainingArguments, Trainer

- >>> training_args = TrainingArguments(output_dir="test_trainer", evaluation_strategy="epoch")
+ >>> training_args = TrainingArguments(output_dir="test_trainer", eval_strategy="epoch")
```

### Trainer
docs/source/es/tasks/asr.md (1 addition, 1 deletion)
@@ -260,7 +260,7 @@ At this point, only three steps remain:
... gradient_checkpointing=True,
... fp16=True,
... group_by_length=True,
- ... evaluation_strategy="steps",
+ ... eval_strategy="steps",
... per_device_eval_batch_size=8,
... save_steps=1000,
... eval_steps=1000,
docs/source/es/tasks/image_captioning.md (1 addition, 1 deletion)
@@ -188,7 +188,7 @@ training_args = TrainingArguments(
per_device_eval_batch_size=32,
gradient_accumulation_steps=2,
save_total_limit=3,
- evaluation_strategy="steps",
+ eval_strategy="steps",
eval_steps=50,
save_strategy="steps",
save_steps=50,
docs/source/es/tasks/image_classification.md (1 addition, 1 deletion)
@@ -143,7 +143,7 @@ At this point, only three steps remain:
>>> training_args = TrainingArguments(
... output_dir="./results",
... per_device_train_batch_size=16,
- ... evaluation_strategy="steps",
+ ... eval_strategy="steps",
... num_train_epochs=4,
... fp16=True,
... save_steps=100,
docs/source/es/tasks/language_modeling.md (2 additions, 2 deletions)
@@ -232,7 +232,7 @@ At this point, only three steps remain:
```py
>>> training_args = TrainingArguments(
... output_dir="./results",
- ... evaluation_strategy="epoch",
+ ... eval_strategy="epoch",
... learning_rate=2e-5,
... weight_decay=0.01,
... )
@@ -338,7 +338,7 @@ At this point, only three steps remain:
```py
>>> training_args = TrainingArguments(
... output_dir="./results",
- ... evaluation_strategy="epoch",
+ ... eval_strategy="epoch",
... learning_rate=2e-5,
... num_train_epochs=3,
... weight_decay=0.01,
docs/source/es/tasks/multiple_choice.md (1 addition, 1 deletion)
@@ -212,7 +212,7 @@ At this point, only three steps remain:
```py
>>> training_args = TrainingArguments(
... output_dir="./results",
- ... evaluation_strategy="epoch",
+ ... eval_strategy="epoch",
... learning_rate=5e-5,
... per_device_train_batch_size=16,
... per_device_eval_batch_size=16,
docs/source/es/tasks/question_answering.md (1 addition, 1 deletion)
@@ -182,7 +182,7 @@ At this point, only three steps remain:
```py
>>> training_args = TrainingArguments(
... output_dir="./results",
- ... evaluation_strategy="epoch",
+ ... eval_strategy="epoch",
... learning_rate=2e-5,
... per_device_train_batch_size=16,
... per_device_eval_batch_size=16,
docs/source/es/tasks/summarization.md (1 addition, 1 deletion)
@@ -140,7 +140,7 @@ At this point, only three steps remain:
```py
>>> training_args = Seq2SeqTrainingArguments(
... output_dir="./results",
- ... evaluation_strategy="epoch",
+ ... eval_strategy="epoch",
... learning_rate=2e-5,
... per_device_train_batch_size=16,
... per_device_eval_batch_size=16,
docs/source/es/trainer.md (1 addition, 1 deletion)
@@ -60,7 +60,7 @@ training_args = TrainingArguments(
per_device_eval_batch_size=16,
num_train_epochs=2,
weight_decay=0.01,
- evaluation_strategy="epoch",
+ eval_strategy="epoch",
save_strategy="epoch",
load_best_model_at_end=True,
push_to_hub=True,
docs/source/es/training.md (2 additions, 2 deletions)
@@ -120,12 +120,12 @@ Define the `compute` function on `metric` to calculate the accuracy of your predictions:
... return metric.compute(predictions=predictions, references=labels)
```

- If you want to monitor your evaluation metrics during fine-tuning, specify the `evaluation_strategy` parameter in your training arguments so that the model takes the evaluation metric into account at the end of each epoch:
+ If you want to monitor your evaluation metrics during fine-tuning, specify the `eval_strategy` parameter in your training arguments so that the model takes the evaluation metric into account at the end of each epoch:

```py
>>> from transformers import TrainingArguments

- >>> training_args = TrainingArguments(output_dir="test_trainer", evaluation_strategy="epoch")
+ >>> training_args = TrainingArguments(output_dir="test_trainer", eval_strategy="epoch")
```

### Trainer
docs/source/it/migration.md (1 addition, 1 deletion)
@@ -167,7 +167,7 @@ Regarding the `Trainer` class:
- The `is_world_master` method of `Trainer` is deprecated in favor of `is_world_process_zero`.

Regarding the `TrainingArguments` class:
- - The `evaluate_during_training` argument of `TrainingArguments` is deprecated in favor of `evaluation_strategy`.
+ - The `evaluate_during_training` argument of `TrainingArguments` is deprecated in favor of `eval_strategy`.

Regarding the Transfo-XL model:
- The `tie_weight` configuration attribute of Transfo-XL becomes `tie_words_embeddings`.
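Not shown in this diff: code that has to run against `transformers` releases on either side of the rename. A hypothetical compatibility helper (not part of this PR) can inspect which keyword the installed release accepts:

```py
# Hypothetical compatibility shim, not part of this PR: pick whichever
# evaluation-strategy keyword the installed transformers release supports.
import inspect

from transformers import TrainingArguments


def make_training_args(output_dir, strategy="epoch", **kwargs):
    """Build TrainingArguments under either the old or the new keyword name."""
    params = inspect.signature(TrainingArguments.__init__).parameters
    key = "eval_strategy" if "eval_strategy" in params else "evaluation_strategy"
    return TrainingArguments(output_dir=output_dir, **{key: strategy}, **kwargs)


training_args = make_training_args("test_trainer")  # evaluates once per epoch
```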
docs/source/it/training.md (2 additions, 2 deletions)
@@ -121,12 +121,12 @@ Call `compute` on `metric` to calculate the accuracy of your predictions.
... return metric.compute(predictions=predictions, references=labels)
```

- If you prefer to monitor your evaluation metrics during fine-tuning, specify the `evaluation_strategy` parameter in your training arguments to report the evaluation metrics at each training epoch:
+ If you prefer to monitor your evaluation metrics during fine-tuning, specify the `eval_strategy` parameter in your training arguments to report the evaluation metrics at each training epoch:

```py
>>> from transformers import TrainingArguments, Trainer

- >>> training_args = TrainingArguments(output_dir="test_trainer", evaluation_strategy="epoch")
+ >>> training_args = TrainingArguments(output_dir="test_trainer", eval_strategy="epoch")
```

### Trainer
docs/source/ja/model_memory_anatomy.md (1 addition, 1 deletion)
@@ -136,7 +136,7 @@ Tue Jan 11 08:58:05 2022
```py
default_args = {
"output_dir": "tmp",
"evaluation_strategy": "steps",
"eval_strategy": "steps",
"num_train_epochs": 1,
"log_level": "error",
"report_to": "none",
docs/source/ja/tasks/asr.md (1 addition, 1 deletion)
@@ -270,7 +270,7 @@ The sampling rate of the MInDS-14 dataset is 8000kHz (
... gradient_checkpointing=True,
... fp16=True,
... group_by_length=True,
- ... evaluation_strategy="steps",
+ ... eval_strategy="steps",
... per_device_eval_batch_size=8,
... save_steps=1000,
... eval_steps=1000,
docs/source/ja/tasks/audio_classification.md (1 addition, 1 deletion)
@@ -221,7 +221,7 @@ The sampling rate of the MInDS-14 dataset is 8000kHz (
```py
>>> training_args = TrainingArguments(
... output_dir="my_awesome_mind_model",
- ... evaluation_strategy="epoch",
+ ... eval_strategy="epoch",
... save_strategy="epoch",
... learning_rate=3e-5,
... per_device_train_batch_size=32,
docs/source/ja/tasks/document_question_answering.md (1 addition, 1 deletion)
@@ -403,7 +403,7 @@ end_index 18
... num_train_epochs=20,
... save_steps=200,
... logging_steps=50,
- ... evaluation_strategy="steps",
+ ... eval_strategy="steps",
... learning_rate=5e-5,
... save_total_limit=2,
... remove_unused_columns=False,
docs/source/ja/tasks/image_captioning.md (1 addition, 1 deletion)
@@ -194,7 +194,7 @@ training_args = TrainingArguments(
per_device_eval_batch_size=32,
gradient_accumulation_steps=2,
save_total_limit=3,
- evaluation_strategy="steps",
+ eval_strategy="steps",
eval_steps=50,
save_strategy="steps",
save_steps=50,
docs/source/ja/tasks/image_classification.md (1 addition, 1 deletion)
@@ -308,7 +308,7 @@ food["test"].set_transform(preprocess_val)
>>> training_args = TrainingArguments(
... output_dir="my_awesome_food_model",
... remove_unused_columns=False,
- ... evaluation_strategy="epoch",
+ ... eval_strategy="epoch",
... save_strategy="epoch",
... learning_rate=5e-5,
... per_device_train_batch_size=16,
@@ -112,7 +112,7 @@ training_args = TrainingArguments(
fp16=True,
logging_dir=f"{repo_name}/logs",
logging_strategy="epoch",
- evaluation_strategy="epoch",
+ eval_strategy="epoch",
save_strategy="epoch",
load_best_model_at_end=True,
metric_for_best_model="accuracy",
docs/source/ja/tasks/language_modeling.md (1 addition, 1 deletion)
@@ -246,7 +246,7 @@ Apply the `group_texts` function over the entire dataset:
```py
>>> training_args = TrainingArguments(
... output_dir="my_awesome_eli5_clm-model",
- ... evaluation_strategy="epoch",
+ ... eval_strategy="epoch",
... learning_rate=2e-5,
... weight_decay=0.01,
... push_to_hub=True,