From 9f40639292eaf723d366607a707d3f405b41541e Mon Sep 17 00:00:00 2001
From: Ben Gubler
Date: Wed, 11 Oct 2023 05:50:23 -0600
Subject: [PATCH] Update docs to explain disabling callbacks using report_to
 (#26155)

* feat: update callback doc to explain disabling callbacks using report_to

* docs: update report_to docstring
---
 docs/source/en/main_classes/callback.md | 4 +++-
 src/transformers/training_args.py       | 7 ++++---
 2 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/docs/source/en/main_classes/callback.md b/docs/source/en/main_classes/callback.md
index ccfdf256832472..87bf0d63af1fc2 100644
--- a/docs/source/en/main_classes/callback.md
+++ b/docs/source/en/main_classes/callback.md
@@ -25,7 +25,7 @@ Callbacks are "read only" pieces of code, apart from the [`TrainerControl`] obje
 cannot change anything in the training loop. For customizations that require changes in the training
 loop, you should subclass [`Trainer`] and override the methods you need (see [trainer](trainer) for examples).
 
-By default a [`Trainer`] will use the following callbacks:
+By default, `TrainingArguments.report_to` is set to `"all"`, so a [`Trainer`] will use the following callbacks:
 
 - [`DefaultFlowCallback`] which handles the default behavior for logging, saving and evaluation.
 - [`PrinterCallback`] or [`ProgressCallback`] to display progress and print the
@@ -45,6 +45,8 @@ By default a [`Trainer`] will use the following callbacks:
 - [`~integrations.DagsHubCallback`] if [dagshub](https://dagshub.com/) is installed.
 - [`~integrations.FlyteCallback`] if [flyte](https://flyte.org/) is installed.
 
+If a package is installed but you don't wish to use the accompanying integration, you can change `TrainingArguments.report_to` to a list of just those integrations you want to use (e.g. `["azure_ml", "wandb"]`).
+
 The main class that implements callbacks is [`TrainerCallback`]. It gets the
 [`TrainingArguments`] used to instantiate the [`Trainer`], can access that
 Trainer's internal state via [`TrainerState`], and can take some actions on the training loop via
diff --git a/src/transformers/training_args.py b/src/transformers/training_args.py
index 635ab656ff699c..96cb467bcbeb7c 100644
--- a/src/transformers/training_args.py
+++ b/src/transformers/training_args.py
@@ -2345,10 +2345,11 @@ def set_logging(
                 Logger log level to use on the main process. Possible choices are the log levels as strings: `"debug"`,
                 `"info"`, `"warning"`, `"error"` and `"critical"`, plus a `"passive"` level which doesn't set anything
                 and lets the application set the level.
-            report_to (`str` or `List[str]`, *optional*, defaults to `"none"`):
+            report_to (`str` or `List[str]`, *optional*, defaults to `"all"`):
                 The list of integrations to report the results and logs to. Supported platforms are `"azure_ml"`,
-                `"comet_ml"`, `"mlflow"`, `"neptune"`, `"tensorboard"`,`"clearml"` and `"wandb"`. Use `"all"` to report
-                to all integrations installed, `"none"` for no integrations.
+                `"clearml"`, `"codecarbon"`, `"comet_ml"`, `"dagshub"`, `"flyte"`, `"mlflow"`, `"neptune"`,
+                `"tensorboard"`, and `"wandb"`. Use `"all"` to report to all integrations installed, `"none"` for no
+                integrations.
             first_step (`bool`, *optional*, defaults to `False`):
                 Whether to log and evaluate the first `global_step` or not.
             nan_inf_filter (`bool`, *optional*, defaults to `True`):
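The semantics documented above ("all" enables every installed integration, "none" disables them all, and a list restricts reporting to exactly the named integrations) can be sketched as follows. This is a hypothetical illustration, not the actual transformers implementation; the function name `resolve_report_to` and the set of installed integrations are assumptions for the example.

```python
# Hypothetical sketch (not the transformers source) of how a report_to
# value of "all", "none", a single string, or a list could be resolved
# against the set of installed integration packages.
INSTALLED_INTEGRATIONS = {"tensorboard", "wandb", "mlflow"}  # assumed for illustration


def resolve_report_to(report_to):
    """Return the sorted list of integrations that callbacks would be created for."""
    if isinstance(report_to, str):
        if report_to == "all":
            return sorted(INSTALLED_INTEGRATIONS)
        if report_to == "none":
            return []
        report_to = [report_to]
    # A list keeps only those requested integrations that are actually installed.
    return sorted(i for i in report_to if i in INSTALLED_INTEGRATIONS)
```

In practice you would simply pass the value to `TrainingArguments`, e.g. `TrainingArguments(output_dir="out", report_to=["wandb"])`, as the updated docs describe.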