From c16ab1f1abfb8fba83aa5edd14d84ee3d5733a90 Mon Sep 17 00:00:00 2001
From: TrickEye <20373866@buaa.edu.cn>
Date: Wed, 2 Oct 2024 16:12:21 +0800
Subject: [PATCH] Fix: typo (#33880)

Update llm_tutorial.md: typo
---
 docs/source/en/llm_tutorial.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/en/llm_tutorial.md b/docs/source/en/llm_tutorial.md
index ac6386d85318a6..097d7bf1e9ca38 100644
--- a/docs/source/en/llm_tutorial.md
+++ b/docs/source/en/llm_tutorial.md
@@ -164,7 +164,7 @@ If not specified in the [`~generation.GenerationConfig`] file, `generate` return
 By default, and unless specified in the [`~generation.GenerationConfig`] file, `generate` selects the most likely token at each iteration (greedy decoding). Depending on your task, this may be undesirable; creative tasks like chatbots or writing an essay benefit from sampling. On the other hand, input-grounded tasks like audio transcription or translation benefit from greedy decoding. Enable sampling with `do_sample=True`, and you can learn more about this topic in this [blog post](https://huggingface.co/blog/how-to-generate).

 ```py
->>> # Set seed or reproducibility -- you don't need this unless you want full reproducibility
+>>> # Set seed for reproducibility -- you don't need this unless you want full reproducibility
 >>> from transformers import set_seed
 >>> set_seed(42)
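
The hunk above is shown only up to the `set_seed` call, so for context, here is a minimal sketch (not part of the patch) of how sampling is typically enabled via `do_sample=True` as the touched paragraph describes; the checkpoint name, prompt, and `max_new_tokens` value are illustrative assumptions, not taken from the tutorial.

```py
>>> # Sketch only: demonstrates do_sample=True; checkpoint and prompt are placeholders
>>> from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
>>> set_seed(42)  # only needed if you want reproducible sampling
>>> tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
>>> inputs = tokenizer("A list of colors: red, blue", return_tensors="pt")
>>> # do_sample=True switches generate from greedy decoding to sampling
>>> outputs = model.generate(**inputs, do_sample=True, max_new_tokens=10)
>>> print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```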