From 4941dd23f368c17cd202155fc7f443b54b95f3b9 Mon Sep 17 00:00:00 2001
From: Titus <9048635+Titus-von-Koeller@users.noreply.github.com>
Date: Tue, 6 Feb 2024 15:51:09 -0300
Subject: [PATCH] fix missing bracket in link (#1046)

---
 docs/source/integrations.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/integrations.mdx b/docs/source/integrations.mdx
index 7d47ede62..0df7efb72 100644
--- a/docs/source/integrations.mdx
+++ b/docs/source/integrations.mdx
@@ -4,7 +4,7 @@ With Transformers it's very easy to load any model in 4 or 8-bit, quantizing the
 Please review the [bitsandbytes section in the Accelerate docs](https://huggingface.co/docs/transformers/v4.37.2/en/quantization#bitsandbytes).
 
-Details about the BitsAndBytesConfig can be found here](https://huggingface.co/docs/transformers/v4.37.2/en/main_classes/quantization#transformers.BitsAndBytesConfig).
+Details about the BitsAndBytesConfig can be found [here](https://huggingface.co/docs/transformers/v4.37.2/en/main_classes/quantization#transformers.BitsAndBytesConfig).
 
 ## Beware: bf16 is optional compute data type
 If your hardware supports it, `bf16` is the optimal compute dtype. The default is `float32` for backward compatibility and numerical stability. `float16` often leads to numerical instabilities, but `bfloat16` provides the benefits of both worlds: numerical stability and significant computation speedup. Therefore, be sure to check if your hardware supports `bf16` and configure it using the `bnb_4bit_compute_dtype` parameter in BitsAndBytesConfig:
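
The doc text touched by this patch ends where a configuration example would follow. Below is a minimal sketch of the setup it describes, assuming the `transformers` `BitsAndBytesConfig` API; the model id is purely illustrative and not part of the patch:

```py
# Sketch only: configure bf16 as the 4-bit compute dtype, as the doc text describes.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Fall back to the float32 default if the GPU does not support bf16.
compute_dtype = torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float32

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize weights to 4-bit on the fly
    bnb_4bit_compute_dtype=compute_dtype,   # dtype used for computation
)

# Illustrative model id; any causal LM on the Hub could be substituted.
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",
    quantization_config=quantization_config,
)
```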