From 13c70d305803ca9804958dc097919e02c41102ee Mon Sep 17 00:00:00 2001
From: Steven Liu
Date: Mon, 29 Apr 2024 11:43:08 -0700
Subject: [PATCH 1/3] clarify

---
 docs/source/fsdp_qlora.md | 27 ++++++++++++++++-----------
 1 file changed, 16 insertions(+), 11 deletions(-)

diff --git a/docs/source/fsdp_qlora.md b/docs/source/fsdp_qlora.md
index 47922cfcc..648e79092 100644
--- a/docs/source/fsdp_qlora.md
+++ b/docs/source/fsdp_qlora.md
@@ -9,21 +9,26 @@ This guide provides a brief guide on how bitsandbytes supports storing quantized

 ## Quantized data storage

-FSDP only supports sharding float data types which can be problematic because quantized weights are typically stored as integer data types (uint8). bitsandbytes doesn't have this problem because it uses `StoreChar` to read and write quantized weights regardless of the data type storage. This makes it simple to add a `quant_storage` parameter to the [`~nn.Linear4bit`] and [`~nn.Params4bit`] classes and set it to `torch.uint8` to maintain backward compatibility with the codebase.
+FSDP only supports sharding float data types which can be problematic because quantized weights are typically stored as integer data types (uint8). bitsandbytes doesn't have this problem because it uses `StoreChar` to read and write quantized weights regardless of the data type storage. This makes it simple to add a `quant_storage` parameter to the [`~nn.Linear4bit`] and [`~nn.Params4bit`] classes and set it to `torch.uint8` to maintain backward compatibility with the codebase. With the `quant_storage` parameter, you can select any of the FSDP supported data types to shard [`~nn.Linear4bit`] with such as bfloat16, float16 or float32.
+
+For example, you can configure this option in [`transformers.BitsAndBytesConfig`] by setting the `bnb_4bit_quant_storage` parameter.

 ```py
-import torch
-import bitsandbytes as bnb
-
-model = bnb.nn.Linear4bit(
-    input_features,
-    output_features,
-    quant_type="fp4",
-    quant_storage=torch.uint8,
+from transformers import BitsAndBytesConfig, AutoModelForCausalLM
+
+bnb_config = BitsAndBytesConfig(
+    load_in_4bit=True,
+    bnb_4bit_quant_type="nf4",
+    bnb_4bit_compute_dtype=torch.bfloat16,
+    bnb_4bit_quant_storage=torch.bfloat16,
 )
-```
-With the `quant_storage` parameter, you can select any of the FSDP supported data types to shard [`~nn.Linear4bit`] with such as bfloat16, float16 or float32.
+model = AutoModelForCausalLM.from_pretrained(
+    "meta-llama/Llama-2-70b",
+    quantization_config=bnb_config,
+    torch_dtype=torch.bfloat16,
+)
+```

 ## Training
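The `quant_storage` parameter documented in the patch above can also be set directly on the bitsandbytes layer, as in the example the patch replaces. Below is a minimal sketch of that lower-level path; the layer sizes and the bfloat16 choice are illustrative assumptions, not part of the patch.

```py
import torch
import bitsandbytes as bnb

# Store the packed 4-bit weights in a float dtype (bfloat16 here) instead of the
# default torch.uint8, so FSDP can shard them like any other float parameter.
# The in/out feature sizes are arbitrary; the weights are actually quantized
# when the layer is moved to a CUDA device.
layer = bnb.nn.Linear4bit(
    1024,
    4096,
    quant_type="nf4",
    quant_storage=torch.bfloat16,
)
```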
From 2b7daed3db413824f47a0fdfcc0e5353669eebd0 Mon Sep 17 00:00:00 2001
From: Steven Liu
Date: Tue, 14 May 2024 10:55:35 -0700
Subject: [PATCH 2/3] clarify

---
 docs/source/fsdp_qlora.md | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/docs/source/fsdp_qlora.md b/docs/source/fsdp_qlora.md
index 648e79092..c623fa424 100644
--- a/docs/source/fsdp_qlora.md
+++ b/docs/source/fsdp_qlora.md
@@ -11,7 +11,10 @@ This guide provides a brief guide on how bitsandbytes supports storing quantized

 FSDP only supports sharding float data types which can be problematic because quantized weights are typically stored as integer data types (uint8). bitsandbytes doesn't have this problem because it uses `StoreChar` to read and write quantized weights regardless of the data type storage. This makes it simple to add a `quant_storage` parameter to the [`~nn.Linear4bit`] and [`~nn.Params4bit`] classes and set it to `torch.uint8` to maintain backward compatibility with the codebase. With the `quant_storage` parameter, you can select any of the FSDP supported data types to shard [`~nn.Linear4bit`] with such as bfloat16, float16 or float32.

-For example, you can configure this option in [`transformers.BitsAndBytesConfig`] by setting the `bnb_4bit_quant_storage` parameter.
+You'll typically access and configure this option from [`transformers.BitsAndBytesConfig`] by setting the `bnb_4bit_quant_storage` parameter. It is very **important** the `quant_storage` data type matches the data types used throughout the model because FSDP can only wrap layers and modules that have the *same floating data type*. Making sure the data types are aligned will ensure the model is correctly sharded.
+
+> [!TIP]
+> The `compute_dtype` is the data type used for computation inside the CUDA kernel, where the 4-bit quantized weights are unpacked from the data type in `quant_storage` and dequantized to `compute_dtype`. We recommend using torch.bfloat16 (if available on your hardware) for better numerical stability.

 ```py
 from transformers import BitsAndBytesConfig, AutoModelForCausalLM
@@ -30,9 +33,11 @@ model = AutoModelForCausalLM.from_pretrained(
 )
 ```

+Check out this [section](https://hf.co/docs/peft/main/en/accelerate/fsdp#use-peft-qlora-and-fsdp-for-finetuning-large-models-on-multiple-gpus) of the PEFT documentation for the config file and training code to run FSDP-QLoRA training.
+
 ## Training

-bitsandbytes is deeply integrated with the Hugging Face ecosystem, making it easy to use with libraries like [Transformers](https://hf/co/docs/transformers), [PEFT](https://hf/co/docs/peft), and [TRL](https://hf/co/docs/trl).
+bitsandbytes is deeply integrated with the Hugging Face ecosystem, making it easy to use with libraries like [Transformers](https://hf.co/docs/transformers), [PEFT](https://hf.co/docs/peft), and [TRL](https://hf.co/docs/trl).

 Before you begin, make sure you have the latest libraries installed.
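To make the dtype-alignment requirement from the patch above concrete, here is a small, hypothetical helper (not part of the patch) that you could run on a model loaded with the `BitsAndBytesConfig` shown in these patches; a single floating dtype in the result is what lets FSDP wrap and flat-shard the quantized layers together with the rest of the model.

```py
import torch
from torch import nn


def collect_param_dtypes(model: nn.Module) -> set[torch.dtype]:
    """Return the set of parameter storage dtypes in a model.

    When `bnb_4bit_quant_storage`, `bnb_4bit_compute_dtype`, and `torch_dtype`
    are all aligned (for example to torch.bfloat16), this set should contain a
    single floating dtype, so FSDP sees uniformly typed parameters to shard.
    """
    # e.g. print(collect_param_dtypes(model))  # expect {torch.bfloat16} with aligned dtypes
    return {param.dtype for param in model.parameters()}
```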
From d7a5a244fb3b277284788a9b54deebf2ab8d2a75 Mon Sep 17 00:00:00 2001
From: Steven Liu
Date: Thu, 16 May 2024 12:52:11 -0700
Subject: [PATCH 3/3] feedback

---
 docs/source/fsdp_qlora.md | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/docs/source/fsdp_qlora.md b/docs/source/fsdp_qlora.md
index c623fa424..11e169ffb 100644
--- a/docs/source/fsdp_qlora.md
+++ b/docs/source/fsdp_qlora.md
@@ -37,17 +37,19 @@ Check out this [section](https://hf.co/docs/peft/main/en/accelerate/fsdp#use-pef

 ## Training

+> [!TIP]
+> FSDP is a distributed training framework that needs to be launched as a distributed training job with a library like [Accelerate](https://hf.co/docs/accelerate/index) or [torchrun](https://pytorch.org/docs/stable/elastic/run.html). The launch command provided in this section uses Accelerate to launch the training script.
+
 bitsandbytes is deeply integrated with the Hugging Face ecosystem, making it easy to use with libraries like [Transformers](https://hf.co/docs/transformers), [PEFT](https://hf.co/docs/peft), and [TRL](https://hf.co/docs/trl).

+PEFT provides a configuration file ([fsdp_config_qlora.yaml](https://github.com/huggingface/peft/blob/main/examples/sft/configs/fsdp_config_qlora.yaml)), launch command ([run_peft_qlora_fsdp.sh](https://github.com/huggingface/peft/blob/main/examples/sft/run_peft_qlora_fsdp.sh)), and training script ([train.py](https://github.com/huggingface/peft/blob/main/examples/sft/train.py)) for running FSDP-QLoRA. To learn more, check out the [Use PEFT QLoRA and FSDP for finetuning large models on multiple GPUs](https://huggingface.co/docs/peft/main/en/accelerate/fsdp#use-peft-qlora-and-fsdp-for-finetuning-large-models-on-multiple-gpus) documentation. This section briefly covers the steps to run FSDP-QLoRA training.
+
 Before you begin, make sure you have the latest libraries installed.

 ```bash
 pip install -U bitsandbytes accelerate transformers peft trl
 ```

-> [!TIP]
-> PEFT provides a configuration file ([fsdp_config_qlora.yaml](https://github.com/huggingface/peft/blob/main/examples/sft/configs/fsdp_config_qlora.yaml)), launch command ([run_peft_qlora_fsdp.sh](https://github.com/huggingface/peft/blob/main/examples/sft/run_peft_qlora_fsdp.sh)), and training script ([train.py](https://github.com/huggingface/peft/blob/main/examples/sft/train.py)) for FSDP-QLoRA. To learn more, check out the [Use PEFT QLoRA and FSDP for finetuning large models on multiple GPUs](https://huggingface.co/docs/peft/main/en/accelerate/fsdp#use-peft-qlora-and-fsdp-for-finetuning-large-models-on-multiple-gpus) documentation.
-
 The important change that enables FSDP-QLoRA training is the `bnb_4bit_quant_storage` parameter in the [`~transformers.BitsAndBytesConfig`] class. This allows you to set the storage data type of the quantized weights to a float data type.

 ```py
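# Illustrative sketch of the `bnb_4bit_quant_storage` configuration described in the
# line above, mirroring the example from PATCH 1/3. The checkpoint name and dtype
# choices are assumptions, and `import torch` is assumed so the dtype references resolve.
import torch
from transformers import BitsAndBytesConfig, AutoModelForCausalLM

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # dtype the 4-bit weights are dequantized to for compute
    bnb_4bit_quant_storage=torch.bfloat16,  # float storage dtype so FSDP can shard the weights
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-70b",
    quantization_config=bnb_config,
    torch_dtype=torch.bfloat16,
)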