Added description of quantization_config (huggingface#31133)
* Description of quantization_config

Added missing description about quantization_config in replace_with_bnb_linear for better readability.

* Removed trailing spaces
vamsivallepu authored and zucchini-nlp committed Jun 11, 2024
1 parent 9c04769 commit 956dd46
Showing 1 changed file with 4 additions and 0 deletions.
4 changes: 4 additions & 0 deletions src/transformers/integrations/bitsandbytes.py
Original file line number Diff line number Diff line change
Expand Up @@ -243,6 +243,10 @@ def replace_with_bnb_linear(model, modules_to_not_convert=None, current_key_name
An array to track the current key of the recursion. This is used to check whether the current key (part of
it) is not in the list of modules to not convert (for instance, modules that are offloaded to `cpu` or
`disk`).
quantization_config (`transformers.utils.quantization_config.BitsAndBytesConfig`):
The quantization configuration that controls how quantization is applied. Quantization is a technique that compresses neural network models by reducing the precision of weights and activations, making models more efficient in both storage and computation.
"""
modules_to_not_convert = ["lm_head"] if modules_to_not_convert is None else modules_to_not_convert
model, has_been_replaced = _replace_with_bnb_linear(
Expand Down
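The recursion that `replace_with_bnb_linear` performs can be sketched in plain Python: walk the module tree, swap linear leaves for a quantized stand-in, and skip any module whose dotted key matches an entry in `modules_to_not_convert`. The `Module`, `Linear`, and `QuantLinear` classes below are hypothetical stand-ins for illustration only, not the real `torch.nn` or `bitsandbytes` APIs.

```python
class Linear:
    """Full-precision layer (stand-in for torch.nn.Linear)."""

class QuantLinear:
    """Quantized replacement (stand-in for bnb.nn.Linear8bitLt / Linear4bit)."""

class Module:
    """Minimal container with named children (stand-in for torch.nn.Module)."""
    def __init__(self, **children):
        self.children = children

def replace_linear(module, modules_to_not_convert=("lm_head",), current_key_name=None):
    # current_key_name tracks the dotted path of the recursion, mirroring the
    # role of the same-named argument in replace_with_bnb_linear.
    current_key_name = current_key_name or []
    for name, child in list(module.children.items()):
        current_key_name.append(name)
        key = ".".join(current_key_name)
        if isinstance(child, Linear):
            # Skip modules the caller asked to keep in full precision
            # (e.g. the lm_head, or modules offloaded to cpu/disk).
            if not any(skip in key for skip in modules_to_not_convert):
                module.children[name] = QuantLinear()
        elif isinstance(child, Module):
            replace_linear(child, modules_to_not_convert, current_key_name)
        current_key_name.pop()
    return module

model = Module(
    encoder=Module(attn=Linear(), mlp=Linear()),
    lm_head=Linear(),
)
replace_linear(model)
# encoder.attn and encoder.mlp are now QuantLinear; lm_head stays full precision
```

In the real function, the `quantization_config` argument (a `BitsAndBytesConfig`) decides which quantized layer type replaces each linear layer and with what settings; this sketch hard-codes a single replacement class to keep the recursion pattern visible.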
