Commit
* add new model like
* add state dict slicing + new model config
* update palma config and weights, passes vision activations
* fix
* update
* reorder loading/unpacking
* clean up
* add debug statements
* change device
* fix
* debugging
* fix noncausal mask
* fixup sdpa + causal mask
* fix activation function
* remove debug before changing modeling file
* add variants
* debug attention mask in generate
* revert to non-debug sdpa
* revert gemma modifications
* add custom language modeling
* use Processor
* add language modeling file to init
* try thin wrapper around generate
* Update
* update mask
* breakpoints galore
* remove conflict
* switch to left-padding
* add incomplete model doc
* add paligemma global files
* batch rename paligemma
* make generation match outputs and captioning
* style
* style
* remove copied from + doc
* remove more copied from
* remove copy from projector
* minor fix
* update config and style
* add readme - dummy
* CORRECT image captioning
* moving to args
* add siglip proper + fix merging image + text features
* take update_causal_mask from upstream
* remove breakpoint
* leverage AutoModel
* fix input_ids slicing
* make siglip head conditional
* remove encoder_decoder value
* remove unneeded modeling file
* add commented 4d attention mask
* FIXED generation with 4D mask
* Update src/transformers/models/siglip/modeling_siglip.py
  Co-authored-by: Arthur <[email protected]>
* fix left padding detection
* shuffle order of verifications
* fix missing labels for training
* fix
* vectorize merging of features, improve slicing
* improve testing before conversion
* handle merging in processor
* image token index depends on checkpoint
* add variants, save processor too
* save processors, base tokenizer off spm file
* expand model embeddings due to additional image token
* pass image processing args
* add convert rgb to siglip processor
* add \n token separately
* fix tokenizer and prompts
* fix docstrings
* change to camel
* fix casing
* debug pos_ids and sdpa
* pass and use cache_position
* add flag for newline tokenization
* Update src/transformers/models/paligemma/processing_paligemma.py
  Co-authored-by: Merve Noyan <[email protected]>
* simplify conversion script
* add copied from
* add precision to conversion script
* Update src/transformers/models/paligemma/modeling_paligemma.py
  Co-authored-by: Pedro Cuenca <[email protected]>
* clean up
* Shift attention mask from `1:`
  After discussion with @molbap
* add docs, fix quality
* quality, tied weights inheritance, and logits/label alignment
* fix more tests
* pass attn_implementation to language model correctly
* add SiglipVisionTransformer to no split modules
* skip paligemma test for sdpa dispatch to flash
* skip incompatible tests
* quality
* [broken archive maps]
* Apply suggestions
  - remove archive lists
  - style
  - take shape of inputs_embeds for batch
  Co-authored-by: Arthur <[email protected]>
* Update src/transformers/utils/dummy_pt_objects.py
  Co-authored-by: Arthur <[email protected]>
* simplify conversion script
* add suggestions
* add suggestions
* add copied from
* fix
* move labels out
* revert
* fix
* remove placeholder labels if None
* use cache_position
* fix quality + docstrings
* fix quality
* fix paligemma 4d gemma mask incompatibility
* fix config docstring
* fix query and attn_mask dtype

---------

Co-authored-by: ArthurZucker <[email protected]>
Co-authored-by: Arthur <[email protected]>
Co-authored-by: Merve Noyan <[email protected]>
Co-authored-by: Pedro Cuenca <[email protected]>
1 parent c96aca3 · commit 1360801
Showing 23 changed files with 1,890 additions and 8 deletions.
@@ -0,0 +1,38 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->
# PaliGemma

## Overview

The PaliGemma model was proposed by Google. It is a 3B vision-language model composed of a SigLIP-400M vision encoder and a Gemma-2B decoder, linked by a multimodal linear projection. It is not a chat model for images: it cuts an image into a fixed number of ViT tokens and prepends them to an optional text prompt. One particularity is that the model uses full block attention over all the image tokens plus the input text tokens. It comes in three resolutions (224x224, 448x448, and 896x896), with 3 base models, 55 fine-tuned versions for different tasks, and 2 mix models.
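A short sketch of the intended captioning usage follows; the checkpoint id and image URL are illustrative assumptions, not artifacts of this commit:

```python
import requests
from PIL import Image

from transformers import PaliGemmaForConditionalGeneration, PaliGemmaProcessor

model_id = "google/paligemma-3b-mix-224"  # hypothetical checkpoint id, for illustration
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id)
processor = PaliGemmaProcessor.from_pretrained(model_id)

# The processor prepends the image tokens to the (optional) text prompt.
url = "https://example.com/cat.png"  # placeholder image URL
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text="caption en", images=image, return_tensors="pt")

generated = model.generate(**inputs, max_new_tokens=20)
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```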

This model was contributed by [Molbap](https://huggingface.co/Molbap).

## PaliGemmaConfig

[[autodoc]] PaliGemmaConfig

## PaliGemmaProcessor

[[autodoc]] PaliGemmaProcessor

## PaliGemmaForConditionalGeneration

[[autodoc]] PaliGemmaForConditionalGeneration
    - forward
@@ -173,6 +173,7 @@
     opt,
     owlv2,
     owlvit,
+    paligemma,
     patchtsmixer,
     patchtst,
     pegasus,
@@ -0,0 +1,54 @@
# Copyright 2024 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import TYPE_CHECKING

from ...utils import OptionalDependencyNotAvailable, _LazyModule, is_torch_available


_import_structure = {"configuration_paligemma": ["PaliGemmaConfig"]}


try:
    if not is_torch_available():
        raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
    pass
else:
    _import_structure["modeling_paligemma"] = [
        "PaliGemmaForConditionalGeneration",
        "PaliGemmaPreTrainedModel",
    ]
    _import_structure["processing_paligemma"] = ["PaliGemmaProcessor"]


if TYPE_CHECKING:
    from .configuration_paligemma import PaliGemmaConfig

    try:
        if not is_torch_available():
            raise OptionalDependencyNotAvailable()
    except OptionalDependencyNotAvailable:
        pass
    else:
        from .modeling_paligemma import (
            PaliGemmaForConditionalGeneration,
            PaliGemmaPreTrainedModel,
        )
        from .processing_paligemma import PaliGemmaProcessor

else:
    import sys

    sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure)
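With this `_LazyModule` registration, the public classes resolve on first access instead of at package-import time, and torch is only required once the modeling or processing classes are actually used. A minimal sketch of the resulting import surface (assuming the rest of this commit wires these names into the top-level package, as the changed-files list suggests):

```python
# Resolved lazily through _LazyModule; no heavy imports happen until access.
from transformers import (
    PaliGemmaConfig,
    PaliGemmaForConditionalGeneration,
    PaliGemmaProcessor,
)
```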
src/transformers/models/paligemma/configuration_paligemma.py (130 additions, 0 deletions)
@@ -0,0 +1,130 @@
# coding=utf-8
# Copyright 2024 Microsoft Research & University of Wisconsin-Madison and the HuggingFace Inc. team. All rights reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""PaliGemma model configuration"""

from ...configuration_utils import PretrainedConfig
from ...utils import logging
from ..auto import CONFIG_MAPPING


logger = logging.get_logger(__name__)


class PaliGemmaConfig(PretrainedConfig):
    r"""
    This is the configuration class to store the configuration of a [`PaliGemmaForConditionalGeneration`]. It is used to instantiate a
    PaliGemma model according to the specified arguments, defining the model architecture. Instantiating a configuration
    with the defaults will yield a similar configuration to that of the PaliGemma-2B.

    e.g. [paligemma-hf/paligemma-2b](https://huggingface.co/paligemma-hf/paligemma-2b)

    Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
    documentation from [`PretrainedConfig`] for more information.

    Args:
        vision_config (`PaliGemmaVisionConfig`, *optional*):
            Custom vision config or dict.
        text_config (`Union[AutoConfig, dict]`, *optional*):
            The config object of the text backbone, typically a `GemmaConfig`.
        ignore_index (`int`, *optional*, defaults to -100):
            The ignore index for the loss function.
        image_token_index (`int`, *optional*, defaults to 256000):
            The image token index used to encode the image prompt.
        vocab_size (`int`, *optional*, defaults to 257152):
            Vocabulary size of the PaliGemma model. Defines the number of different tokens that can be represented by
            the `input_ids` passed when calling [`~PaliGemmaForConditionalGeneration`].
        projection_dim (`int`, *optional*, defaults to 2048):
            Dimension of the multimodal projection space.
        hidden_size (`int`, *optional*, defaults to 2048):
            Dimension of the hidden layer of the language model.

    Example:

    ```python
    >>> from transformers import PaliGemmaForConditionalGeneration, PaliGemmaConfig, SiglipVisionConfig, GemmaConfig

    >>> # Initializing a Siglip-like vision config
    >>> vision_config = SiglipVisionConfig()

    >>> # Initializing a Gemma text config
    >>> text_config = GemmaConfig()

    >>> # Initializing a PaliGemma paligemma-3b-224 style configuration
    >>> configuration = PaliGemmaConfig(vision_config, text_config)

    >>> # Initializing a model from the paligemma-3b-224 style configuration
    >>> model = PaliGemmaForConditionalGeneration(configuration)

    >>> # Accessing the model configuration
    >>> configuration = model.config
    ```"""

    model_type = "paligemma"
    is_composition = False

    def __init__(
        self,
        vision_config=None,
        text_config=None,
        ignore_index=-100,
        image_token_index=256000,
        vocab_size=257152,
        projection_dim=2048,
        hidden_size=2048,
        **kwargs,
    ):
        self.ignore_index = ignore_index
        self.image_token_index = image_token_index
        self.vocab_size = vocab_size
        self.projection_dim = projection_dim
        self.hidden_size = hidden_size
        self.vision_config = vision_config
        self.is_encoder_decoder = False

        if isinstance(self.vision_config, dict):
            vision_config["model_type"] = (
                vision_config["model_type"] if "model_type" in vision_config else "siglip_vision_model"
            )
            self.vision_config = CONFIG_MAPPING[vision_config["model_type"]](**vision_config)
        elif vision_config is None:
            # Default vision tower: SigLIP-so400m-style settings at 224x224 resolution.
            self.vision_config = CONFIG_MAPPING["siglip_vision_model"](
                intermediate_size=4096,
                hidden_size=1152,
                patch_size=14,
                image_size=224,
                num_hidden_layers=27,
                num_attention_heads=16,
                vocab_size=257152,
                vision_use_head=False,
            )

        self.text_config = text_config

        if isinstance(self.text_config, dict):
            text_config["model_type"] = text_config["model_type"] if "model_type" in text_config else "gemma"
            self.text_config = CONFIG_MAPPING[text_config["model_type"]](**text_config)
            self.vocab_size = self.text_config.vocab_size
        elif text_config is None:
            # Default text backbone: Gemma-2B-style settings.
            self.text_config = CONFIG_MAPPING["gemma"](
                hidden_size=2048,
                num_hidden_layers=18,
                intermediate_size=16384,
                num_attention_heads=8,
                num_key_value_heads=1,
                is_encoder_decoder=False,
            )

        # One image contributes (image_size // patch_size) ** 2 tokens to the language model.
        self.text_config.num_image_tokens = (self.vision_config.image_size // self.vision_config.patch_size) ** 2
        self.vision_config.projection_dim = projection_dim
        super().__init__(**kwargs)
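As a quick check of the geometry wired up above: with the default SigLIP tower (`image_size=224`, `patch_size=14`), the text backbone is configured for (224 // 14) ** 2 = 256 image tokens per image. A minimal sketch against the committed defaults:

```python
from transformers import PaliGemmaConfig

config = PaliGemmaConfig()  # default SigLIP vision tower + Gemma-2B-style text backbone

print(config.text_config.num_image_tokens)  # 256, i.e. (224 // 14) ** 2
print(config.vision_config.projection_dim)  # 2048, propagated from projection_dim
print(config.vocab_size)                    # 257152
```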