Merge branch 'huggingface:main' into rwkv6
uniartisan authored Dec 30, 2024
2 parents f4c9c4e + 5cabc75 commit f6749e5
Showing 29 changed files with 2,313 additions and 265 deletions.
253 changes: 0 additions & 253 deletions .github/workflows/self-comment-ci.yml

This file was deleted.

2 changes: 2 additions & 0 deletions docs/source/en/_toctree.yml
@@ -655,6 +655,8 @@
title: DiNAT
- local: model_doc/dinov2
title: DINOV2
- local: model_doc/dinov2_with_registers
title: DINOv2 with Registers
- local: model_doc/dit
title: DiT
- local: model_doc/dpt
1 change: 1 addition & 0 deletions docs/source/en/index.md
@@ -127,6 +127,7 @@ Flax), PyTorch, and/or TensorFlow.
| [DialoGPT](model_doc/dialogpt) ||||
| [DiNAT](model_doc/dinat) ||||
| [DINOv2](model_doc/dinov2) ||||
| [DINOv2 with Registers](model_doc/dinov2_with_registers) ||||
| [DistilBERT](model_doc/distilbert) ||||
| [DiT](model_doc/dit) ||||
| [DonutSwin](model_doc/donut) ||||
54 changes: 54 additions & 0 deletions docs/source/en/model_doc/dinov2_with_registers.md
@@ -0,0 +1,54 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# DINOv2 with Registers

## Overview

The DINOv2 with Registers model was proposed in [Vision Transformers Need Registers](https://arxiv.org/abs/2309.16588) by Timothée Darcet, Maxime Oquab, Julien Mairal, Piotr Bojanowski.

The [Vision Transformer](vit) (ViT) is a transformer encoder model (BERT-like) originally introduced for supervised image classification on ImageNet.

Follow-up works such as [DINOv2](dinov2) and [MAE](vit_mae) then showed that ViTs can also be trained with self-supervision to learn meaningful image features (also called embeddings) without requiring any labels.

The authors of DINOv2 noticed that ViTs exhibit artifacts in their attention maps, caused by the model repurposing some image patch tokens as internal "registers". The proposed fix is simple: add a few extra learnable tokens (called "register" tokens) to the input sequence, let the model use those for its internal computations, and discard their outputs afterwards (a schematic sketch of the token layout follows this list). This results in:
- no artifacts
- interpretable attention maps
- improved performance.
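
Schematically, the registers are just extra learnable tokens concatenated between the CLS token and the patch tokens. The sketch below only illustrates the shapes involved; the values and the register count are illustrative, not taken from the model's actual code:

```py
import torch

batch_size, num_patches, hidden_size = 1, 256, 768
num_registers = 4  # illustrative register count

cls_token = torch.zeros(batch_size, 1, hidden_size)
registers = torch.zeros(batch_size, num_registers, hidden_size)
patches = torch.randn(batch_size, num_patches, hidden_size)

# registers sit between the CLS token and the patch tokens;
# the encoder attends over all of them, and the register outputs are discarded
sequence = torch.cat([cls_token, registers, patches], dim=1)
print(sequence.shape)  # torch.Size([1, 261, 768])
```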

The abstract from the paper is the following:

*Transformers have recently emerged as a powerful tool for learning visual representations. In this paper, we identify and characterize artifacts in feature maps of both supervised and self-supervised ViT networks. The artifacts correspond to high-norm tokens appearing during inference primarily in low-informative background areas of images, that are repurposed for internal computations. We propose a simple yet effective solution based on providing additional tokens to the input sequence of the Vision Transformer to fill that role. We show that this solution fixes that problem entirely for both supervised and self-supervised models, sets a new state of the art for self-supervised visual models on dense visual prediction tasks, enables object discovery methods with larger models, and most importantly leads to smoother feature maps and attention maps for downstream visual processing.*

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/dinov2_with_registers_visualization.png"
alt="drawing" width="600"/>

<small> Visualization of attention maps of various models trained with vs. without registers. Taken from the <a href="https://arxiv.org/abs/2309.16588">original paper</a>. </small>

Tips:

- Usage of DINOv2 with Registers is identical to plain [DINOv2](dinov2); you just get cleaner attention maps and better performance. See the sketch below.
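
Below is a minimal feature-extraction sketch. The checkpoint name is an assumption; check the Hub for the checkpoints actually released with this model:

```py
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Dinov2WithRegistersModel

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

checkpoint = "facebook/dinov2-with-registers-base"  # illustrative checkpoint id
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = Dinov2WithRegistersModel.from_pretrained(checkpoint)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# the sequence contains the CLS token, the register tokens and the patch tokens
print(outputs.last_hidden_state.shape)
```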

This model was contributed by [nielsr](https://huggingface.co/nielsr).
The original code can be found [here](https://github.com/facebookresearch/dinov2).


## Dinov2WithRegistersConfig

[[autodoc]] Dinov2WithRegistersConfig
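
Following the usual `transformers` pattern, a randomly initialized model can be built from a configuration; a short sketch (default field values assumed):

```py
from transformers import Dinov2WithRegistersConfig, Dinov2WithRegistersModel

config = Dinov2WithRegistersConfig()  # defaults; override fields as needed
model = Dinov2WithRegistersModel(config)  # randomly initialized weights
```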

## Dinov2WithRegistersModel

[[autodoc]] Dinov2WithRegistersModel
- forward

## Dinov2WithRegistersForImageClassification

[[autodoc]] Dinov2WithRegistersForImageClassification
- forward
3 changes: 2 additions & 1 deletion docs/source/en/perf_infer_gpu_one.md
@@ -238,6 +238,7 @@ For now, Transformers supports SDPA inference and training for the following architectures:
* [Dbrx](https://huggingface.co/docs/transformers/model_doc/dbrx#transformers.DbrxModel)
* [DeiT](https://huggingface.co/docs/transformers/model_doc/deit#transformers.DeiTModel)
* [Dinov2](https://huggingface.co/docs/transformers/en/model_doc/dinov2)
* [Dinov2_with_registers](https://huggingface.co/docs/transformers/en/model_doc/dinov2_with_registers)
* [DistilBert](https://huggingface.co/docs/transformers/model_doc/distilbert#transformers.DistilBertModel)
* [Dpr](https://huggingface.co/docs/transformers/model_doc/dpr#transformers.DprReader)
* [EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder_decoder#transformers.EncoderDecoderModel)
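
As elsewhere in `transformers`, SDPA can also be requested explicitly at load time. A sketch, where the checkpoint id is an assumption:

```py
import torch
from transformers import Dinov2WithRegistersModel

model = Dinov2WithRegistersModel.from_pretrained(
    "facebook/dinov2-with-registers-base",  # illustrative checkpoint id
    attn_implementation="sdpa",  # request PyTorch's scaled_dot_product_attention
    torch_dtype=torch.float16,
)
```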
@@ -461,7 +462,7 @@ generated_ids = model.generate(**inputs)
outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
```

- To load a model in 4-bit for inference with multiple GPUs, you can control how much GPU RAM you want to allocate to each GPU. For example, to distribute 1GB of memory to the first GPU and 2GB of memory to the second GPU:
+ To load a model in 8-bit for inference with multiple GPUs, you can control how much GPU RAM you want to allocate to each GPU. For example, to distribute 1GB of memory to the first GPU and 2GB of memory to the second GPU:

```py
max_memory_mapping = {0: "1GB", 1: "2GB"}
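# a hedged sketch of passing the mapping; the model id is illustrative
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model_8bit = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-3b",
    device_map="auto",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    max_memory=max_memory_mapping,
)
```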
16 changes: 16 additions & 0 deletions src/transformers/__init__.py
@@ -404,6 +404,7 @@
"models.dialogpt": [],
"models.dinat": ["DinatConfig"],
"models.dinov2": ["Dinov2Config"],
"models.dinov2_with_registers": ["Dinov2WithRegistersConfig"],
"models.distilbert": [
"DistilBertConfig",
"DistilBertTokenizer",
@@ -2161,6 +2162,14 @@
"Dinov2PreTrainedModel",
]
)
_import_structure["models.dinov2_with_registers"].extend(
[
"Dinov2WithRegistersBackbone",
"Dinov2WithRegistersForImageClassification",
"Dinov2WithRegistersModel",
"Dinov2WithRegistersPreTrainedModel",
]
)
_import_structure["models.distilbert"].extend(
[
"DistilBertForMaskedLM",
@@ -5371,6 +5380,7 @@
from .models.detr import DetrConfig
from .models.dinat import DinatConfig
from .models.dinov2 import Dinov2Config
from .models.dinov2_with_registers import Dinov2WithRegistersConfig
from .models.distilbert import (
DistilBertConfig,
DistilBertTokenizer,
@@ -7030,6 +7040,12 @@
Dinov2Model,
Dinov2PreTrainedModel,
)
from .models.dinov2_with_registers import (
Dinov2WithRegistersBackbone,
Dinov2WithRegistersForImageClassification,
Dinov2WithRegistersModel,
Dinov2WithRegistersPreTrainedModel,
)
from .models.distilbert import (
DistilBertForMaskedLM,
DistilBertForMultipleChoice,
1 change: 1 addition & 0 deletions src/transformers/models/__init__.py
@@ -77,6 +77,7 @@
dialogpt,
dinat,
dinov2,
dinov2_with_registers,
distilbert,
dit,
donut,