Mask Generation Task Guide #28897

Merged · 28 commits · Feb 14, 2024
Changes from 3 commits
- f4038ab · Create mask_generation.md (merveenoyan, Feb 6, 2024)
- f53df34 · add h1 (merveenoyan, Feb 6, 2024)
- fd08eac · add to toctree (merveenoyan, Feb 6, 2024)
- 400cf83 · Update docs/source/en/tasks/mask_generation.md (merveenoyan, Feb 6, 2024)
- ce4cb70 · Update docs/source/en/tasks/mask_generation.md (merveenoyan, Feb 6, 2024)
- e51419c · Update docs/source/en/tasks/mask_generation.md (merveenoyan, Feb 6, 2024)
- 5398634 · Update docs/source/en/tasks/mask_generation.md (merveenoyan, Feb 6, 2024)
- cf0ae7f · Update docs/source/en/tasks/mask_generation.md (merveenoyan, Feb 6, 2024)
- 67119e0 · Update mask_generation.md (merveenoyan, Feb 6, 2024)
- d04c329 · Update docs/source/en/tasks/mask_generation.md (merveenoyan, Feb 6, 2024)
- 7701add · Update docs/source/en/tasks/mask_generation.md (merveenoyan, Feb 6, 2024)
- 5ad8826 · Update docs/source/en/tasks/mask_generation.md (merveenoyan, Feb 6, 2024)
- dafb06e · Update docs/source/en/tasks/mask_generation.md (merveenoyan, Feb 6, 2024)
- a3bf800 · Update docs/source/en/tasks/mask_generation.md (merveenoyan, Feb 6, 2024)
- 268f07d · Update docs/source/en/tasks/mask_generation.md (merveenoyan, Feb 6, 2024)
- da0ceae · Update docs/source/en/tasks/mask_generation.md (merveenoyan, Feb 6, 2024)
- 25371f0 · Update docs/source/en/tasks/mask_generation.md (merveenoyan, Feb 6, 2024)
- 40700ee · Update docs/source/en/tasks/mask_generation.md (merveenoyan, Feb 6, 2024)
- 08766d9 · Update docs/source/en/tasks/mask_generation.md (merveenoyan, Feb 6, 2024)
- 6d50404 · Update mask_generation.md (merveenoyan, Feb 6, 2024)
- e34c085 · Update docs/source/en/tasks/mask_generation.md (merveenoyan, Feb 12, 2024)
- 32d4b33 · Update docs/source/en/tasks/mask_generation.md (merveenoyan, Feb 12, 2024)
- 714e6b2 · Update docs/source/en/tasks/mask_generation.md (merveenoyan, Feb 12, 2024)
- 62820d7 · Update docs/source/en/tasks/mask_generation.md (merveenoyan, Feb 12, 2024)
- a63e64c · Update docs/source/en/tasks/mask_generation.md (merveenoyan, Feb 14, 2024)
- a21ad56 · Update docs/source/en/tasks/mask_generation.md (merveenoyan, Feb 14, 2024)
- 1eb7acb · Update mask_generation.md (merveenoyan, Feb 14, 2024)
- 9379452 · Update mask_generation.md (merveenoyan, Feb 14, 2024)
2 changes: 2 additions & 0 deletions · docs/source/en/_toctree.yml
@@ -73,6 +73,8 @@
       title: Depth estimation
     - local: tasks/image_to_image
       title: Image-to-Image
+    - local: tasks/mask_generation
+      title: Mask Generation
     - local: tasks/knowledge_distillation_for_image_classification
       title: Knowledge Distillation for Computer Vision
   title: Computer Vision
247 changes: 247 additions & 0 deletions · docs/source/en/tasks/mask_generation.md
@@ -0,0 +1,247 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Mask Generation

Mask generation is the task of generating semantically meaningful masks for an image.
This task is very similar to image segmentation, but many differences exist. Image segmentation models are trained
on labeled datasets and are limited to the classes they have seen during training; they return a mask and its class,
given an image.

Mask generation models are trained on large amounts of data. The data is labeled with masks (collected manually, semi-automatically, and fully automatically), but unlike fully supervised image segmentation, these labels do not carry class information. The models operate in two modes:

- Prompting mode: In this mode, the model takes in an image and a prompt, where a prompt can be a 2D point location (the XY coordinates of a pixel) within an object in the image, or a bounding box surrounding an object. In prompting mode, the model only returns the mask over the object that the prompt is pointing out.
- Segment everything mode: In segment everything, given an image, the model generates every mask in the image. To do so, a grid of points is generated and overlaid on the image for inference.
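
To make segment everything mode concrete, here is a minimal sketch of how such a point grid can be built. Note that `build_point_grid` below is our own illustrative helper, not a `transformers` API; the pipeline handles this internally.

```python
import numpy as np

def build_point_grid(n_per_side):
    """Build a uniform n x n grid of normalized (x, y) point prompts in [0, 1]."""
    offset = 1 / (2 * n_per_side)
    coords = np.linspace(offset, 1 - offset, n_per_side)
    xs, ys = np.meshgrid(coords, coords)
    # One (x, y) prompt per grid cell center, shape (n_per_side**2, 2)
    return np.stack([xs.ravel(), ys.ravel()], axis=-1)

grid = build_point_grid(32)  # 32 points per side -> 1024 point prompts
```

Each of these points is then used as a prompt, and the resulting masks are filtered for quality and overlap.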

The mask generation task is supported by the Segment Anything Model (SAM). It's a powerful model that consists of a Vision Transformer-based image encoder, a prompt encoder, and a mask decoder. Images and prompts are encoded, and the decoder takes these embeddings and generates valid masks.

<div class="flex justify-center">
 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/sam.png" alt="SAM Architecture"/>
</div>
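
In `transformers`, these three components are exposed as attributes of the loaded model, so you can inspect them directly (a quick check; the attribute names below come from the `SamModel` implementation):

```python
from transformers import SamModel

model = SamModel.from_pretrained("facebook/sam-vit-base")
print(type(model.vision_encoder).__name__)  # the ViT-based image encoder
print(type(model.prompt_encoder).__name__)  # encodes point and box prompts
print(type(model.mask_decoder).__name__)    # turns embeddings into masks
```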

SAM is very powerful and serves as a foundation model for segmentation, as it has large data coverage. It is trained on
[SA-1B](https://ai.meta.com/datasets/segment-anything/), a dataset with 11 million images and 1.1 billion masks.

In this guide, we will learn how to:
- Infer in segment everything mode,
- Infer in point prompting mode,
- Infer in box prompting mode,
- Batch prompts.

First, let's install `transformers`:

```bash
pip install -q transformers
```

## Mask Generation Pipeline

The easiest way to infer with mask generation models is to use the `mask-generation` pipeline.

```python
>>> from transformers import pipeline

>>> checkpoint = "facebook/sam-vit-base"
>>> mask_generator = pipeline(model=checkpoint, task="mask-generation")
```
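
If you have a GPU available, you can run the pipeline on it through the standard `device` argument of `pipeline`:

```python
>>> mask_generator = pipeline(model=checkpoint, task="mask-generation", device=0)  # 0 is the first CUDA device
```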

Let's see the image.

```python
from PIL import Image
import requests

img_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg"
image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")
```

<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg" alt="Example Image"/>
</div>

Let's segment everything. `points_per_batch` enables parallel inference over points in
segment everything mode. This results in faster inference, but consumes more memory.
Moreover, SAM only enables batching over points and not over images. `pred_iou_thresh` is
the IoU confidence threshold: only masks above that threshold are returned.

```python
masks = mask_generator(image, points_per_batch=128, pred_iou_thresh=0.88)
```

The `masks` output looks like the following:

```bash
{'masks': [array([[False, False, False, ...,  True,  True,  True],
        [False, False, False, ...,  True,  True,  True],
        [False, False, False, ...,  True,  True,  True],
        ...,
        [False, False, False, ..., False, False, False],
        [False, False, False, ..., False, False, False],
        [False, False, False, ..., False, False, False]]),
  array([[False, False, False, ..., False, False, False],
        [False, False, False, ..., False, False, False],
        [False, False, False, ..., False, False, False],
        ...,
 'scores': tensor([0.9972, 0.9917,
        ...,
}
```
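
Before visualizing, you can sanity-check the output with a quick inspection of the returned dictionary:

```python
print(len(masks["masks"]))      # number of generated masks
print(masks["masks"][0].shape)  # each mask is a boolean array with the image's height and width
print(masks["scores"][:3])      # IoU confidence score for each mask
```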

We can visualize them as follows:

```python
import matplotlib.pyplot as plt

plt.imshow(image, cmap='gray')

for i, mask in enumerate(masks["masks"]):
    plt.imshow(mask, cmap='viridis', alpha=0.1, vmin=0, vmax=1)

plt.axis('off')
plt.show()
```

Below is the original image in grayscale with colorful masks overlaid. Very impressive.

<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee_segmented.png" alt="Visualized"/>
</div>


## Model Inference

### Point Prompting

You can also use the model without the pipeline. To do so, initialize the model and
the processor.

```python
import torch
from transformers import SamModel, SamProcessor

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = SamModel.from_pretrained("facebook/sam-vit-base").to(device)
processor = SamProcessor.from_pretrained("facebook/sam-vit-base")
```

You can do point prompting as shown below. Pass the input point to the processor, then take the processor output
and pass it to the model for inference. To postprocess the model output, pass the outputs along with the
`original_sizes` and `reshaped_input_sizes` we take from the processor's initial output. We need to pass these
since the processor resizes the image, and the output masks need to be projected back onto the original image size.

```python
input_points = [[[2592, 1728]]] # point location of the bee

inputs = processor(image, input_points=input_points, return_tensors="pt").to(device)
with torch.no_grad():
    outputs = model(**inputs)
masks = processor.image_processor.post_process_masks(outputs.pred_masks.cpu(), inputs["original_sizes"].cpu(), inputs["reshaped_input_sizes"].cpu())
```
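
By default, SAM returns three candidate masks per prompt. You can check their shape and the model's confidence in each before plotting (a quick inspection, reusing the variables above):

```python
print(masks[0].shape)      # e.g. (1, 3, height, width): three candidate masks for our single point
print(outputs.iou_scores)  # predicted IoU confidence for each candidate
```
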
We can visualize the three masks in the `masks` output.

```python
import torch
import matplotlib.pyplot as plt
import numpy as np

fig, axes = plt.subplots(1, 4, figsize=(15, 5))

axes[0].imshow(image)
axes[0].set_title('Original Image')
mask_list = [masks[0][0][0].numpy(), masks[0][0][1].numpy(), masks[0][0][2].numpy()]

for i, mask in enumerate(mask_list, start=1):
    overlayed_image = np.array(image).copy()

    overlayed_image[:, :, 0] = np.where(mask == 1, 255, overlayed_image[:, :, 0])
    overlayed_image[:, :, 1] = np.where(mask == 1, 0, overlayed_image[:, :, 1])
    overlayed_image[:, :, 2] = np.where(mask == 1, 0, overlayed_image[:, :, 2])

    axes[i].imshow(overlayed_image)
    axes[i].set_title(f'Mask {i}')

for ax in axes:
    ax.axis('off')

plt.show()
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/masks.png" alt="Visualized"/>
</div>

### Box Prompting

You can also do box prompting in a similar fashion to point prompting. You can simply pass the input box in format of a list
merveenoyan marked this conversation as resolved.
Show resolved Hide resolved
`[x_min, y_min, x_max, y_max]` format along with the image to the `processor`. Take the processor output and directly pass it
to the model, then postprocess the output again.
merveenoyan marked this conversation as resolved.
Show resolved Hide resolved
merveenoyan marked this conversation as resolved.
Show resolved Hide resolved


```python
# bounding box around the bee
box = [2350, 1600, 2850, 2100]

inputs = processor(
    image,
    input_boxes=[[[box]]],
    return_tensors="pt"
).to(device)

with torch.no_grad():
    outputs = model(**inputs)

mask = processor.image_processor.post_process_masks(
    outputs.pred_masks.cpu(),
    inputs["original_sizes"].cpu(),
    inputs["reshaped_input_sizes"].cpu()
)[0][0][0].numpy()
```
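
The indexing `[0][0][0]` above keeps the first of the three candidate masks. As a small variation (our own sketch, not part of the original guide), you could instead keep the candidate the model is most confident about:

```python
# Pick the candidate mask with the highest predicted IoU
best_idx = outputs.iou_scores[0, 0].argmax().item()
mask = processor.image_processor.post_process_masks(
    outputs.pred_masks.cpu(),
    inputs["original_sizes"].cpu(),
    inputs["reshaped_input_sizes"].cpu()
)[0][0][best_idx].numpy()
```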

You can visualize the bounding box around the bee as shown below.

```python
import matplotlib.patches as patches

fig, ax = plt.subplots()
ax.imshow(image)

rectangle = patches.Rectangle((2350, 1600), 500, 500, linewidth=2, edgecolor='r', facecolor='none')
ax.add_patch(rectangle)
ax.axis("off")
plt.show()
```

<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/bbox.png" alt="Visualized Bbox"/>
</div>

You can see the inference output below.

```python
fig, ax = plt.subplots()
ax.imshow(image)
ax.imshow(mask, cmap='viridis', alpha=0.4)

ax.axis("off")
plt.show()
```

<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/box_inference.png" alt="Visualized Inference"/>
</div>
