---
pipeline_tag: image-segmentation
---

<!---
Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# Instance Segmentation Example

Contents:
- [PyTorch Version with Accelerate](#pytorch-version-with-accelerate)
- [Reload and Perform Inference](#reload-and-perform-inference)

## PyTorch Version with Accelerate

This model was trained with the script [`run_instance_segmentation_no_trainer.py`](https://github.com/huggingface/transformers/blob/main/examples/pytorch/instance-segmentation/run_instance_segmentation_no_trainer.py).
The script uses [🤗 Accelerate](https://github.com/huggingface/accelerate), which lets you write your own training loop in PyTorch and run it on a variety of setups, including CPU, multi-CPU, GPU, multi-GPU, and TPU, with support for mixed precision.

First, configure the environment:

```bash
accelerate config
```
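
The command above starts an interactive questionnaire. If you just want a sensible single-machine default, Accelerate can also write one non-interactively (a minimal sketch; see `accelerate config --help` for the options available in your version):

```bash
# Write a default single-machine config without the interactive prompts
accelerate config default
```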

Answer the questions regarding your training environment (if you chose the interactive setup). Then, run:

```bash
accelerate test
```

This command checks that everything is ready for training. Finally, launch training with:

```bash
accelerate launch run_instance_segmentation_no_trainer.py \
    --model_name_or_path facebook/mask2former-swin-tiny-coco-instance \
    --output_dir finetune-instance-segmentation-ade20k-mini-mask2former-no-trainer \
    --dataset_name qubvel-hf/ade20k-mini \
    --do_reduce_labels \
    --image_height 256 \
    --image_width 256 \
    --num_train_epochs 40 \
    --learning_rate 1e-5 \
    --lr_scheduler_type constant \
    --per_device_train_batch_size 8 \
    --gradient_accumulation_steps 2 \
    --dataloader_num_workers 8 \
    --push_to_hub
```
+
63
+
64
+ ## Reload and Perform Inference
65
+
66
+ You can easily load this trained model and perform inference as follows:
67
+
68
+ ```python
69
+ import torch
70
+ import requests
71
+ import matplotlib.pyplot as plt
72
+
73
+ from PIL import Image
74
+ from transformers import Mask2FormerForUniversalSegmentation, Mask2FormerImageProcessor
75
+
76
+ # Load image
77
+ image = Image.open(requests.get("http://farm4.staticflickr.com/3017/3071497290_31f0393363_z.jpg", stream=True).raw)
78
+
79
+ # Load model and image processor
80
+ device = "cuda"
81
+ checkpoint = "qubvel-hf/finetune-instance-segmentation-ade20k-mini-mask2former-no-trainer"
82
+
83
+ model = Mask2FormerForUniversalSegmentation.from_pretrained(checkpoint, device_map=device)
84
+ image_processor = Mask2FormerImageProcessor.from_pretrained(checkpoint)
85
+
86
+ # Run inference on image
87
+ inputs = image_processor(images=[image], return_tensors="pt").to(device)
88
+ with torch.no_grad():
89
+ outputs = model(**inputs)
90
+
91
+ # Post-process outputs
92
+ outputs = image_processor.post_process_instance_segmentation(outputs, target_sizes=[image.size[::-1]])
93
+
94
+ print("Mask shape: ", outputs[0]["segmentation"].shape)
95
+ print("Mask values: ", outputs[0]["segmentation"].unique())
96
+ for segment in outputs[0]["segments_info"]:
97
+ print("Segment: ", segment)
98
+ ```

Running this prints the mask shape, the instance ids present in the mask, and one entry per detected instance:

```
Mask shape:  torch.Size([427, 640])
Mask values:  tensor([-1.,  0.,  1.,  2.,  3.,  4.,  5.,  6.])
Segment:  {'id': 0, 'label_id': 0, 'was_fused': False, 'score': 0.946127}
Segment:  {'id': 1, 'label_id': 1, 'was_fused': False, 'score': 0.961582}
Segment:  {'id': 2, 'label_id': 1, 'was_fused': False, 'score': 0.968367}
Segment:  {'id': 3, 'label_id': 1, 'was_fused': False, 'score': 0.819527}
Segment:  {'id': 4, 'label_id': 1, 'was_fused': False, 'score': 0.655761}
Segment:  {'id': 5, 'label_id': 1, 'was_fused': False, 'score': 0.531299}
Segment:  {'id': 6, 'label_id': 1, 'was_fused': False, 'score': 0.929477}
```
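
In the instance map, `-1` marks pixels that were not assigned to any instance, and every other value is the `id` of a segment in `segments_info`. As a small follow-up sketch (it assumes the checkpoint's `config.id2label` mapping is populated), you can turn the map into per-instance binary masks:

```python
# Turn the instance map into one boolean mask per detected instance
segmentation = outputs[0]["segmentation"].cpu().numpy()
for segment in outputs[0]["segments_info"]:
    instance_mask = segmentation == segment["id"]       # (height, width) boolean mask
    label = model.config.id2label[segment["label_id"]]  # class name, if id2label is set
    print(f"Instance {segment['id']} ({label}): {int(instance_mask.sum())} pixels")
```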

Use the following code to visualize the results:

```python
import numpy as np
import matplotlib.pyplot as plt

# Move the mask to CPU before converting to NumPy (a no-op if it is already there)
segmentation = outputs[0]["segmentation"].cpu().numpy()

plt.figure(figsize=(10, 10))
plt.subplot(1, 2, 1)
plt.imshow(np.array(image))  # original photo
plt.axis("off")
plt.subplot(1, 2, 2)
plt.imshow(segmentation)  # instance map
plt.axis("off")
plt.show()
```

![Result](https://i.imgur.com/rZmaRjD.png)
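
As a variant (not part of the original example), you can also overlay the instance map semi-transparently on the photo instead of plotting them side by side:

```python
# Blend the instance map over the original photo
plt.figure(figsize=(10, 10))
plt.imshow(np.array(image))
plt.imshow(segmentation, alpha=0.5)  # alpha controls overlay transparency
plt.axis("off")
plt.show()
```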