Chesebrough committed
Commit 12e0f7e
1 Parent(s): c9a8046

Update README.md

Files changed (1): README.md (+55 -8)
README.md CHANGED
 
  example_title: Palace
---

# Dense Prediction Transformer (DPT), large-sized model, fine-tuned on ADE20k

Dense Prediction Transformer (DPT) model trained on ADE20k for semantic segmentation. It was introduced in the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by Ranftl et al. and first released in [this repository](https://github.com/isl-org/DPT).

It is suited for image segmentation tasks, specifically semantic segmentation, such as the example output below:

| Input Image | Output Segmentation Image |
| --- | --- |
| ![input image](https://cdn-uploads.huggingface.co/production/uploads/641bd18baebaa27e0753f2c9/W-l_okUcVQRYwR0VAkk_q.png) | ![segmentation image](https://cdn-uploads.huggingface.co/production/uploads/641bd18baebaa27e0753f2c9/ETbeEDqVZE4Ut0QTjrfgc.png) |

Disclaimer: The team releasing DPT did not write a model card for this model, so this model card has been written by the Hugging Face team in conjunction with Intel.

## Model description

DPT uses the Vision Transformer (ViT) as its backbone and adds a neck + head on top for semantic segmentation. The same architecture, described in the paper above, forms the basis of MiDaS v3.0.

![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/dpt_architecture.jpg)
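
For a concrete view of this structure, you can instantiate the model and list its top-level modules (a minimal sketch; the module names are printed from the Transformers implementation, not assumed):

```python
from transformers import DPTForSemanticSegmentation

model = DPTForSemanticSegmentation.from_pretrained("Intel/dpt-large-ade")

# Print the top-level modules: the ViT-based encoder, the fusion neck,
# and the segmentation head(s) stacked on top
for name, module in model.named_children():
    print(name, "->", type(module).__name__)
```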
 
## Intended uses & limitations

You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?search=dpt) to look for fine-tuned versions on a task that interests you.
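
If you just want labelled masks without writing the pre- and post-processing yourself, the `image-segmentation` pipeline wraps these steps. This is a minimal sketch, assuming a Transformers version whose image-segmentation pipeline supports semantic segmentation models such as DPT:

```python
from transformers import pipeline

# The pipeline handles preprocessing, the forward pass, and post-processing
segmenter = pipeline("image-segmentation", model="Intel/dpt-large-ade")

results = segmenter("http://images.cocodataset.org/val2017/000000026204.jpg")
for result in results:
    # Each entry pairs an ADE20K class label with a PIL mask for that class
    print(result["label"])
```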

## Results

According to the authors, at the time of publication, when applied to semantic segmentation, dense vision transformers set a new state of the art on **ADE20K with 49.02% mIoU**.

The paper further shows that the architecture can be fine-tuned on smaller datasets such as NYUv2, KITTI, and Pascal Context, where it also sets the new state of the art. The models are available in the [DPT GitHub Repository](https://github.com/intel-isl/DPT).

### How to use: demonstration of image semantic segmentation with DPT

Here is how to use this model:

```python
from transformers import DPTImageProcessor, DPTForSemanticSegmentation
from PIL import Image
import requests
import torch

# Load an example image from the COCO val2017 set
url = "http://images.cocodataset.org/val2017/000000026204.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = DPTImageProcessor.from_pretrained("Intel/dpt-large-ade")
model = DPTForSemanticSegmentation.from_pretrained("Intel/dpt-large-ade")

inputs = image_processor(images=image, return_tensors="pt")

outputs = model(**inputs)
logits = outputs.logits
print(logits.shape)

# Upsample the logits to the original image size;
# image.size is (width, height), so reverse it to (height, width)
prediction = torch.nn.functional.interpolate(
    logits,
    size=image.size[::-1],
    mode="bicubic",
    align_corners=False,
)

# Convert logits to class predictions, shifted by 1 to match the palette below
prediction = torch.argmax(prediction, dim=1) + 1

# Squeeze the prediction tensor to remove the batch dimension
prediction = prediction.squeeze()

# Move the prediction tensor to the CPU and convert it to a numpy array
prediction = prediction.cpu().numpy()

# Convert the prediction array to an image
predicted_seg = Image.fromarray(prediction.astype("uint8"))

# Define the ADE20K palette
adepallete = [0,0,0,120,120,120,180,120,120,6,230,230,80,50,50,4,200,3,120,120,80,140,140,140,204,5,255,230,230,230,4,250,7,224,5,255,235,255,7,150,5,61,120,120,70,8,255,51,255,6,82,143,255,140,204,255,4,255,51,7,204,70,3,0,102,200,61,230,250,255,6,51,11,102,255,255,7,71,255,9,224,9,7,230,220,220,220,255,9,92,112,9,255,8,255,214,7,255,224,255,184,6,10,255,71,255,41,10,7,255,255,224,255,8,102,8,255,255,61,6,255,194,7,255,122,8,0,255,20,255,8,41,255,5,153,6,51,255,235,12,255,160,150,20,0,163,255,140,140,140,250,10,15,20,255,0,31,255,0,255,31,0,255,224,0,153,255,0,0,0,255,255,71,0,0,235,255,0,173,255,31,0,255,11,200,200,255,82,0,0,255,245,0,61,255,0,255,112,0,255,133,255,0,0,255,163,0,255,102,0,194,255,0,0,143,255,51,255,0,0,82,255,0,255,41,0,255,173,10,0,255,173,255,0,0,255,153,255,92,0,255,0,255,255,0,245,255,0,102,255,173,0,255,0,20,255,184,184,0,31,255,0,255,61,0,71,255,255,0,204,0,255,194,0,255,82,0,10,255,0,112,255,51,0,255,0,194,255,0,122,255,0,255,163,255,153,0,0,255,10,255,112,0,143,255,0,82,0,255,163,255,0,255,235,0,8,184,170,133,0,255,0,255,92,184,0,255,255,0,31,0,184,255,0,214,255,255,0,112,92,255,0,0,224,255,112,224,255,70,184,160,163,0,255,153,0,255,71,255,0,255,0,163,255,204,0,255,0,143,0,255,235,133,255,0,255,0,235,245,0,255,255,0,122,255,245,0,10,190,212,214,255,0,0,204,255,20,0,255,255,255,0,0,153,255,0,41,255,0,255,204,41,0,255,41,255,0,173,0,255,0,245,255,71,0,255,122,0,255,0,255,184,0,92,255,184,255,0,0,133,255,255,214,0,25,194,194,102,255,0,92,0,255]

# Apply the color map to the predicted segmentation image
predicted_seg.putpalette(adepallete)

# Blend the original image and the predicted segmentation image
out = Image.blend(image, predicted_seg.convert("RGB"), alpha=0.5)
out
```
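
As an aside, recent Transformers releases expose a post-processing helper on the image processor that performs the resize-and-argmax steps above in one call. A minimal sketch, assuming a version that includes `post_process_semantic_segmentation`:

```python
# Equivalent shortcut to the manual interpolate + argmax above
semantic_map = image_processor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
print(semantic_map.shape)  # (height, width) tensor of ADE20K class indices
```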

For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/dpt).