ahatamiz committed
Commit 419fc4f
1 Parent(s): b826fdd

Update README.md

Files changed (1)
  1. README.md +93 -12
README.md CHANGED
@@ -4,46 +4,127 @@ license_name: nvclv1
  license_link: LICENSE
  datasets:
  - ILSVRC/imagenet-1k
- pipeline_tag: image-classification
  ---


  [**MambaVision: A Hybrid Mamba-Transformer Vision Backbone**](https://arxiv.org/abs/2407.08083).

- ### Model Overview

  We introduce a novel mixer block by creating a symmetric path without SSM to enhance the modeling of global context. MambaVision has a hierarchical architecture that employs both self-attention and mixer blocks.


- ### Model Performance

  MambaVision demonstrates a strong performance by achieving a new SOTA Pareto-front in
  terms of Top-1 accuracy and throughput.

  <p align="center">
- <img src="https://github.com/NVlabs/MambaVision/assets/26806394/79dcf841-3966-4b77-883d-76cd5e1d4320" width=42% height=42%
  class="center">
  </p>


- ### Model Usage

- You must first login into HuggingFace to pull the model:

  ```Bash
- huggingface-cli login
  ```

- The model can be simply used according to:

  ```Python
- access_token = "<YOUR ACCESS TOKEN"
  model = AutoModel.from_pretrained("nvidia/MambaVision-S-1K", trust_remote_code=True)
  ```


  ### License:

- [NVIDIA Source Code License-NC](https://huggingface.co/nvidia/MambaVision-S-1K/blob/main/LICENSE)
-
-

  license_link: LICENSE
  datasets:
  - ILSVRC/imagenet-1k
+ pipeline_tag: image-feature-extraction
  ---


  [**MambaVision: A Hybrid Mamba-Transformer Vision Backbone**](https://arxiv.org/abs/2407.08083).

+ ## Model Overview

  We introduce a novel mixer block by creating a symmetric path without SSM to enhance the modeling of global context. MambaVision has a hierarchical architecture that employs both self-attention and mixer blocks.


+ ## Model Performance

  MambaVision demonstrates a strong performance by achieving a new SOTA Pareto-front in
  terms of Top-1 accuracy and throughput.

  <p align="center">
+ <img src="https://github.com/NVlabs/MambaVision/assets/26806394/79dcf841-3966-4b77-883d-76cd5e1d4320" width=70% height=70%
  class="center">
  </p>


+ ## Model Usage
+
+ It is highly recommended to install the requirements for MambaVision by running the following:

  ```Bash
+ pip install mambavision
+ ```
+
+ For each model, we offer two variants for image classification and feature extraction that can be imported with 1 line of code.
+
+ ### Image Classification
+
+ In the following example, we demonstrate how MambaVision can be used for image classification.
+
+ Given the following image from the [COCO dataset](https://cocodataset.org/#home) val set as an input:
+
+ <p align="center">
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/64414b62603214724ebd2636/4duSnqLf4lrNiAHczSmAN.jpeg" width=70% height=70%
+ class="center">
+ </p>
+
+ The following snippet can be used for image classification:
+
+ ```Python
+ from transformers import AutoModelForImageClassification
+ from PIL import Image
+ from timm.data.transforms_factory import create_transform
+ import requests
+
+ model = AutoModelForImageClassification.from_pretrained("nvidia/MambaVision-S-1K", trust_remote_code=True)
+
+ # eval mode for inference
+ model.cuda().eval()
+
+ # prepare image for the model
+ url = 'http://images.cocodataset.org/val2017/000000020247.jpg'
+ image = Image.open(requests.get(url, stream=True).raw)
+ input_resolution = (3, 224, 224) # MambaVision supports any input resolution
+
+ transform = create_transform(input_size=input_resolution,
+                              is_training=False,
+                              mean=model.config.mean,
+                              std=model.config.std,
+                              crop_mode=model.config.crop_mode,
+                              crop_pct=model.config.crop_pct)
+
+ inputs = transform(image).unsqueeze(0).cuda()
+ # model inference
+ outputs = model(inputs)
+ logits = outputs['logits']
+ predicted_class_idx = logits.argmax(-1).item()
+ print("Predicted class:", model.config.id2label[predicted_class_idx])
  ```
+ The predicted label is brown bear, bruin, Ursus arctos.
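
If class probabilities are wanted rather than just the single top label, the logits from the snippet above can be passed through a softmax. The following is a minimal sketch (not part of the model card) that assumes the `logits` and `model` variables from the classification example are still in scope:

```Python
import torch

# convert logits to probabilities and inspect the five most likely classes
probs = torch.softmax(logits, dim=-1)
top5_prob, top5_idx = probs.topk(5, dim=-1)
for p, idx in zip(top5_prob[0].tolist(), top5_idx[0].tolist()):
    print(f"{model.config.id2label[idx]}: {p:.3f}")
```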
+
+ ### Feature Extraction
+
+ MambaVision can also be used as a generic feature extractor.
+
+ Specifically, we can extract the outputs of each stage of the model (4 stages) as well as the final average-pooled features, which are flattened.
+
+ The following snippet can be used for feature extraction:
 
  ```Python
+ from transformers import AutoModel
+ from PIL import Image
+ from timm.data.transforms_factory import create_transform
+ import requests
+
  model = AutoModel.from_pretrained("nvidia/MambaVision-S-1K", trust_remote_code=True)
+
+ # eval mode for inference
+ model.cuda().eval()
+
+ # prepare image for the model
+ url = 'http://images.cocodataset.org/val2017/000000020247.jpg'
+ image = Image.open(requests.get(url, stream=True).raw)
+ input_resolution = (3, 224, 224) # MambaVision supports any input resolution
+
+ transform = create_transform(input_size=input_resolution,
+                              is_training=False,
+                              mean=model.config.mean,
+                              std=model.config.std,
+                              crop_mode=model.config.crop_mode,
+                              crop_pct=model.config.crop_pct)
+ inputs = transform(image).unsqueeze(0).cuda()
+ # model inference
+ out_avg_pool, features = model(inputs)
+ print("Size of the averaged pool features:", out_avg_pool.size()) # torch.Size([1, 640])
+ print("Number of stages in extracted features:", len(features)) # 4 stages
+ print("Size of extracted features in stage 1:", features[0].size()) # torch.Size([1, 80, 56, 56])
+ print("Size of extracted features in stage 4:", features[3].size()) # torch.Size([1, 640, 7, 7])
  ```
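
Since the model doubles as a generic backbone, the pooled output can be fed into a task-specific head for downstream use. The sketch below is hypothetical (the 10-class linear head is an illustrative assumption, not part of the model card) and reuses `out_avg_pool` from the snippet above:

```Python
import torch
import torch.nn as nn

# hypothetical downstream head: map the 640-d pooled feature to 10 custom classes
num_custom_classes = 10  # illustrative assumption
head = nn.Linear(out_avg_pool.size(-1), num_custom_classes).cuda()

with torch.no_grad():
    custom_logits = head(out_avg_pool)
print("Downstream logits:", custom_logits.size())  # torch.Size([1, 10])
```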


  ### License:

+ [NVIDIA Source Code License-NC](https://huggingface.co/nvidia/MambaVision-T-1K/blob/main/LICENSE)