Commit 94ca04f by rwightman
Parent: c10b8c9

Update model config and README

Files changed (3):
  1. README.md +135 -2
  2. config.json +1 -0
  3. model.safetensors +3 -0
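For anyone who wants to reproduce this state locally, the touched files can be fetched at this revision with `huggingface_hub`. A minimal sketch, assuming the repository id is `timm/tinynet_b.in1k` (inferred from the model name in the card below) and that the Hub resolves the short commit hash:

```python
from huggingface_hub import hf_hub_download

# Assumptions: repo id inferred from the model name; a full commit hash may be
# needed if the short hash 94ca04f does not resolve.
repo_id = "timm/tinynet_b.in1k"
revision = "94ca04f"

for filename in ("README.md", "config.json", "model.safetensors"):
    path = hf_hub_download(repo_id=repo_id, filename=filename, revision=revision)
    print(f"{filename} -> {path}")
```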
README.md CHANGED
@@ -2,6 +2,139 @@
  tags:
  - image-classification
  - timm
- library_tag: timm
+ library_name: timm
+ license: apache-2.0
+ datasets:
+ - imagenet-1k
  ---
- # Model card for tinynet_b.in1k
+ # Model card for tinynet_b.in1k
+
+ A TinyNet image classification model. Trained on ImageNet-1k by the paper authors.
+
+
+ ## Model Details
+ - **Model Type:** Image classification / feature backbone
+ - **Model Stats:**
+   - Params (M): 3.7
+   - GMACs: 0.2
+   - Activations (M): 4.4
+   - Image size: 188 x 188
+ - **Papers:**
+   - Model rubik's cube: Twisting resolution, depth and width for tinynets: https://arxiv.org/abs/2010.14819v2
+ - **Dataset:** ImageNet-1k
+
+ ## Model Usage
+ ### Image Classification
+ ```python
+ from urllib.request import urlopen
+ from PIL import Image
+ import torch
+ import timm
+
+ img = Image.open(urlopen(
+     'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
+ ))
+
+ model = timm.create_model('tinynet_b.in1k', pretrained=True)
+ model = model.eval()
+
+ # get model specific transforms (normalization, resize)
+ data_config = timm.data.resolve_model_data_config(model)
+ transforms = timm.data.create_transform(**data_config, is_training=False)
+
+ output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1
+
+ top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
+ ```
+
+ ### Feature Map Extraction
+ ```python
+ from urllib.request import urlopen
+ from PIL import Image
+ import timm
+
+ img = Image.open(urlopen(
+     'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
+ ))
+
+ model = timm.create_model(
+     'tinynet_b.in1k',
+     pretrained=True,
+     features_only=True,
+ )
+ model = model.eval()
+
+ # get model specific transforms (normalization, resize)
+ data_config = timm.data.resolve_model_data_config(model)
+ transforms = timm.data.create_transform(**data_config, is_training=False)
+
+ output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1
+
+ for o in output:
+     # print shape of each feature map in output
+     # e.g.:
+     # torch.Size([1, 16, 94, 94])
+     # torch.Size([1, 24, 47, 47])
+     # torch.Size([1, 32, 24, 24])
+     # torch.Size([1, 88, 12, 12])
+     # torch.Size([1, 240, 6, 6])
+     print(o.shape)
+ ```
+
+ ### Image Embeddings
+ ```python
+ from urllib.request import urlopen
+ from PIL import Image
+ import timm
+
+ img = Image.open(urlopen(
+     'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
+ ))
+
+ model = timm.create_model(
+     'tinynet_b.in1k',
+     pretrained=True,
+     num_classes=0,  # remove classifier nn.Linear
+ )
+ model = model.eval()
+
+ # get model specific transforms (normalization, resize)
+ data_config = timm.data.resolve_model_data_config(model)
+ transforms = timm.data.create_transform(**data_config, is_training=False)
+
+ output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor
+
+ # or equivalently (without needing to set num_classes=0)
+
+ output = model.forward_features(transforms(img).unsqueeze(0))
+ # output is unpooled, a (1, 1280, 6, 6) shaped tensor
+
+ output = model.forward_head(output, pre_logits=True)
+ # output is a (1, num_features) shaped tensor
+ ```
+
+ ## Model Comparison
+ Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
+
+ ## Citation
+ ```bibtex
+ @article{han2020model,
+   title={Model rubik’s cube: Twisting resolution, depth and width for tinynets},
+   author={Han, Kai and Wang, Yunhe and Zhang, Qiulin and Zhang, Wei and Xu, Chunjing and Zhang, Tong},
+   journal={Advances in Neural Information Processing Systems},
+   volume={33},
+   pages={19353--19364},
+   year={2020}
+ }
+ ```
+ ```bibtex
+ @misc{rw2019timm,
+   author = {Ross Wightman},
+   title = {PyTorch Image Models},
+   year = {2019},
+   publisher = {GitHub},
+   journal = {GitHub repository},
+   doi = {10.5281/zenodo.4414861},
+   howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
+ }
+ ```
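The stats in the updated card (3.7M params, 188 x 188 input) are easy to sanity-check locally. A minimal sketch using the same `timm` calls as the card's examples; the expected values quoted in the comments come from the card above, not from running this code:

```python
import timm

# Build the architecture without downloading weights; the parameter count
# depends only on the model definition.
model = timm.create_model('tinynet_b.in1k', pretrained=False)
n_params = sum(p.numel() for p in model.parameters())
print(f"params: {n_params / 1e6:.1f}M")  # card lists 3.7

# The resolved data config should reflect the 188 x 188 resolution.
data_config = timm.data.resolve_model_data_config(model)
print("input_size:", data_config['input_size'])  # card lists 188 x 188
```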
config.json CHANGED
@@ -3,6 +3,7 @@
    "num_classes": 1000,
    "num_features": 1280,
    "pretrained_cfg": {
+       "tag": "in1k",
        "custom_load": false,
        "input_size": [
            3,
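The added `"tag": "in1k"` ties this `pretrained_cfg` to the `.in1k` suffix in the model name used throughout the card. A minimal sketch of inspecting the resolved config at runtime, assuming a recent `timm` release where created models expose a `pretrained_cfg` dict:

```python
import timm

model = timm.create_model('tinynet_b.in1k', pretrained=True)

# pretrained_cfg mirrors the metadata in config.json, including the new tag.
cfg = model.pretrained_cfg
print(cfg.get('tag'))         # expected: 'in1k'
print(cfg.get('input_size'))  # expected: (3, 188, 188)
```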
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:29a59905c5f2c08cd468b12349e074dd29d2a967822dd6175c35b56f258f2e11
+ size 15088468
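The `model.safetensors` entry above is only the git-lfs pointer file; the ~15 MB weight file itself lives in LFS storage. A minimal sketch of downloading and inspecting it with `huggingface_hub` and `safetensors`, with the repo id assumed as before:

```python
from huggingface_hub import hf_hub_download
from safetensors import safe_open

# Assumed repo id for this model card.
path = hf_hub_download(repo_id="timm/tinynet_b.in1k", filename="model.safetensors")

# List tensor names and total parameter count without building a model.
with safe_open(path, framework="pt") as f:
    names = list(f.keys())
    total = sum(f.get_tensor(name).numel() for name in names)

print(f"tensors: {len(names)}, params: {total / 1e6:.1f}M")
```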