timm

Image Classification · timm · PyTorch · Safetensors

rwightman committed
Commit d9ab36d · 1 Parent(s): 0021d17

Update model config and README

Files changed (3):
  1. README.md +143 -2
  2. config.json +1 -0
  3. model.safetensors +3 -0
README.md CHANGED
@@ -2,6 +2,147 @@
  tags:
  - image-classification
  - timm
- library_tag: timm
+ library_name: timm
+ license: apache-2.0
+ datasets:
+ - imagenet-1k
+ - imagenet-12k
  ---
- # Model card for efficientnet_b5.in12k_ft_in1k
+ # Model card for efficientnet_b5.sw_in12k_ft_in1k
+
+ An EfficientNet image classification model. Pretrained on ImageNet-12k and fine-tuned on ImageNet-1k by Ross Wightman in `timm` using the recipe template described below.
+
+ Recipe details (see the illustrative sketch after this list):
+ * Based on Swin Transformer train / pretrain recipe with modifications (related to both DeiT and ConvNeXt recipes)
+ * AdamW optimizer, gradient clipping, EMA weight averaging
+ * Cosine LR schedule with warmup
+
+
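The exact training scripts and hyperparameters for this checkpoint are not reproduced here; purely as a hedged sketch of how these recipe ingredients (AdamW, gradient clipping, EMA weight averaging, cosine schedule with warmup) typically fit together in `timm` and PyTorch. All hyperparameter values and the dummy `loader` below are illustrative assumptions, not the settings used for this model:

```python
import torch
import timm
from timm.scheduler import CosineLRScheduler
from timm.utils import ModelEmaV2

# all hyperparameters below are illustrative placeholders, not the values used for this checkpoint
model = timm.create_model('efficientnet_b5', pretrained=False, num_classes=1000)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=0.05)  # AdamW optimizer
num_epochs = 2
scheduler = CosineLRScheduler(optimizer, t_initial=num_epochs, warmup_t=1, warmup_lr_init=1e-6)  # cosine LR with warmup
ema = ModelEmaV2(model, decay=0.9998)  # EMA weight averaging

# stand-in for a real ImageNet dataloader
loader = [(torch.randn(2, 3, 448, 448), torch.randint(0, 1000, (2,)))]

for epoch in range(num_epochs):
    scheduler.step(epoch)
    for images, targets in loader:
        loss = torch.nn.functional.cross_entropy(model(images), targets)
        optimizer.zero_grad()
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # gradient clipping
        optimizer.step()
        ema.update(model)  # update the EMA copy of the weights after each step
```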
+ ## Model Details
+ - **Model Type:** Image classification / feature backbone
+ - **Model Stats:**
+   - Params (M): 30.4
+   - GMACs: 9.6
+   - Activations (M): 93.6
+   - Image size: 448 x 448
+ - **Papers:**
+   - EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks: https://arxiv.org/abs/1905.11946
+ - **Dataset:** ImageNet-1k
+ - **Pretrain Dataset:** ImageNet-12k
+ - **Original:** https://github.com/huggingface/pytorch-image-models
+
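As a quick, optional sanity check of the parameter count listed above (nothing is assumed beyond the checkpoint name on this card):

```python
import timm

model = timm.create_model('efficientnet_b5.sw_in12k_ft_in1k', pretrained=True)
n_params = sum(p.numel() for p in model.parameters())
print(f'params (M): {n_params / 1e6:.1f}')  # should be roughly 30.4 per the stats above
```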
+ ## Model Usage
+ ### Image Classification
+ ```python
+ from urllib.request import urlopen
+ from PIL import Image
+ import torch
+ import timm
+
+ img = Image.open(urlopen(
+     'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
+ ))
+
+ model = timm.create_model('efficientnet_b5.sw_in12k_ft_in1k', pretrained=True)
+ model = model.eval()
+
+ # get model specific transforms (normalization, resize)
+ data_config = timm.data.resolve_model_data_config(model)
+ transforms = timm.data.create_transform(**data_config, is_training=False)
+
+ output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1
+
+ top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
+ ```
+
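As a small follow-up to the snippet above, the top-5 probabilities and class indices can be printed directly; mapping the raw indices to human-readable label names needs a separate ImageNet-1k index-to-label mapping that this sketch does not assume:

```python
# continues from the Image Classification snippet above
for prob, idx in zip(top5_probabilities[0], top5_class_indices[0]):
    print(f'class index {idx.item()}: {prob.item():.2f}%')
```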
+ ### Feature Map Extraction
+ ```python
+ from urllib.request import urlopen
+ from PIL import Image
+ import timm
+
+ img = Image.open(urlopen(
+     'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
+ ))
+
+ model = timm.create_model(
+     'efficientnet_b5.sw_in12k_ft_in1k',
+     pretrained=True,
+     features_only=True,
+ )
+ model = model.eval()
+
+ # get model specific transforms (normalization, resize)
+ data_config = timm.data.resolve_model_data_config(model)
+ transforms = timm.data.create_transform(**data_config, is_training=False)
+
+ output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1
+
+ for o in output:
+     # print shape of each feature map in output
+     # e.g.:
+     #  torch.Size([1, 24, 224, 224])
+     #  torch.Size([1, 40, 112, 112])
+     #  torch.Size([1, 64, 56, 56])
+     #  torch.Size([1, 176, 28, 28])
+     #  torch.Size([1, 512, 14, 14])
+     print(o.shape)
+ ```
+
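An optional aside: `features_only=True` also accepts an `out_indices` argument to pick a subset of the feature stages. The indices and the random input below are illustrative choices, not something specified by the original card:

```python
import torch
import timm

model = timm.create_model(
    'efficientnet_b5.sw_in12k_ft_in1k',
    pretrained=True,
    features_only=True,
    out_indices=(3, 4),  # keep only the two deepest feature maps (illustrative)
)
model = model.eval()

with torch.no_grad():
    features = model(torch.randn(1, 3, 448, 448))  # random stand-in for a preprocessed image

print(model.feature_info.channels())       # channel counts of the selected stages
print([tuple(f.shape) for f in features])  # per the shapes above: (1, 176, 28, 28) and (1, 512, 14, 14)
```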
+ ### Image Embeddings
+ ```python
+ from urllib.request import urlopen
+ from PIL import Image
+ import timm
+
+ img = Image.open(urlopen(
+     'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
+ ))
+
+ model = timm.create_model(
+     'efficientnet_b5.sw_in12k_ft_in1k',
+     pretrained=True,
+     num_classes=0,  # remove classifier nn.Linear
+ )
+ model = model.eval()
+
+ # get model specific transforms (normalization, resize)
+ data_config = timm.data.resolve_model_data_config(model)
+ transforms = timm.data.create_transform(**data_config, is_training=False)
+
+ output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor
+
+ # or equivalently (without needing to set num_classes=0)
+ output = model.forward_features(transforms(img).unsqueeze(0))
+ # output is unpooled, a (1, 2048, 14, 14) shaped tensor
+
+ output = model.forward_head(output, pre_logits=True)
+ # output is a (1, num_features) shaped tensor
+ ```
+
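One way such embeddings are commonly used (an illustrative extension, not part of the original card) is to compare images by cosine similarity; the random tensors below stand in for two preprocessed images:

```python
import torch
import torch.nn.functional as F
import timm

model = timm.create_model('efficientnet_b5.sw_in12k_ft_in1k', pretrained=True, num_classes=0)
model = model.eval()

# stand-ins for two images already preprocessed with the transforms shown above
x1 = torch.randn(1, 3, 448, 448)
x2 = torch.randn(1, 3, 448, 448)

with torch.no_grad():
    e1 = model(x1)  # (1, num_features) embedding
    e2 = model(x2)

print(F.cosine_similarity(e1, e2).item())
```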
+ ## Model Comparison
+ Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
+
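Related checkpoints can also be enumerated programmatically with `timm.list_models`; the wildcard pattern below is just an example:

```python
import timm

# list pretrained checkpoints whose names match the (illustrative) pattern
print(timm.list_models('efficientnet_b5*', pretrained=True))
```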
+ ## Citation
+ ```bibtex
+ @misc{rw2019timm,
+   author = {Ross Wightman},
+   title = {PyTorch Image Models},
+   year = {2019},
+   publisher = {GitHub},
+   journal = {GitHub repository},
+   doi = {10.5281/zenodo.4414861},
+   howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
+ }
+ ```
+ ```bibtex
+ @inproceedings{tan2019efficientnet,
+   title={Efficientnet: Rethinking model scaling for convolutional neural networks},
+   author={Tan, Mingxing and Le, Quoc},
+   booktitle={International conference on machine learning},
+   pages={6105--6114},
+   year={2019},
+   organization={PMLR}
+ }
+ ```
config.json CHANGED
@@ -3,6 +3,7 @@
   "num_classes": 1000,
   "num_features": 2048,
   "pretrained_cfg": {
+    "tag": "sw_in12k_ft_in1k",
     "custom_load": false,
     "input_size": [
       3,
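For context, the `tag` added above lands in the `pretrained_cfg` that `timm` resolves at model creation time; a quick way to inspect it (the exact keys shown depend on the installed `timm` version):

```python
import timm

model = timm.create_model('efficientnet_b5.sw_in12k_ft_in1k', pretrained=True)
print(model.pretrained_cfg)  # resolved pretrained config, including the tag and input_size
```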
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0e5c09ad618a28d977acf8b7846105443c553a4b425dddb54598a0ac6088aca7
+ size 122330162
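The three lines above are a Git LFS pointer, not the weights themselves. A hedged sketch of fetching and inspecting the actual safetensors file, assuming the `huggingface_hub` and `safetensors` packages and the repo id implied by this model card:

```python
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

# repo id assumed from the model card name; downloads the real file behind the LFS pointer
path = hf_hub_download('timm/efficientnet_b5.sw_in12k_ft_in1k', 'model.safetensors')
state_dict = load_file(path)
print(f'{len(state_dict)} tensors')
```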