DongHyunKim committed
Commit 0263b4f
1 Parent(s): a6abb1f

Delete rdnet_base.nv_1k.md

Files changed (1)
  1. rdnet_base.nv_1k.md +0 -126
rdnet_base.nv_1k.md DELETED
@@ -1,126 +0,0 @@
---
tags:
- image-classification
- timm
- rdnet
library_name: timm
datasets:
- imagenet-1k
---
# Model card for rdnet_base.nv_in1k

An RDNet image classification model. Trained on ImageNet-1k, original weights released by the paper authors.

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - ImageNet-1k validation top-1 accuracy: 84.4%
  - Params (M): 87
  - GMACs: 15.4
  - Image size: 224 x 224
- **Papers:**
  - DenseNets Reloaded: Paradigm Shift Beyond ResNets and ViTs: https://arxiv.org/abs/2403.19588
- **Dataset:** ImageNet-1k

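The parameter count above can be sanity-checked directly; a minimal sketch, assuming a timm release recent enough to include RDNet:

```python
import timm

# pretrained weights are not needed just to count parameters
model = timm.create_model('rdnet_base.nv_in1k', pretrained=False)

n_params = sum(p.numel() for p in model.parameters())
print(f'params: {n_params / 1e6:.1f}M')  # should be close to the 87M reported above
```
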
25
- ## Model Usage
26
- ### Image Classification
27
- ```python
28
- from urllib.request import urlopen
29
- from PIL import Image
30
- import timm
31
- import torch
32
-
33
- img = Image.open(urlopen(
34
- 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
35
- ))
36
-
37
- model = timm.create_model('rdnet_base.nv_in1k', pretrained=True)
38
- model = model.eval()
39
-
40
- # get model specific transforms (normalization, resize)
41
- data_config = timm.data.resolve_model_data_config(model)
42
- transforms = timm.data.create_transform(**data_config, is_training=False)
43
-
44
- output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
45
-
46
- top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
47
- ```
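
To map the top-5 class indices to human-readable labels, recent timm releases bundle ImageNet label metadata. A minimal sketch continuing from the block above; it assumes your timm version provides `timm.data.ImageNetInfo` and `infer_imagenet_subset`:

```python
# continuing from the classification example above
from timm.data import ImageNetInfo, infer_imagenet_subset

class_info = ImageNetInfo(infer_imagenet_subset(model))  # label metadata for the model's pretraining data
for prob, idx in zip(top5_probabilities[0], top5_class_indices[0]):
    print(f'{class_info.index_to_description(idx.item()):<40} {prob.item():.2f}%')
```
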

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'rdnet_base.nv_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output, one tensor per feature stage;
    # the exact channel counts and resolutions are model specific
    # (query model.feature_info, see below, rather than relying on hard-coded values)
    print(o.shape)
```
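
The channel counts and reduction factors of the returned feature maps can be read off the backbone itself instead of being hard-coded; `feature_info` is the metadata timm attaches to `features_only` models. A short sketch:

```python
import timm

# pretrained weights are not needed just to inspect feature metadata
model = timm.create_model('rdnet_base.nv_in1k', pretrained=False, features_only=True)

print(model.feature_info.channels())   # channel count of each returned feature map
print(model.feature_info.reduction())  # total stride (downsampling factor) at each stage
```
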

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'rdnet_base.nv_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)

output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, num_features, 7, 7) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
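
A common use of these embeddings is measuring image-to-image similarity. A minimal sketch that compares the embedding of the sample image with that of a horizontally flipped copy (the flip is used only to avoid needing a second image URL):

```python
from urllib.request import urlopen
from PIL import Image, ImageOps
import timm
import torch.nn.functional as F

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('rdnet_base.nv_in1k', pretrained=True, num_classes=0)
model = model.eval()

data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

# (batch_size, num_features) embeddings of the image and its mirrored copy
emb_a = model(transforms(img).unsqueeze(0))
emb_b = model(transforms(ImageOps.mirror(img)).unsqueeze(0))

print(F.cosine_similarity(emb_a, emb_b).item())  # cosine similarity in [-1, 1]
```
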

### Citation
```bibtex
@misc{kim2024densenets,
    title={DenseNets Reloaded: Paradigm Shift Beyond ResNets and ViTs},
    author={Donghyun Kim and Byeongho Heo and Dongyoon Han},
    year={2024},
    eprint={2403.19588},
    archivePrefix={arXiv},
}
```