---
license: apache-2.0
datasets:
- imagenet-1k
metrics:
- accuracy
pipeline_tag: image-classification
tags:
- pytorch
- torch-dag
---
# Model Card for beit_base_patch16_224_pruned_65

This is a pruned version of the [timm/beit_base_patch16_224.in22k_ft_in22k_in1k](https://huggingface.co/timm/beit_base_patch16_224.in22k_ft_in22k_in1k) model in the [torch-dag](https://github.com/TCLResearchEurope/torch-dag) format.

This model has roughly 65% of the original model's FLOPs with a minimal drop in metrics.

| Model | KMAPPs* | M Parameters | Accuracy (224x224) |
| ----------- | ----------- | ----------- | ------------------ |
| **timm/beit_base_patch16_224.in22k_ft_in22k_in1k (baseline)** | 673.2 | 86.5 | 85.23% |
| **beit_base_patch16_224_pruned_65 (ours)** | 438 **(65%)** | 56.7 **(66%)** | 84.53% **(↓ 0.7%)** |

\***KMAPPs**: thousands of FLOPs per input pixel

`KMAPPs(model) = FLOPs(model) / (H * W * 1000)`, where `(H, W)` is the input resolution.

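The definition above can be sanity-checked in a few lines of Python. The total FLOP count below is back-derived from the table's 673.2 KMAPPs at 224x224, not an official figure:

```python
def kmapps(flops: float, height: int, width: int) -> float:
    """Thousands of FLOPs per input pixel, as defined above."""
    return flops / (height * width * 1000)

# Back-derive the baseline's total FLOPs from its 673.2 KMAPPs at 224x224
baseline_flops = 673.2 * 224 * 224 * 1000  # roughly 33.8 GFLOPs
print(round(kmapps(baseline_flops, 224, 224), 1))  # 673.2
```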
The accuracy was calculated on the ImageNet-1k validation dataset. For details about image pre-processing, please refer to the original repository.

## Model Details

### Model Description

- **Developed by:** [TCL Research Europe](https://github.com/TCLResearchEurope/)
- **Model type:** Classification / feature backbone
- **License:** Apache 2.0
- **Finetuned from model:** [timm/beit_base_patch16_224.in22k_ft_in22k_in1k](https://huggingface.co/timm/beit_base_patch16_224.in22k_ft_in22k_in1k)

### Model Sources

- **Repository:** [timm/beit_base_patch16_224.in22k_ft_in22k_in1k](https://huggingface.co/timm/beit_base_patch16_224.in22k_ft_in22k_in1k)

## How to Get Started with the Model

To load the model, you have to install the [torch-dag](https://github.com/TCLResearchEurope/torch-dag#3-installation) library, which can be done with `pip`:

```shell
pip install torch-dag
```

Then clone this repository:

```shell
# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install
git clone https://huggingface.co/TCLResearchEurope/beit_base_patch16_224_pruned_65
```

Now you are ready to load the model:

```python
import torch_dag
import torch

# Load the pruned DAG model from the cloned repository
model = torch_dag.io.load_dag_from_path('./beit_base_patch16_224_pruned_65')
model.eval()

# Run a dummy forward pass at the 224x224 training resolution
out = model(torch.ones(1, 3, 224, 224))
print(out.shape)
```
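
The forward pass returns a `(1, 1000)` logits tensor over the ImageNet-1k classes; with a real output tensor you would typically take `out.softmax(-1).argmax(-1)` to get the predicted class. As a framework-free illustration of that post-processing step, here is a hypothetical `top1` helper (not part of torch-dag) applied to dummy logits:

```python
import math

def top1(logits):
    # Numerically stable softmax followed by argmax over a flat list of logits
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    idx = max(range(len(logits)), key=lambda i: exps[i])
    return idx, exps[idx] / total

# Example: 1000 dummy logits with a clear winner at class 42
logits = [0.0] * 1000
logits[42] = 5.0
print(top1(logits))
```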