---
license: apache-2.0
datasets:
- imagenet-1k
metrics:
- accuracy
pipeline_tag: image-classification
tags:
- pytorch
- torch-dag
---
# Model Card for efficientnetv2_b0_pruned_61

This is a pruned version of the [Keras EfficientNetV2B0](https://keras.io/api/applications/efficientnet_v2/#efficientnetv2b0-function) model in the [torch-dag](https://github.com/TCLResearchEurope/torch-dag) format.

This model has roughly 61% of the original model's FLOPs with a minimal drop in accuracy.


| Model                                  | KMAPPs*        | M Parameters  | Accuracy (224x224)   |
| -------------------------------------- | -------------- | ------------- | -------------------- |
| **Keras EfficientNetV2B0 (baseline)**  | 29             | 7.1           | 78.69%               |
| **efficientnetv2_b0_pruned_61 (ours)** | 17.7 **(61%)** | 6.5 **(92%)** | 78.14% **(↓ 0.55%)** |


\***KMAPPs**: thousands of FLOPs per input pixel

`KMAPPs(model) = FLOPs(model) / (H * W * 1000)`, where `(H, W)` is the input resolution.

The accuracy was calculated on the ImageNet-1k validation dataset. For details about image pre-processing, please refer to the original repository.

## Model Details

### Model Description


- **Developed by:** [TCL Research Europe](https://github.com/TCLResearchEurope/)
- **Model type:** Classification / feature backbone
- **License:** Apache 2.0
- **Finetuned from model:** [Keras EfficientNetV2B0](https://keras.io/api/applications/efficientnet_v2/#efficientnetv2b0-function)

### Model Sources
- **Repository:** [Keras EfficientNetV2B0](https://keras.io/api/applications/efficientnet_v2/#efficientnetv2b0-function)



## How to Get Started with the Model

To load the model, you need to install the [torch-dag](https://github.com/TCLResearchEurope/torch-dag#3-installation) library, which can be done using `pip`:

```bash
pip install torch-dag
```

Then clone this repository:

```bash
# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install
git clone https://huggingface.co/TCLResearchEurope/efficientnetv2_b0_pruned_61
```

Now you are ready to load the model:

```python
import torch_dag
import torch

model = torch_dag.io.load_dag_from_path('./efficientnetv2_b0_pruned_61')

model.eval()
out = model(torch.ones(1, 3, 224, 224))
print(out.shape)
```
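Assuming the model returns a `(1, 1000)` tensor of ImageNet-1k class logits (the usual output shape for an ImageNet-1k classifier; verify against the printed `out.shape`), a minimal post-processing sketch could look like the following. Random logits are used here as a stand-in for a real model output:

```python
import torch

# Stand-in for the model output: a (1, 1000) tensor of ImageNet-1k logits.
logits = torch.randn(1, 1000)

# Convert logits to class probabilities and take the five most likely classes.
probs = torch.softmax(logits, dim=1)
top5_probs, top5_ids = torch.topk(probs, k=5)
print(top5_ids.shape)  # torch.Size([1, 5])
```

The indices in `top5_ids` can then be mapped to human-readable labels with any standard ImageNet-1k class list.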