# Model Card for fbnetv3_g_pruned_65
This is a pruned version of the timm/fbnetv3_g.ra2_in1k model, stored in the torch-dag format.
The pruned model has roughly 65% of the original model's FLOPs, with a minimal drop in accuracy.
| Model | KMAPPs* | M Parameters | Accuracy (224x224) | Accuracy (240x240) |
|---|---|---|---|---|
| timm/fbnetv3_g.ra2_in1k (baseline) | 42.75 | 16.6 | 80.61% | 81.32% |
| fbnetv3_g_pruned_65 (ours) | 27.8 (65%) | 11.6 (70%) | 79.70% (↓ 0.91%) | 80.16% (↓ 1.16%) |
\*KMAPPs: thousands of FLOPs per input pixel, computed as `KMAPPs(model) = FLOPs(model) / (H * W * 1000)`, where `(H, W)` is the input resolution.
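As a worked example of this formula, the conversion can be written as a small helper. The function name `flops_to_kmapps` and the ~2.145 GFLOPs figure (back-computed from the baseline's 42.75 KMAPPs at 224x224) are illustrative, not part of this repository:

```python
def flops_to_kmapps(flops: float, h: int, w: int) -> float:
    """Convert a model's total FLOPs to KMAPPs (thousands of FLOPs per input pixel)."""
    return flops / (h * w * 1000)

# Baseline: ~2.145 GFLOPs at 224x224 input corresponds to 42.75 KMAPPs.
print(flops_to_kmapps(2_145_024_000, 224, 224))  # 42.75
```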
The accuracy was calculated on the ImageNet-1k validation dataset. For details about image pre-processing, please refer to the original repository.
## Model Details
### Model Description
- Developed by: TCL Research Europe
- Model type: Classification / feature backbone
- License: Apache 2.0
- Finetuned from model: timm/fbnetv3_g.ra2_in1k
### Model Sources
- Repository: [timm/fbnetv3_g.ra2_in1k](https://huggingface.co/timm/fbnetv3_g.ra2_in1k)
## How to Get Started with the Model
To load the model, you first need to install the torch-dag library, which can be done via pip:

```shell
pip install torch-dag
```
Then clone this repository:

```shell
# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install
git clone https://huggingface.co/TCLResearchEurope/fbnetv3_g_pruned_65
```
Now you are ready to load the model:
```python
import torch
import torch_dag

# Load the pruned model from the cloned repository directory
model = torch_dag.io.load_dag_from_path('./fbnetv3_g_pruned_65')
model.eval()

# Run a forward pass on a dummy 224x224 input
out = model(torch.ones(1, 3, 224, 224))
print(out.shape)
```
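As a sketch of post-processing the model's output, assuming it returns ImageNet-1k logits of shape `(1, 1000)` (the snippet below is illustrative and uses a dummy tensor in place of the real output, so it is self-contained):

```python
import torch

# Dummy logits standing in for `out` from the loading snippet above
out = torch.randn(1, 1000)

probs = torch.softmax(out, dim=1)         # convert logits to class probabilities
top1_prob, top1_class = probs.max(dim=1)  # top-1 predicted class and its probability
print(f"class {top1_class.item()} with probability {top1_prob.item():.3f}")
```

The predicted index can then be mapped to a human-readable label with the standard ImageNet-1k class list.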