EdgeFace-Base
We present EdgeFace, a lightweight and efficient face recognition network inspired by the hybrid architecture of EdgeNeXt. By effectively combining the strengths of CNN and Transformer models together with a low-rank linear layer, EdgeFace achieves excellent face recognition performance optimized for edge devices. The proposed EdgeFace network not only maintains low computational cost and a compact storage footprint, but also achieves high face recognition accuracy, making it suitable for deployment on edge devices. The proposed EdgeFace model achieved the top ranking among models with fewer than 2M parameters in the IJCB 2023 Efficient Face Recognition Competition. Extensive experiments on challenging benchmark face datasets demonstrate the effectiveness and efficiency of EdgeFace in comparison to state-of-the-art lightweight models and deep face recognition models. Our EdgeFace model with 1.77M parameters achieves state-of-the-art results on LFW (99.73%), IJB-B (92.67%), and IJB-C (94.85%), outperforming other efficient models with higher computational complexity. The code to replicate the experiments will be made publicly available.
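The low-rank linear layer mentioned above can be pictured as a factorization of a dense linear layer into two thinner ones, which cuts the parameter count from in·out down to r·(in + out). The sketch below only illustrates that idea; the class name LowRankLinear and the rank rule based on the ratio γ are our assumptions for illustration, not the exact implementation from the paper.

import torch.nn as nn

class LowRankLinear(nn.Module):
    # Illustrative sketch: a dense (in_features -> out_features) linear layer
    # is replaced by two thinner layers of rank r, reducing parameters from
    # in*out to r*(in + out). The rank rule r = max(2, int(gamma * min(in, out)))
    # is an assumption for illustration only.
    def __init__(self, in_features, out_features, gamma=0.5, bias=True):
        super().__init__()
        rank = max(2, int(gamma * min(in_features, out_features)))
        self.proj_down = nn.Linear(in_features, rank, bias=False)
        self.proj_up = nn.Linear(rank, out_features, bias=bias)

    def forward(self, x):
        return self.proj_up(self.proj_down(x))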
Overview
- Training: EdgeFace-Base was trained on the WebFace260M dataset (12M and 4M subsets)
- Parameters: 18.23M (a quick check is sketched after this list)
- Task: Efficient face recognition for edge devices
- Framework: PyTorch
- Input: Batch of aligned face images
- Output: One face embedding per input image
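The parameter count above can be verified directly once the model is instantiated; the snippet below uses the repository's get_model helper (also used in the inference example further down) and simply sums the tensor sizes:

from backbones import get_model

# instantiate the backbone and count its parameters
model = get_model("edgeface_base")
num_params = sum(p.numel() for p in model.parameters())
print(f"{num_params / 1e6:.2f}M parameters")  # expected to be close to 18.23M (see tables below)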
Evaluation of EdgeFace
Model | Params (M) | FLOPs (M) | LFW (%) | CA-LFW (%) | CP-LFW (%) | CFP-FP (%) | AgeDB-30 (%) | IJB-B (%) | IJB-C (%) |
---|---|---|---|---|---|---|---|---|---|
VarGFaceNet | 5.0 | 1022 | 99.85 | 95.15 | 88.55 | 98.50 | 98.15 | 92.9 | 94.7 |
ShuffleFaceNet 2× | 4.5 | 1050 | 99.62 | - | - | 97.56 | 97.28 | - | - |
MixFaceNet-M | 3.95 | 626.1 | 99.68 | - | - | - | 97.05 | 91.55 | 93.42 |
ShuffleMixFaceNet-M | 3.95 | 626.1 | 99.60 | - | - | - | 96.98 | 91.47 | 91.47 |
MobileFaceNetV1 | 3.4 | 1100 | 99.4 | 94.47 | 87.17 | 95.8 | 96.4 | 92.0 | 93.9 |
ProxylessFaceNAS | 3.2 | 900 | 99.2 | 92.55 | 84.17 | 94.7 | 94.4 | 87.1 | 89.7 |
MixFaceNet-S | 3.07 | 451.7 | 99.6 | - | - | - | 96.63 | 90.17 | 92.30 |
ShuffleMixFaceNet-S | 3.07 | 451.7 | 99.58 | - | - | - | 97.05 | 90.94 | 93.08 |
ShuffleFaceNet 1.5x | 2.6 | 577.5 | 99.7 | 95.05 | 88.50 | 96.9 | 97.3 | 92.3 | 94.3 |
MobileFaceNet | 2.0 | 933 | 99.7 | 95.2 | 89.22 | 96.9 | 97.6 | 92.8 | 94.7 |
PocketNetM-256 | 1.75 | 1099.15 | 99.58 | 95.63 | 90.03 | 95.66 | 97.17 | 90.74 | 92.70 |
PocketNetM-128 | 1.68 | 1099.02 | 99.65 | 95.67 | 90.00 | 95.07 | 96.78 | 90.63 | 92.63 |
MixFaceNet-XS | 1.04 | 161.9 | 99.60 | - | - | - | 95.85 | 88.48 | 90.73 |
ShuffleMixFaceNet-XS | 1.04 | 161.9 | 99.53 | - | - | - | 95.62 | 87.86 | 90.43 |
MobileFaceNets | 0.99 | 439.8 | 99.55 | - | - | - | 96.07 | - | - |
PocketNetS-256 | 0.99 | 587.24 | 99.66 | 95.50 | 88.93 | 93.34 | 96.35 | 89.31 | 91.33 |
PocketNetS-128 | 0.92 | 587.11 | 99.58 | 95.48 | 89.63 | 94.21 | 96.10 | 89.44 | 91.62 |
ShuffleFaceNet 0.5x | 0.5 | 66.9 | 99.23 | - | - | 92.59 | 93.22 | - | - |
EdgeFace-S (γ=0.5) (ours) | 3.65 | 306.11 | 99.78 | 95.71 | 92.56 | 95.81 | 96.93 | 93.58 | 95.63 |
EdgeFace-XS (γ=0.6) (ours) | 1.77 | 154 | 99.73 | 95.28 | 91.82 | 94.37 | 96.00 | 92.67 | 94.8 |
EdgeFace-XXS (ours) | 1.24 | 94.72 | 99.57 ± 0.33 | 94.83 ± 0.98 | 90.27 ± 0.93 | 93.63 ± 0.99 | 94.92 ± 1.15 | - | - |
EdgeFace-Base (ours) | 18.23 | 1398.83 | 99.83 ± 0.24 | 96.07 ± 1.03 | 93.75 ± 1.16 | 97.01 ± 0.94 | 97.60 ± 0.70 | - | - |
Performance benchmarks of different variants of EdgeFace:
Model | Params (M) | FLOPs (M) | LFW (%) | CA-LFW (%) | CP-LFW (%) | CFP-FP (%) | AgeDB-30 (%) |
---|---|---|---|---|---|---|---|
edgeface_base | 18.23 | 1398.83 | 99.83 ± 0.24 | 96.07 ± 1.03 | 93.75 ± 1.16 | 97.01 ± 0.94 | 97.60 ± 0.70 |
edgeface_s_gamma_05 | 3.65 | 306.12 | 99.78 ± 0.27 | 95.55 ± 1.05 | 92.48 ± 1.42 | 95.74 ± 1.09 | 97.03 ± 0.85 |
edgeface_xs_gamma_06 | 1.77 | 154.00 | 99.73 ± 0.35 | 95.28 ± 1.37 | 91.58 ± 1.42 | 94.71 ± 1.07 | 96.08 ± 0.95 |
edgeface_xxs | 1.24 | 94.72 | 99.57 ± 0.33 | 94.83 ± 0.98 | 90.27 ± 0.93 | 93.63 ± 0.99 | 94.92 ± 1.15 |
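The FLOPs (M) column can be reproduced approximately with a generic FLOP counter. The sketch below uses fvcore's FlopCountAnalysis, which is our choice for illustration rather than the profiler used by the authors, and assumes the usual 112×112 aligned face crop as input; note that fvcore counts fused multiply-adds, so the figure may differ from the table depending on the counting convention.

import torch
from fvcore.nn import FlopCountAnalysis  # third-party counter; an illustrative choice, not a repo dependency
from backbones import get_model

model = get_model("edgeface_base").eval()
dummy = torch.randn(1, 3, 112, 112)  # assumed input resolution of aligned face crops
flops = FlopCountAnalysis(model, dummy)
print(f"{flops.total() / 1e6:.2f} MFLOPs (multiply-add convention)")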
Running EdgeFace-Base
- Minimal code to instantiate the model and perform inference:
import torch
from torchvision import transforms
from face_alignment import align
from backbones import get_model

# load model
model_name = "edgeface_base"
model = get_model(model_name)
checkpoint_path = f'checkpoints/{model_name}.pt'
model.load_state_dict(torch.load(checkpoint_path, map_location='cpu'))
model.eval()

# preprocessing: convert to tensor and normalize RGB channels to [-1, 1]
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])

path = 'path_to_face_image'
aligned = align.get_aligned_face(path)  # align face
transformed_input = transform(aligned).unsqueeze(0)  # preprocessing; add batch dimension

# extract embedding
with torch.no_grad():
    embedding = model(transformed_input)
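For face verification, two embeddings extracted as above are typically compared with cosine similarity. The following is a minimal sketch; the function name and the decision threshold are hypothetical placeholders that would need tuning on a validation set.

import torch
import torch.nn.functional as F

def is_same_person(embedding_a: torch.Tensor, embedding_b: torch.Tensor, threshold: float = 0.3) -> bool:
    # embedding_a / embedding_b: model outputs for two aligned face images, shape (1, d)
    score = F.cosine_similarity(embedding_a, embedding_b).item()
    return score > threshold  # hypothetical threshold; tune on a validation set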
Please check the project's GitHub repository for the complete code and pretrained checkpoints.
License
EdgeFace is released under the CC BY-NC-SA 4.0 license (Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International).
Copyright
(c) 2024, Anjith George, Christophe Ecabert, Hatef Otroshi Shahreza, Ketan Kotwal, Sébastien Marcel; Idiap Research Institute, Martigny 1920, Switzerland.
https://gitlab.idiap.ch/bob/bob.paper.tbiom2023_edgeface/-/blob/master/LICENSE
Please refer to the link above for the full license and copyright terms and conditions.
Citation
If you find our work useful, please cite the following publication:
@article{edgeface,
title={EdgeFace: Efficient Face Recognition Model for Edge Devices},
author={George, Anjith and Ecabert, Christophe and Shahreza, Hatef Otroshi and Kotwal, Ketan and Marcel, Sebastien},
journal={IEEE Transactions on Biometrics, Behavior, and Identity Science},
year={2024}
}