---
license: mit
metrics:
- recall
pipeline_tag: video-classification
tags:
- FER
- Image Classification
library_name: PyTorch
---

# Static and dynamic facial emotion recognition using the Emo-AffectNet model

[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/in-search-of-a-robust-facial-expressions/facial-expression-recognition-on-affectnet)](https://paperswithcode.com/paper/in-search-of-a-robust-facial-expressions)
[![App](https://img.shields.io/badge/🤗-DEMO--Facial%20Expressions%20Recognition-FFD21F.svg)](https://huggingface.co/spaces/ElenaRyumina/Facial_Expression_Recognition)

This is the Emo-AffectNet model for facial emotion recognition from videos and images. To see emotions detected from your webcam, run the ``run_webcam`` script. Webcam result:

*(demo GIF: webcam emotion recognition result)*

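The ``run_webcam`` script handles this end to end; the sketch below only illustrates the general idea of frame-by-frame inference from a webcam. The checkpoint name ``emo_affectnet.pt``, the 224×224 input size, the normalization, and the emotion label order are assumptions for illustration, not the repository's actual interface — see the GitHub repository for the real preprocessing pipeline.

```python
# Minimal sketch of frame-by-frame webcam inference (assumptions noted below).
import cv2
import torch

# Assumed 7-class label order; verify against the repository.
EMOTIONS = ["Neutral", "Happiness", "Sadness", "Surprise", "Fear", "Disgust", "Anger"]

# Hypothetical TorchScript export of the backbone; the real repo loads weights differently.
model = torch.jit.load("emo_affectnet.pt").eval()

cap = cv2.VideoCapture(0)  # default webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # BGR (OpenCV) -> RGB, resized to the assumed 224x224 model input.
    rgb = cv2.cvtColor(cv2.resize(frame, (224, 224)), cv2.COLOR_BGR2RGB)
    x = torch.from_numpy(rgb).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=-1).squeeze(0)
    label = EMOTIONS[int(probs.argmax())]
    cv2.putText(frame, f"{label}: {probs.max():.2f}", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("Emo-AffectNet", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press "q" to quit
        break
cap.release()
cv2.destroyAllWindows()
```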
For more information, see the [GitHub repository](https://github.com/ElenaRyumina/EMO-AffectNetModel).

### Citation

If you use the Emo-AffectNet model in your research, please consider citing the accompanying [paper](https://www.sciencedirect.com/science/article/pii/S0925231222012656). Here is an example BibTeX entry:
```bibtex
@article{RYUMINA2022,
  title   = {In Search of a Robust Facial Expressions Recognition Model: A Large-Scale Visual Cross-Corpus Study},
  author  = {Elena Ryumina and Denis Dresvyanskiy and Alexey Karpov},
  journal = {Neurocomputing},
  year    = {2022},
  doi     = {10.1016/j.neucom.2022.10.013},
  url     = {https://www.sciencedirect.com/science/article/pii/S0925231222012656},
}
```