This repo contains the data and pretrained models for FacialMMT, a framework that uses facial sequences of the real speaker to aid multimodal emotion recognition.

The model's performance on the MELD test set:

| Release | W-F1 (%) |
|---------|----------|
| 23-07-10 | 66.73 |
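The W-F1 column reports weighted F1: per-class F1 scores averaged with each class weighted by its support (its count in the ground truth). A minimal pure-Python sketch of that computation, equivalent to `sklearn.metrics.f1_score(..., average="weighted")`; the labels below are illustrative, not MELD data:

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Per-class F1 averaged with weights proportional to class support."""
    classes = set(y_true) | set(y_pred)
    support = Counter(y_true)  # how often each class occurs in the ground truth
    total = 0.0
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        pred_c = sum(p == c for p in y_pred)   # predicted-positive count
        true_c = support[c]                    # actual-positive count
        prec = tp / pred_c if pred_c else 0.0
        rec = tp / true_c if true_c else 0.0
        f1 = 2 * prec * rec / (prec + rec) if (prec + rec) else 0.0
        total += true_c * f1
    return total / len(y_true)

# Example with 3 emotion classes (toy labels):
score = weighted_f1([0, 0, 1, 2, 2, 2], [0, 1, 1, 2, 2, 0])
```

Weighting by support means frequent classes dominate the score, which matters on MELD since its emotion distribution is heavily imbalanced.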

At the time of release, it ranked third on the corresponding Papers with Code leaderboard.

For a more detailed explanation of how to use our model, please check out this repo.


Please consider citing the following if this repo is helpful to your research.

```
@inproceedings{zheng2023facial,
  title={A Facial Expression-Aware Multimodal Multi-task Learning Framework for Emotion Recognition in Multi-party Conversations},
  author={Zheng, Wenjie and Yu, Jianfei and Xia, Rui and Wang, Shijin},
  booktitle={Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
  year={2023}
}
```