
# FacialMMT

This repo contains the data and pretrained models for FacialMMT, a framework that uses the facial sequences of the real speaker to aid multimodal emotion recognition.

The model performance on the MELD test set is:

| Release | W-F1 (%) |
| --- | --- |
| 23-07-10 | 66.73 |

At the time of release, it ranked third on the corresponding Papers with Code leaderboard.

If you are interested, please check out this repo for a more detailed explanation of how to use our model.
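
As a rough sketch (not the authors' official usage code), a released checkpoint can typically be downloaded and loaded with PyTorch and `huggingface_hub`; the repo id and file name below are placeholders, so consult the GitHub repo for the actual entry points and file names.

```python
# Minimal sketch of fetching and loading a checkpoint.
# repo_id and filename are placeholders; check the GitHub repo
# and the Files tab of this model card for the real names.
import torch
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="<this-model-repo>",          # placeholder: id of this model repo
    filename="facialmmt_checkpoint.pt",   # placeholder checkpoint file name
)
state_dict = torch.load(ckpt_path, map_location="cpu")
# Build the FacialMMT model from the authors' codebase, then:
# model.load_state_dict(state_dict)
```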

## Citation

Please consider citing the following if this repo is helpful to your research.

```bibtex
@inproceedings{zheng2023facial,
  title={A Facial Expression-Aware Multimodal Multi-task Learning Framework for Emotion Recognition in Multi-party Conversations},
  author={Zheng, Wenjie and Yu, Jianfei and Xia, Rui and Wang, Shijin},
  booktitle={Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
  pages={15445--15459},
  year={2023}
}
```