Update README.md
README.md
CHANGED
@@ -24,6 +24,16 @@ It is currently ranked third on [paperswithcode](https://paperswithcode.com/sota
 
 If you're interested, please check out this [repo](https://github.com/NUSTM/FacialMMT) for a more detailed explanation of how to use our model.
 
-Paper: [A Facial Expression-Aware Multimodal Multi-task Learning Framework for Emotion Recognition in Multi-party Conversations](https://aclanthology.org/2023.acl-long.861.pdf). In Proceedings of ACL 2023 (Main Conference), pp. 15445–15459.
-
 
+### Citation
+
+Please consider citing the following if this repo is helpful to your research.
+```
+@inproceedings{zheng2023facial,
+  title={A Facial Expression-Aware Multimodal Multi-task Learning Framework for Emotion Recognition in Multi-party Conversations},
+  author={Zheng, Wenjie and Yu, Jianfei and Xia, Rui and Wang, Shijin},
+  booktitle={Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
+  pages={15445--15459},
+  year={2023}
+}
+```