ACL-OCL / Base_JSON /prefixW /json /wassa /2021.wassa-1.9.json
Benjamin Aw
Add updated pkl file v3
6fa4bc9
raw
history blame contribute delete
No virus
95.3 kB
{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T06:07:27.171838Z"
},
"title": "An End-to-End Network for Emotion-Cause Pair Extraction",
"authors": [
{
"first": "Aaditya",
"middle": [],
"last": "Singh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indian Institute of Technology Kanpur (IIT Kanpur",
"location": {}
},
"email": ""
},
{
"first": "Shreeshail",
"middle": [],
"last": "Hingane",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indian Institute of Technology Kanpur (IIT Kanpur",
"location": {}
},
"email": ""
},
{
"first": "Saim",
"middle": [],
"last": "Wani",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indian Institute of Technology Kanpur (IIT Kanpur",
"location": {}
},
"email": ""
},
{
"first": "Ashutosh",
"middle": [],
"last": "Modi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indian Institute of Technology Kanpur (IIT Kanpur",
"location": {}
},
"email": "ashutoshm@cse.iitk.ac.in"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The task of Emotion-Cause Pair Extraction (ECPE) aims to extract all potential clausepairs of emotions and their corresponding causes in a document. Unlike the more wellstudied task of Emotion Cause Extraction (ECE), ECPE does not require the emotion clauses to be provided as annotations. Previous works on ECPE have either followed a multi-stage approach where emotion extraction, cause extraction, and pairing are done independently or use complex architectures to resolve its limitations. In this paper, we propose an end-to-end model for the ECPE task. Due to the unavailability of an English language ECPE corpus, we adapt the NTCIR-13 ECE corpus and establish a baseline for the ECPE task on this dataset. On this dataset, the proposed method produces significant performance improvements (\u223c 6.5% increase in F1 score) over the multi-stage approach and achieves comparable performance to the state of the art methods.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "The task of Emotion-Cause Pair Extraction (ECPE) aims to extract all potential clausepairs of emotions and their corresponding causes in a document. Unlike the more wellstudied task of Emotion Cause Extraction (ECE), ECPE does not require the emotion clauses to be provided as annotations. Previous works on ECPE have either followed a multi-stage approach where emotion extraction, cause extraction, and pairing are done independently or use complex architectures to resolve its limitations. In this paper, we propose an end-to-end model for the ECPE task. Due to the unavailability of an English language ECPE corpus, we adapt the NTCIR-13 ECE corpus and establish a baseline for the ECPE task on this dataset. On this dataset, the proposed method produces significant performance improvements (\u223c 6.5% increase in F1 score) over the multi-stage approach and achieves comparable performance to the state of the art methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "There have been several works on emotions prediction from the text (Alswaidan and Menai, 2020; Witon et al., 2018) as well as generating emotion oriented texts (Ghosh et al., 2017; Colombo et al., 2019; Goswamy et al., 2020) . However, recently the focus has also shifted to finding out the underlying cause(s) that lead to the emotion expressed in the text. In this respect, Gui et al. (2016) proposed the Emotion Cause Extraction (ECE), a task aimed at detecting the cause behind a given emotion annotation. The task is defined as a clause level classification problem. The text is divided at the clause level and the task is to detect the clause containing the cause, given the clause containing the emotion. However, the applicability of models solving the ECE problem is limited by the fact that emotion annotations are required at test time. More recently, introduced the Emotion-Cause Pair Extraction (ECPE) task i.e. extracting all possible emotion-cause clause pairs in a document with no emotion annotations. Thus, ECPE opens up avenues for applications of real-time sentiment-cause analysis in tweets and product reviews. ECPE builds on the existing and well studied ECE task. Figure 1 shows an example with ground truth annotations.",
"cite_spans": [
{
"start": 95,
"end": 114,
"text": "Witon et al., 2018)",
"ref_id": "BIBREF28"
},
{
"start": 160,
"end": 180,
"text": "(Ghosh et al., 2017;",
"ref_id": "BIBREF15"
},
{
"start": 181,
"end": 202,
"text": "Colombo et al., 2019;",
"ref_id": null
},
{
"start": 203,
"end": 224,
"text": "Goswamy et al., 2020)",
"ref_id": "BIBREF16"
},
{
"start": 376,
"end": 393,
"text": "Gui et al. (2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 1188,
"end": 1196,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Xia and Ding (2019) use a two-stage architecture to extract potential emotion-cause clauses. In Stage 1, the model extracts a set of emotion clauses and a set of cause clauses (not mutually exclusive) from the document. In Stage 2, it performs emotioncause pairing and filtering, i.e. eliminating pairs that the model predicts as an invalid emotion-cause pair. However, this fails to fully capture the mutual dependence between emotion and cause clauses since clause extraction happens in isolation from the pairing step. Thus, the model is never optimized using the overall task as the objective. Also, certain emotion clauses are likely not to be detected without the corresponding cause clauses as the context for that emotion. Recent methods such as Ding et al. (2020a) and Ding et al. (2020b) use complex encoder and classifier architectures to resolve these limitations of the multi-stage method.",
"cite_spans": [
{
"start": 754,
"end": 773,
"text": "Ding et al. (2020a)",
"ref_id": "BIBREF8"
},
{
"start": 778,
"end": 797,
"text": "Ding et al. (2020b)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose an end-to-end model to explicitly demonstrate the effectiveness of joint training on the ECPE task. The proposed model attempts to take into account the mutual interdependence between emotion and cause clauses. Based on the benchmark English-language corpus used in the ECE task of the NTCIR-13 workshop (Gao et al., 2017) , we evaluate our approach on this dataset after adapting it for the ECPE task. We demonstrate that the proposed approach works sig-Clause 1: Adele arrived at her apartment late in the afternoon after a long day of work. Clause 2: She was still furious with her husband for not remembering her 40th birthday. Clause 3: As soon as she unlocked the door, she gasped with surprise; Clause 4: Mikhael and Harriet had organized a huge party for her. Figure 1 : An example document. The above example contains two emotion-cause pairs. Clause 2 is an emotion clause (furious) and is also the corresponding cause clause (for not remembering her 40th birthday). Clause 3 is an emotion clause (surprise) and Clause 4 is its corresponding cause clause (organized a huge party for her).",
"cite_spans": [
{
"start": 330,
"end": 348,
"text": "(Gao et al., 2017)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 794,
"end": 802,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "nificantly better than the multi-stage method and achieves comparable performance to the state of the art methods. We also show that when used for the ECE task by providing ground truth emotion annotations, our model beats the state of the art performance of ECE models on the introduced corpus. We provide the dataset and implementations of our models via GitHub 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The problem of Emotion Cause Extraction (ECE) has been studied extensively over the past decade. ECE was initially proposed as a word-level sequence detection problem in . Attempts to solve this task focused on either classical machine learning techniques (Ghazi et al., 2015) , or on rule-based methods (Neviarouskaya and Aono, 2013; Gao et al., 2015) . Subsequently, the problem was reframed as a clause-level classification problem and the Chinese-language dataset introduced by Gui et al. (2016) has since become the benchmark dataset for ECE and the task has been an active area of research Yu et al., 2019; Li et al., 2018 Li et al., , 2019 Fan et al., 2019) .",
"cite_spans": [
{
"start": 256,
"end": 276,
"text": "(Ghazi et al., 2015)",
"ref_id": "BIBREF14"
},
{
"start": 304,
"end": 334,
"text": "(Neviarouskaya and Aono, 2013;",
"ref_id": "BIBREF24"
},
{
"start": 335,
"end": 352,
"text": "Gao et al., 2015)",
"ref_id": "BIBREF12"
},
{
"start": 482,
"end": 499,
"text": "Gui et al. (2016)",
"ref_id": "BIBREF18"
},
{
"start": 596,
"end": 612,
"text": "Yu et al., 2019;",
"ref_id": "BIBREF31"
},
{
"start": 613,
"end": 628,
"text": "Li et al., 2018",
"ref_id": "BIBREF22"
},
{
"start": 629,
"end": 646,
"text": "Li et al., , 2019",
"ref_id": "BIBREF21"
},
{
"start": 647,
"end": 664,
"text": "Fan et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "However, the main limitation of ECE remains that it requires emotion annotations even during test time, which severely limits the applicability of ECE models. To address this, introduced a new task called emotion-cause pair extraction (ECPE), which extracts both emotion and its cause without requiring the emotion annotation. They demonstrated the results of their two-stage architecture on the benchmark Chinese language ECE corpus (Gui et al., 2016) . Following their work, several works have been proposed to address the limitations of the two-stage architecture (Ding et al. (2020a) ",
"cite_spans": [
{
"start": 434,
"end": 452,
"text": "(Gui et al., 2016)",
"ref_id": "BIBREF18"
},
{
"start": 567,
"end": 587,
"text": "(Ding et al. (2020a)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Formally, a document consists of text that is segmented into an ordered set of clauses D = [c 1 , c 2 , ..., c d ] and the ECPE task aims to extract a set of emotion-cause pairs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "3"
},
{
"text": "P = {..., (c i , c j ), ...} (c i , c j \u2208 D)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "3"
},
{
"text": ", where c i is an emotion clause and c j is the corresponding cause clause. In the ECE task, we are additionally given the annotations of emotion clauses and the goal is to detect (one or more) clauses containing the cause for each emotion clause.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "3"
},
{
"text": "We propose an end-to-end emotion cause pairs extraction model (Figure 2 ), henceforth referred to as E2E-PExt E (refer to section 6 for the naming convention). The model takes an entire document as its input and computes, for each ordered pair of clauses (c i , c j ), the probability of being a potential emotion-cause pair. To facilitate the learning of suitable clause representations required for this primary task, we train the model on two other auxiliary tasks: Emotion Detection and Cause Detection.",
"cite_spans": [],
"ref_spans": [
{
"start": 62,
"end": 71,
"text": "(Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
{
"text": "We propose a hierarchical architecture. Word level representations are used to obtain clausal representations (vi BiLSTM) and clause level representations are further contextualized using another BiLSTM network. The resulting contextualized clause representations are then used for the classification task. Let w j i represent the vector representation of the j th word in the i th clause. Each clause c i in the document d is passed through a word-level encoder (BiLSTM + Attention (Bahdanau et al., 2015) ) to obtain the clause representation s i . The clause embeddings are then fed into two separate clause-level encoders (Emotion-Encoder and Cause-Encoder) each of which corresponds, respectively, to the two auxiliary tasks. The purpose of the clause-level encoders is to help learn contextualized clause representations by incorporating context from the neighboring clauses in the document. For each clause c i , we obtain its contextualized representations r e i and r c i by passing it through a BiLSTM network.",
"cite_spans": [
{
"start": 483,
"end": 506,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
{
"text": "For the clause c i , its contextual representations r e i and r c i are then used to predict whether the clause is an emotion-clause and a cause-clause respectively, i.e., y e i = softmax(W e * r e i + b e );",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
{
"text": "(1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "y c i = softmax(W c * r c i + b c )",
"eq_num": "(2)"
}
],
"section": "Approach",
"sec_num": "4"
},
{
"text": "As observed by , we also noticed that performance on the two auxiliary tasks could be improved if done in an interactive manner rather than independently. Hence, the Cause-Encoder also makes use of the corresponding emotion-detection prediction y e i , when generating r c i (Figure 2 ). For the primary task, every ordered pair (c i , c j ) is represented by concatenating r e i , r c j and pe ij , wherein pe ij is the positional embedding vector representing the relative positioning between the two clauses i, j in the document (Shaw et al., 2018) .",
"cite_spans": [
{
"start": 532,
"end": 551,
"text": "(Shaw et al., 2018)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 275,
"end": 284,
"text": "(Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
{
"text": "The primary task is solved by passing this pairrepresentation through a fully-connected neural network to get the pair-predictions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "r p ij = [r e i \u2295 r c j \u2295 pe ij ]; (3) h p ij = ReLU(W p 1 * r p ij + b p 1 ); (4) y p ij = softmax(W p 2 * h p ij + b p 2 )",
"eq_num": "(5)"
}
],
"section": "Approach",
"sec_num": "4"
},
{
"text": "To train the end-to-end model, loss function is set as the weighted sum of loss on the primary task as well as the two auxiliary tasks i.e.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
{
"text": "L total = \u03bb c * L c + \u03bb e * L e + \u03bb p * L p , where L e , L c",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
{
"text": ", and L p are cross-entropy errors for emotion, cause and pair predictions respectively. Further, L p = L pos + loss weight * L neg , where L pos and L neg are the errors attributed to positive and negative examples respectively. We use hyperparameter loss weight to scale down L neg , since there are far more negative examples than positive ones in the primary pairing task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
{
"text": "We adapt an existing Emotion-Cause Extraction (ECE) (Fan et al., 2019; Li et al., 2019) corpus for evaluating our proposed models (as well as the architectures proposed in previous work). The corpus was introduced in the NTCIR-13 Workshop (Gao et al., 2017) for the ECE challenge.",
"cite_spans": [
{
"start": 52,
"end": 70,
"text": "(Fan et al., 2019;",
"ref_id": "BIBREF10"
},
{
"start": 71,
"end": 87,
"text": "Li et al., 2019)",
"ref_id": "BIBREF21"
},
{
"start": 239,
"end": 257,
"text": "(Gao et al., 2017)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus/Data Description",
"sec_num": "5"
},
{
"text": "The corpus consists of 2843 documents taken from several English novels. Each document is annotated with the following information: i) emotioncause pairs present in the document, that is, the set of emotion clauses and their corresponding cause clauses; ii) emotion category of each clause; and iii) the keyword within the clause denoting the labeled emotion. We do not use the emotion category or the keyword during training of either ECE or ECPE tasks only the emotion-cause pairs are used. At test time, none of the annotations are used for the ECPE task. For ECE task, emotion annotation is provided at test time and the model predicts the corresponding cause clauses. 80%-10%-10% splits are used for training, validation and testing. 10 such randomly generated splits are used to get statistically significant results, and the average results are reported. For the purpose of evaluation on our dataset, we reproduced the two-stage model: ECPE 2-stage . We also adapted two models that achieve state of the art performance on the ECPE task: ECPE-2D(BERT) (Ding et al., 2020a) and ECPE-MLL(ISML-6) (Ding et al., 2020b ) and compared them against our model: E2E-PExt E . The model is trained for 15 epochs using Adam optimizer (Kingma and Ba, 2014). The learning rate and batch size were set to 0.005 and 32 respectively. Model weights and biases were initialized by sampling from a uniform distribution U(\u22120.10, 0.10). GloVe word embeddings (Pennington et al., 2014) of 200 dimension are used. For regularization, we set the dropout rate to 0.8 for word embeddings and L2 weight decay of 1e-5 over softmax parameters. We set \u03bb c : \u03bb e : \u03bb p = 1 : 1 : 2.5. The values chosen through grid search on the hyperparameter space reflect the higher importance of the primary pair detection task compared to the auxiliary tasks. To obtain better positional embeddings which encode the relative positioning between clauses, we trained randomly initialized embeddings after setting the clipping distance (Shaw et al., 2018) to 10 all clauses that have a distance of 10 or more between them have the same positional embedding.",
"cite_spans": [
{
"start": 1059,
"end": 1079,
"text": "(Ding et al., 2020a)",
"ref_id": "BIBREF8"
},
{
"start": 1101,
"end": 1120,
"text": "(Ding et al., 2020b",
"ref_id": "BIBREF9"
},
{
"start": 1444,
"end": 1469,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF25"
},
{
"start": 1996,
"end": 2015,
"text": "(Shaw et al., 2018)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus/Data Description",
"sec_num": "5"
},
{
"text": "Cause Extraction Pair Extraction Precision",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Emotion Extraction",
"sec_num": null
},
{
"text": "We use the same evaluation metrics (precision, recall and F1-score), as used in the past work on ECE and ECPE tasks. Following , the metric definitions are defined as: The results are summarized in Table 1 . Our model E2E-PExt E outperforms ECPE 2-stage on the task of emotion-cause pair extraction by a significant margin of 6.5%. This explicitly demonstrates that an end-to-end model works much better since it leverages the mutual dependence between the emotion and cause clauses. As shown in Table 1 , our model achieves comparable performance to the highly parameterized and complex models ECPE-2D(BERT) and ECPE-MLL(ISML-6) (which either leverage a pre-trained BERT (Devlin et al., 2019) and 2D Transformer (Vaswani et al., 2017) or iterative BiL-STM encoder). To further demonstrate this point, we compare the number of trainable parameters across models in Table 2 .",
"cite_spans": [
{
"start": 672,
"end": 693,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF7"
},
{
"start": 713,
"end": 735,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 198,
"end": 205,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 496,
"end": 503,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 865,
"end": 872,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Emotion Extraction",
"sec_num": null
},
{
"text": "P = #",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Emotion Extraction",
"sec_num": null
},
{
"text": "Trainable parameters E2E-PExt E 790,257 ECPE-2D(BERT) 1,064,886 ECPE-MLL 6,370,452 Table 2 : Comparison of trainable parameters of our model (E2E-PExt E ) with the state-of-the-art models (ECPE-2D(BERT) and ECPE-MLL(ISML-6)). We achieve comparable performance with these models with fewer parameters and simpler architecture.",
"cite_spans": [],
"ref_spans": [
{
"start": 83,
"end": 90,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Method",
"sec_num": null
},
{
"text": "We also evaluate our model on the ECE task. For this, we use a variant of E2E-PExt E i.e. E2E-CExt, so that it utilizes the emotion annotations. Specifically, we incorporate the knowledge of emotion annotations by incorporating them into the Cause-Encoder as well as the Pair-Prediction-Module and show that this improves performance in both the primary pair prediction task as well as the auxiliary cause detection task (appendix section A). E2E-CExt outperforms the state of the art model RHNN: (Fan et al., 2019) on the ECE task. This is indicative of the generalization capability of our model and further demonstrates that performance on ECPE can be enhanced with improvements in the quality of emotion predictions.",
"cite_spans": [
{
"start": 497,
"end": 515,
"text": "(Fan et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": null
},
{
"text": "We analyzed the effect of different components of the model on performance via ablation experiments. We present some notable results below. More results are presented in the Appendix. Positional Embeddings: For finding the extent to which positional embeddings affect the performance, we train E2E-PExt E without positional embeddings. This resulted in a slight drop in validation F1 score from 51.34 to 50.74, which suggests that the network is robust to withstanding the loss of distance information at the clause-level. Loss Weighting: To handle the problem of data imbalance, we varied loss weight. We observed (Figure 3 ) that assigning less weight to the negative examples leads to more predicted positives, and hence a better recall but worse precision.",
"cite_spans": [],
"ref_spans": [
{
"start": 615,
"end": 624,
"text": "(Figure 3",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Ablation Experiments",
"sec_num": "6.1"
},
{
"text": "In this paper, we demonstrated that a simple endto-end model can achieve competitive performance on the ECPE task by leveraging the inherent correlation between emotions and their causes and optimizing directly on the overall objective. We also showed that a variant of our model which further uses emotion annotations, outperforms the previously best performing model on the ECE task, thereby showing its applicability to variety of related tasks involving emotion analysis. In future, we plan on developing a larger benchmark Englishlanguage dataset for the ECPE task. We plan to explore other model-architectures which we expect will help us learn richer representations for causality detection in clause-pairs. Table 3 : Results of Model variants using precision, recall, and F1-score on the ECPE task and the two sub-tasks.",
"cite_spans": [],
"ref_spans": [
{
"start": 715,
"end": 722,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
}
],
"back_matter": [
{
"text": "We present a variant of E2E-PExt E called E2E-PExt C (Figure 4 ). Here, we feed the clause embeddings s i into a clause-level BiLSTM to obtain context-aware clause encodings r c i which is fed into a softmax layer to obtain cause predictions y c i . We concatenate the cause predictions with the clause embeddings, s i \u2295 y c i , and feed them into another clause-level BiLSTM to obtain emotion representations r e i , which are fed into another softmax layer to obtain the emotion predictions y e i . The pair prediction network remains the same as described for E2E-PExt E . For finding the extent to which emotion labels can help in improving pair predictions, we present a variant of E2E-PExt E called E2E-CExt. We use the true emotion labels instead of emotion predictions y e i to obtain the context-aware clause encodings r c i . We also concatenate them with the input of pair-prediction network r e i \u2295 r c j \u2295 pe ij to make full use of the additional knowledge of emotion labels. Similarly, the corresponding variant of E2E-PExt C which utilizes true cause labels is called E2E-EExt. The results of model variation are shown in Table 3 . Here, the pair prediction network consists of a single fully connected layer. After comparing the performance of E2E-PExt C and E2E-EExt we conclude that huge improvements in performance can be achieved if the quality of cause predictions is improved.",
"cite_spans": [],
"ref_spans": [
{
"start": 53,
"end": 62,
"text": "(Figure 4",
"ref_id": null
},
{
"start": 1137,
"end": 1144,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Appendix A Model Variants",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A survey of state-of-the-art approaches for emotion recognition in text. Knowledge and Information Systems",
"authors": [],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nourah Alswaidan and Mohamed El Bachir Menai. 2020. A survey of state-of-the-art approaches for emotion recognition in text. Knowledge and Infor- mation Systems, pages 1-51.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A unified sequence labeling model for emotion cause pair extraction",
"authors": [
{
"first": "Xinhong",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Qing",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jianping",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "208--218",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xinhong Chen, Qing Li, and Jianping Wang. 2020. A unified sequence labeling model for emotion cause pair extraction. In Proceedings of the 28th Inter- national Conference on Computational Linguistics, pages 208-218.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Emotion cause detection with linguistic constructions",
"authors": [
{
"first": "Ying",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sophia Yat Mei",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Shoushan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Chu-Ren",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "179--187",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ying Chen, Sophia Yat Mei Lee, Shoushan Li, and Chu- Ren Huang. 2010. Emotion cause detection with linguistic constructions. In Proceedings of the 23rd International Conference on Computational Linguis- tics (Coling 2010), pages 179-187.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A symmetric local search network for emotion-cause pair extraction",
"authors": [
{
"first": "Zifeng",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Zhiwei",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Yafeng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Qing",
"middle": [],
"last": "Gu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "139--149",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zifeng Cheng, Zhiwei Jiang, Yafeng Yin, Hua Yu, and Qing Gu. 2020. A symmetric local search network for emotion-cause pair extraction. In Proceedings of the 28th International Conference on Computational Linguistics, pages 139-149.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Affect-driven dialog generation",
"authors": [],
"year": null,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "3734--3743",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1374"
]
},
"num": null,
"urls": [],
"raw_text": "Affect-driven dialog generation. In Proceedings of the 2019 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3734-3743, Minneapolis, Min- nesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Ecpe-2d: Emotion-cause pair extraction based on joint twodimensional representation, interaction and prediction",
"authors": [
{
"first": "Zixiang",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Jianfei",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3161--3170",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zixiang Ding, Rui Xia, and Jianfei Yu. 2020a. Ecpe-2d: Emotion-cause pair extraction based on joint two- dimensional representation, interaction and predic- tion. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3161-3170.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "End-toend emotion-cause pair extraction based on sliding window multi-label learning",
"authors": [
{
"first": "Zixiang",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Jianfei",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "3574--3583",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zixiang Ding, Rui Xia, and Jianfei Yu. 2020b. End-to- end emotion-cause pair extraction based on sliding window multi-label learning. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3574-3583.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A knowledge regularized hierarchical approach for emotion cause analysis",
"authors": [
{
"first": "Chuang",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Hongyu",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Jiachen",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Lin",
"middle": [],
"last": "Gui",
"suffix": ""
},
{
"first": "Lidong",
"middle": [],
"last": "Bing",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Ruifeng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Ruibin",
"middle": [],
"last": "Mao",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "5614--5624",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1563"
]
},
"num": null,
"urls": [],
"raw_text": "Chuang Fan, Hongyu Yan, Jiachen Du, Lin Gui, Li- dong Bing, Min Yang, Ruifeng Xu, and Ruibin Mao. 2019. A knowledge regularized hierarchical ap- proach for emotion cause analysis. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5614-5624, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Transition-based directed graph construction for emotion-cause pair extraction",
"authors": [
{
"first": "Chuang",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Chaofa",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Jiachen",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Lin",
"middle": [],
"last": "Gui",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Ruifeng",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3707--3717",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chuang Fan, Chaofa Yuan, Jiachen Du, Lin Gui, Min Yang, and Ruifeng Xu. 2020. Transition-based di- rected graph construction for emotion-cause pair ex- traction. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 3707-3717.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A rule-based approach to emotion cause detection for chinese micro-blogs",
"authors": [
{
"first": "Kai",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Jiushuo",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2015,
"venue": "Expert Syst. Appl",
"volume": "42",
"issue": "9",
"pages": "",
"other_ids": {
"DOI": [
"10.1016/j.eswa.2015.01.064"
]
},
"num": null,
"urls": [],
"raw_text": "Kai Gao, Hua Xu, and Jiushuo Wang. 2015. A rule-based approach to emotion cause detection for chinese micro-blogs. Expert Syst. Appl., 42(9):45174528.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Overview of ntcir-13 eca task",
"authors": [
{
"first": "Qinghong",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Gui",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Yulan",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jiannan",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Ruifeng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Kam-Fai",
"middle": [],
"last": "Wong",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the NTCIR-13 Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qinghong Gao, Gui Lin, Yulan He, Jiannan Hu, Qin Lu, Ruifeng Xu, and Kam-Fai Wong. 2017. Overview of ntcir-13 eca task. In Proceedings of the NTCIR-13 Conference.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Detecting emotion stimuli in emotion-bearing sentences",
"authors": [
{
"first": "Diman",
"middle": [],
"last": "Ghazi",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Inkpen",
"suffix": ""
},
{
"first": "Stan",
"middle": [],
"last": "Szpakowicz",
"suffix": ""
}
],
"year": 2015,
"venue": "International Conference on Intelligent Text Processing and Computational Linguistics",
"volume": "",
"issue": "",
"pages": "152--165",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diman Ghazi, Diana Inkpen, and Stan Szpakowicz. 2015. Detecting emotion stimuli in emotion-bearing sentences. In International Conference on Intelli- gent Text Processing and Computational Linguistics, pages 152-165. Springer.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Affect-lm: A neural language model for customizable affective text generation",
"authors": [
{
"first": "Sayan",
"middle": [],
"last": "Ghosh",
"suffix": ""
},
{
"first": "Mathieu",
"middle": [],
"last": "Chollet",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Laksana",
"suffix": ""
},
{
"first": "Louis-Philippe",
"middle": [],
"last": "Morency",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Scherer",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1704.06851"
]
},
"num": null,
"urls": [],
"raw_text": "Sayan Ghosh, Mathieu Chollet, Eugene Laksana, Louis-Philippe Morency, and Stefan Scherer. 2017. Affect-lm: A neural language model for customiz- able affective text generation. arXiv preprint arXiv:1704.06851.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Adapting a language model for controlled affective text generation",
"authors": [
{
"first": "Tushar",
"middle": [],
"last": "Goswamy",
"suffix": ""
},
{
"first": "Ishika",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Ahsan",
"middle": [],
"last": "Barkati",
"suffix": ""
},
{
"first": "Ashutosh",
"middle": [],
"last": "Modi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2787--2801",
"other_ids": {
"DOI": [
"10.18653/v1/2020.coling-main.251"
]
},
"num": null,
"urls": [],
"raw_text": "Tushar Goswamy, Ishika Singh, Ahsan Barkati, and Ashutosh Modi. 2020. Adapting a language model for controlled affective text generation. In Proceed- ings of the 28th International Conference on Com- putational Linguistics, pages 2787-2801, Barcelona, Spain (Online). International Committee on Compu- tational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A question answering approach for emotion cause extraction",
"authors": [
{
"first": "Lin",
"middle": [],
"last": "Gui",
"suffix": ""
},
{
"first": "Jiannan",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Yulan",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Ruifeng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Jiachen",
"middle": [],
"last": "Du",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1593--1602",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1167"
]
},
"num": null,
"urls": [],
"raw_text": "Lin Gui, Jiannan Hu, Yulan He, Ruifeng Xu, Qin Lu, and Jiachen Du. 2017. A question answering ap- proach for emotion cause extraction. In Proceed- ings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1593-1602, Copenhagen, Denmark. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Event-driven emotion cause extraction with corpus construction",
"authors": [
{
"first": "Lin",
"middle": [],
"last": "Gui",
"suffix": ""
},
{
"first": "Dongyin",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Ruifeng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1639--1649",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1170"
]
},
"num": null,
"urls": [],
"raw_text": "Lin Gui, Dongyin Wu, Ruifeng Xu, Qin Lu, and Yu Zhou. 2016. Event-driven emotion cause extrac- tion with corpus construction. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 1639-1649, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A text-driven rule-based system for emotion cause detection",
"authors": [
{
"first": "Sophia Yat Mei",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Chu-Ren",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text",
"volume": "",
"issue": "",
"pages": "45--53",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sophia Yat Mei Lee, Ying Chen, and Chu-Ren Huang. 2010. A text-driven rule-based system for emotion cause detection. In Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text, pages 45-53, Los Angeles, CA. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Context-aware emotion cause analysis with multi-attention-based neural network",
"authors": [
{
"first": "Xiangju",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Shi",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Daling",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yifei",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2019,
"venue": "Knowl. Based Syst",
"volume": "174",
"issue": "",
"pages": "205--218",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiangju Li, Shi Feng, Daling Wang, and Yifei Zhang. 2019. Context-aware emotion cause analysis with multi-attention-based neural network. Knowl. Based Syst., 174:205-218.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A co-attention neural network model for emotion cause analysis with emotional context awareness",
"authors": [
{
"first": "Xiangju",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Kaisong",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Shi",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Daling",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yifei",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "4752--4757",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1506"
]
},
"num": null,
"urls": [],
"raw_text": "Xiangju Li, Kaisong Song, Shi Feng, Daling Wang, and Yifei Zhang. 2018. A co-attention neural network model for emotion cause analysis with emotional context awareness. In Proceedings of the 2018 Con- ference on Empirical Methods in Natural Language Processing, pages 4752-4757, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Affect-driven dialog generation",
"authors": [
{
"first": "Ashutosh",
"middle": [],
"last": "Modi",
"suffix": ""
},
{
"first": "Mubbasir",
"middle": [],
"last": "Kapadia",
"suffix": ""
},
{
"first": "Douglas",
"middle": [
"A"
],
"last": "Fidaleo",
"suffix": ""
},
{
"first": "James",
"middle": [
"R"
],
"last": "Kennedy",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Witon",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Colombo",
"suffix": ""
}
],
"year": 2020,
"venue": "US Patent",
"volume": "10",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashutosh Modi, Mubbasir Kapadia, Douglas A. Fida- leo, James R. Kennedy, Wojciech Witon, and Pierre Colombo. 2020. Affect-driven dialog generation. US Patent 10,818,312.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Extracting causes of emotions from text",
"authors": [
{
"first": "Alena",
"middle": [],
"last": "Neviarouskaya",
"suffix": ""
},
{
"first": "Masaki",
"middle": [],
"last": "Aono",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Sixth International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "932--936",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alena Neviarouskaya and Masaki Aono. 2013. Extract- ing causes of emotions from text. In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 932-936.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Self-attention with relative position representations",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Shaw",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "464--468",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 464-468.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Disney at IEST 2018: Predicting emotions using an ensemble",
"authors": [
{
"first": "Wojciech",
"middle": [],
"last": "Witon",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Colombo",
"suffix": ""
},
{
"first": "Ashutosh",
"middle": [],
"last": "Modi",
"suffix": ""
},
{
"first": "Mubbasir",
"middle": [],
"last": "Kapadia",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis",
"volume": "",
"issue": "",
"pages": "248--253",
"other_ids": {
"DOI": [
"10.18653/v1/W18-6236"
]
},
"num": null,
"urls": [],
"raw_text": "Wojciech Witon, Pierre Colombo, Ashutosh Modi, and Mubbasir Kapadia. 2018. Disney at IEST 2018: Pre- dicting emotions using an ensemble. In Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 248-253, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Emotion-cause pair extraction: A new task to emotion analysis in texts",
"authors": [
{
"first": "Rui",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Zixiang",
"middle": [],
"last": "Ding",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1003--1012",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1096"
]
},
"num": null,
"urls": [],
"raw_text": "Rui Xia and Zixiang Ding. 2019. Emotion-cause pair extraction: A new task to emotion analysis in texts. In Proceedings of the 57th Annual Meeting of the As- sociation for Computational Linguistics, pages 1003- 1012, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Rthn: a rnn-transformer hierarchical network for emotion cause extraction",
"authors": [
{
"first": "Rui",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Mengran",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zixiang",
"middle": [],
"last": "Ding",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 28th International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "5285--5291",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rui Xia, Mengran Zhang, and Zixiang Ding. 2019. Rthn: a rnn-transformer hierarchical network for emotion cause extraction. In Proceedings of the 28th International Joint Conference on Artificial Intelli- gence, pages 5285-5291. AAAI Press.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Multiple level hierarchical network-based clause selection for emotion cause extraction",
"authors": [
{
"first": "X",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Rong",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Ouyang",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Xiong",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE Access",
"volume": "7",
"issue": "",
"pages": "9071--9079",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "X. Yu, W. Rong, Z. Zhang, Y. Ouyang, and Z. Xiong. 2019. Multiple level hierarchical network-based clause selection for emotion cause extraction. IEEE Access, 7:9071-9079.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Emotion-cause pair extraction as sequence labeling based on a novel tagging scheme",
"authors": [
{
"first": "Chaofa",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Chuang",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Jianzhu",
"middle": [],
"last": "Bao",
"suffix": ""
},
{
"first": "Ruifeng",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "3568--3573",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chaofa Yuan, Chuang Fan, Jianzhu Bao, and Ruifeng Xu. 2020. Emotion-cause pair extraction as se- quence labeling based on a novel tagging scheme. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3568-3573.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": ",Ding et al. (2020b),,,Cheng et al. (2020),Chen et al. (2020)) . In order to explore the corpus further and to encourage future work from a broader commu-1 https://github.com/Aaditya-Singh/ E2E-ECPE nity, we use an English language ECE corpus. (see section 5).",
"num": null
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"text": "End-to-End network (E2E-PExt E )",
"num": null
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"text": "pairs = no. of emotion-cause pairs predicted by model #correct pairs = number of emotion-cause pairs predicted correctly by the model #annotated pairs = total number of actual emotion-cause pairs in the data P, R, F 1 for the two auxiliary classification tasks (emotion-detection and cause-detection) have the usual definition (see Gui et al. (2016)).",
"num": null
},
"FIGREF4": {
"type_str": "figure",
"uris": null,
"text": "Precision, Recall and F1 Score as a function of weight assigned to negative examples.",
"num": null
},
"FIGREF6": {
"type_str": "figure",
"uris": null,
"text": "End-to-End network variant (E2E-PExt C )",
"num": null
},
"TABREF0": {
"num": null,
"content": "<table><tr><td>ECPE 2-stage (Xia and Ding, 2019)</td><td colspan=\"2\">67.41 71.60</td><td>69.40</td><td colspan=\"2\">60.39 47.34</td><td>53.01</td><td>46.94 41.02</td><td>43.67</td></tr><tr><td>ECPE-2D(BERT) (Ding et al., 2020a)</td><td colspan=\"2\">74.35 69.68</td><td>71.89</td><td colspan=\"2\">64.91 53.53</td><td>58.55</td><td>60.49 43.84</td><td>50.73</td></tr><tr><td>ECPE-MLL(ISML-6) (Ding et al., 2020b)</td><td colspan=\"2\">75.46 69.96</td><td>72.55</td><td colspan=\"2\">63.50 59.19</td><td>61.10</td><td>59.26 45.30</td><td>51.21</td></tr><tr><td>E2E-PExt E (Ours)</td><td colspan=\"2\">71.63 67.49</td><td>69.43</td><td colspan=\"2\">66.36 43.75</td><td>52.26</td><td>51.34 49.29</td><td>50.17</td></tr><tr><td>RHNN (Fan et al., 2019)</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>69.01 52.67</td><td>59.75</td></tr><tr><td>E2E-CExt (Ours)</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>65.21 66.18</td><td>65.63</td></tr></table>",
"html": null,
"type_str": "table",
"text": "Recall F1 Score Precision Recall F1 Score Precision Recall F1 Score"
},
"TABREF1": {
"num": null,
"content": "<table><tr><td>6 Experiments and Results</td></tr><tr><td>Naming Scheme for Model Variants:</td></tr><tr><td>E2E: End to End.</td></tr><tr><td>PExt/CExt/EExt: Pair Extraction/ Cause Ex-</td></tr><tr><td>traction/ Emotion Extraction.</td></tr><tr><td>Subscript E/C: represents how the auxiliary tasks</td></tr><tr><td>are solved interactively (emotion predictions used</td></tr><tr><td>for cause detection or vice versa).</td></tr></table>",
"html": null,
"type_str": "table",
"text": "Experimental results comparing our models with the already existing benchmarks. The top half compares our model for ECPE (E2E-PExtE) against the existing benchmarks. The bottom half compares existing ECE benchmark on this dataset against our model (E2E-CExt). Note that the Pair Extraction task in ECPE with true-emotion provided reduces to the Cause Extraction task of ECE. The results on RHNN and E2E-CExt are only for the primary task, since in the ECE setting, there are no auxiliary tasks. The evaluation metrics are same as the ones used in previous works."
}
}
}
}