{ "paper_id": "D19-1016", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:04:47.780563Z" }, "title": "Knowledge-Enriched Transformer for Emotion Detection in Textual Conversations", "authors": [ { "first": "Peixiang", "middle": [], "last": "Zhong", "suffix": "", "affiliation": { "laboratory": "", "institution": "Joint NTU-UBC Research Centre", "location": {} }, "email": "peixiang001@e.ntu.edu.sg" }, { "first": "Di", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Joint NTU-UBC Research Centre", "location": {} }, "email": "wangdi@ntu.edu.sg" }, { "first": "Chunyan", "middle": [], "last": "Miao", "suffix": "", "affiliation": { "laboratory": "", "institution": "Joint NTU-UBC Research Centre", "location": {} }, "email": "ascymiao@ntu.edu.sg" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Messages in human conversations inherently convey emotions. The task of detecting emotions in textual conversations leads to a wide range of applications such as opinion mining in social networks. However, enabling machines to analyze emotions in conversations is challenging, partly because humans often rely on the context and commonsense knowledge to express emotions. In this paper, we address these challenges by proposing a Knowledge-Enriched Transformer (KET), where contextual utterances are interpreted using hierarchical self-attention and external commonsense knowledge is dynamically leveraged using a context-aware affective graph attention mechanism. Experiments on multiple textual conversation datasets demonstrate that both context and commonsense knowledge are consistently beneficial to the emotion detection performance. 
In addition, the experimental results show that our KET model outperforms the state-of-the-art models on most of the tested datasets in F1 score.", "pdf_parse": { "paper_id": "D19-1016", "_pdf_hash": "", "abstract": [ { "text": "Messages in human conversations inherently convey emotions. The task of detecting emotions in textual conversations leads to a wide range of applications such as opinion mining in social networks. However, enabling machines to analyze emotions in conversations is challenging, partly because humans often rely on the context and commonsense knowledge to express emotions. In this paper, we address these challenges by proposing a Knowledge-Enriched Transformer (KET), where contextual utterances are interpreted using hierarchical self-attention and external commonsense knowledge is dynamically leveraged using a context-aware affective graph attention mechanism. Experiments on multiple textual conversation datasets demonstrate that both context and commonsense knowledge are consistently beneficial to the emotion detection performance. In addition, the experimental results show that our KET model outperforms the state-of-the-art models on most of the tested datasets in F1 score.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Emotions are \"generated states in humans that reflect evaluative judgments of the environment, the self and other social agents\" (Hudlicka, 2011) . Messages in human communications inherently convey emotions. With the prevalence of social media platforms such as Facebook Messenger, as well as conversational agents such as Amazon Alexa, there is an emerging need for machines to understand human emotions in natural conversations. This work addresses the task of detecting emotions (e.g., happy, sad, angry, etc.) in textual conversations, where the emotion of an utterance is detected in the conversational context. 
Being able to effectively detect emotions in conversations leads to a wide range of applications ranging from opinion mining in social media platforms Figure 1 : An example conversation with annotated labels from the DailyDialog dataset (Li et al., 2017) . By referring to the context, \"it\" in the third utterance is linked to \"birthday\" in the first utterance. By leveraging an external knowledge base, the meaning of \"friends\" in the fourth utterance is enriched by associated knowledge entities, namely \"socialize\", \"party\", and \"movie\". Thus, the implicit \"happiness\" emotion in the fourth utterance can be inferred more easily via its enriched meaning. (Chatterjee et al., 2019) to building emotion-aware conversational agents (Zhou et al., 2018a) .", "cite_spans": [ { "start": 129, "end": 145, "text": "(Hudlicka, 2011)", "ref_id": "BIBREF24" }, { "start": 855, "end": 872, "text": "(Li et al., 2017)", "ref_id": "BIBREF31" }, { "start": 1275, "end": 1300, "text": "(Chatterjee et al., 2019)", "ref_id": "BIBREF6" }, { "start": 1349, "end": 1369, "text": "(Zhou et al., 2018a)", "ref_id": "BIBREF64" } ], "ref_spans": [ { "start": 769, "end": 777, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, enabling machines to analyze emotions in human conversations is challenging, partly because humans often rely on the context and commonsense knowledge to express emotions, which are difficult for machines to capture. Figure 1 shows an example conversation demonstrating the importance of context and commonsense knowledge in understanding conversations and detecting implicit emotions.", "cite_spans": [], "ref_spans": [ { "start": 228, "end": 236, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There are several recent studies that model contextual information to detect emotions in conversations. Poria et al. 
(2017) leveraged recurrent neural networks (RNN) to model the contextual utterances in sequence, where each utterance is represented by a feature vector extracted by convolutional neural networks (CNN) at an earlier stage. Similarly, Hazarika et al. (2018a,b) proposed to use extracted CNN features in memory networks to model contextual utterances. However, these methods require separate feature extraction and tuning, which may not be ideal for real-time applications. In addition, to the best of our knowledge, no attempts have been made in the literature to incorporate commonsense knowledge from external knowledge bases to detect emotions in textual conversations. Commonsense knowledge is fundamental to understanding conversations and generating appropriate responses (Zhou et al., 2018b) .", "cite_spans": [ { "start": 104, "end": 123, "text": "Poria et al. (2017)", "ref_id": "BIBREF43" }, { "start": 164, "end": 169, "text": "(RNN)", "ref_id": null }, { "start": 355, "end": 380, "text": "Hazarika et al. (2018a,b)", "ref_id": null }, { "start": 898, "end": 918, "text": "(Zhou et al., 2018b)", "ref_id": "BIBREF65" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To this end, we propose a Knowledge-Enriched Transformer (KET) to effectively incorporate contextual information and external knowledge bases to address the aforementioned challenges. The Transformer (Vaswani et al., 2017) has been shown to be a powerful representation learning model in many NLP tasks such as machine translation (Vaswani et al., 2017) and language understanding (Devlin et al., 2018) . The self-attention (Cheng et al., 2016) and cross-attention modules in the Transformer capture the intra-sentence and inter-sentence correlations, respectively. The shorter path of information flow in these two modules compared to gated RNNs and CNNs allows KET to model contextual information more efficiently. 
In addition, we propose a hierarchical self-attention mechanism allowing KET to model the hierarchical structure of conversations. Our model separates context and response into the encoder and decoder, respectively, which is different from other Transformer-based models, e.g., BERT (Devlin et al., 2018) , which directly concatenates context and response and then trains a language model using only the encoder part.", "cite_spans": [ { "start": 200, "end": 222, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF54" }, { "start": 331, "end": 353, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF54" }, { "start": 381, "end": 402, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF12" }, { "start": 424, "end": 444, "text": "(Cheng et al., 2016)", "ref_id": "BIBREF7" }, { "start": 1000, "end": 1021, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Moreover, to exploit commonsense knowledge, we leverage external knowledge bases to facilitate the understanding of each word in the utterances by referring to related knowledge entities. The referring process is dynamic and balances the relatedness and affectiveness of the retrieved knowledge entities using a context-aware affective graph attention mechanism.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In summary, our contributions are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 For the first time, we apply the Transformer to analyze conversations and detect emotions. 
Our hierarchical self-attention and cross-attention modules allow our model to exploit contextual information more efficiently than existing gated RNNs and CNNs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We derive dynamic, context-aware, and emotion-related commonsense knowledge from external knowledge bases and emotion lexicons to facilitate the emotion detection in conversations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We conduct extensive experiments demonstrating that both contextual information and commonsense knowledge are beneficial to the emotion detection performance. In addition, our proposed KET model outperforms the state-of-the-art models on most of the tested datasets across different domains.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Emotion Detection in Conversations: Early studies on emotion detection in conversations focus on call center dialogs using lexicon-based methods and audio features (Lee and Narayanan, 2005; Devillers and Vidrascu, 2006) . Devillers et al. (2002) annotated and detected emotions in call center dialogs using unigram topic modelling. In recent years, there has been an emerging research trend on emotion detection in conversational videos and multi-turn Tweets using deep learning methods (Hazarika et al., 2018b,a; Zahiri and Choi, 2018; Chatterjee et al., 2019). Poria et al. (2017) proposed a long short-term memory network (LSTM) (Hochreiter and Schmidhuber, 1997) based model to capture contextual information for sentiment analysis in user-generated videos. A subsequent work proposed the DialogueRNN model that uses three gated recurrent units (GRU) to model the speaker, the context from the preceding utterances, and the emotions of the preceding utterances, respectively. 
They achieved the state-of-the-art performance on several conversational video datasets. Knowledge Base in Conversations: Recently, there has been a growing number of studies on incorporating knowledge bases in generative conversation systems, such as open-domain dialogue systems (Han et al., 2015; Asghar et al., 2018; Ghazvininejad et al., 2018; Young et al., 2018; Parthasarathi and Pineau, 2018; Liu et al., 2018; Moghe et al., 2018; Dinan et al., 2019), task-oriented dialogue systems (Madotto et al., 2018; Wu et al., 2019; He et al., 2019) and question answering systems (Kiddon et al., 2016; Hao et al., 2017; Sun et al., 2018; Mihaylov and Frank, 2018) . Zhou et al. (2018b) adopted structured knowledge graphs to enrich the interpretation of input sentences and help generate knowledge-aware responses using graph attentions. The graph attention in the knowledge interpreter (Zhou et al., 2018b) is static and only related to the recognized entity of interest. By contrast, our graph attention mechanism is dynamic and selects context-aware knowledge entities that balance relatedness and affectiveness. Emotion Detection in Text: There is a trend moving from traditional machine learning methods (Pang et al., 2002; Wang and Manning, 2012; Seyeditabari et al., 2018) to deep learning methods (Abdul-Mageed and Ungar, 2017; Zhang et al., 2018b) for emotion detection in text. Khanpour and Caragea (2018) investigated emotion detection in health-related posts in online health communities using both deep learning features and lexicon-based features.", "cite_spans": [ { "start": 164, "end": 189, "text": "(Lee and Narayanan, 2005;", "ref_id": "BIBREF30" }, { "start": 190, "end": 219, "text": "Devillers and Vidrascu, 2006)", "ref_id": "BIBREF11" }, { "start": 222, "end": 245, "text": "Devillers et al. 
(2002)", "ref_id": "BIBREF10" }, { "start": 481, "end": 507, "text": "(Hazarika et al., 2018b,a;", "ref_id": null }, { "start": 508, "end": 530, "text": "Zahiri and Choi, 2018;", "ref_id": "BIBREF59" }, { "start": 531, "end": 555, "text": "Chatterjee et al., 2019;", "ref_id": "BIBREF6" }, { "start": 558, "end": 577, "text": "Poria et al. (2017)", "ref_id": "BIBREF43" }, { "start": 627, "end": 661, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF22" }, { "start": 1229, "end": 1247, "text": "(Han et al., 2015;", "ref_id": "BIBREF17" }, { "start": 1248, "end": 1268, "text": "Asghar et al., 2018;", "ref_id": "BIBREF1" }, { "start": 1269, "end": 1296, "text": "Ghazvininejad et al., 2018;", "ref_id": "BIBREF16" }, { "start": 1297, "end": 1316, "text": "Young et al., 2018;", "ref_id": "BIBREF58" }, { "start": 1317, "end": 1348, "text": "Parthasarathi and Pineau, 2018;", "ref_id": "BIBREF41" }, { "start": 1349, "end": 1366, "text": "Liu et al., 2018;", "ref_id": "BIBREF32" }, { "start": 1367, "end": 1386, "text": "Moghe et al., 2018;", "ref_id": "BIBREF37" }, { "start": 1387, "end": 1406, "text": "Dinan et al., 2019;", "ref_id": "BIBREF13" }, { "start": 1440, "end": 1462, "text": "(Madotto et al., 2018;", "ref_id": "BIBREF33" }, { "start": 1463, "end": 1479, "text": "Wu et al., 2019;", "ref_id": "BIBREF57" }, { "start": 1480, "end": 1496, "text": "He et al., 2019)", "ref_id": "BIBREF21" }, { "start": 1528, "end": 1549, "text": "(Kiddon et al., 2016;", "ref_id": "BIBREF26" }, { "start": 1550, "end": 1567, "text": "Hao et al., 2017;", "ref_id": "BIBREF18" }, { "start": 1568, "end": 1585, "text": "Sun et al., 2018;", "ref_id": "BIBREF52" }, { "start": 1586, "end": 1611, "text": "Mihaylov and Frank, 2018)", "ref_id": "BIBREF36" }, { "start": 1614, "end": 1633, "text": "Zhou et al. 
(2018b)", "ref_id": "BIBREF65" }, { "start": 1834, "end": 1854, "text": "(Zhou et al., 2018b)", "ref_id": "BIBREF65" }, { "start": 2165, "end": 2184, "text": "(Pang et al., 2002;", "ref_id": "BIBREF40" }, { "start": 2185, "end": 2208, "text": "Wang and Manning, 2012;", "ref_id": "BIBREF56" }, { "start": 2209, "end": 2235, "text": "Seyeditabari et al., 2018)", "ref_id": "BIBREF48" }, { "start": 2261, "end": 2291, "text": "(Abdul-Mageed and Ungar, 2017;", "ref_id": "BIBREF0" }, { "start": 2292, "end": 2312, "text": "Zhang et al., 2018b)", "ref_id": "BIBREF61" }, { "start": 2344, "end": 2371, "text": "Khanpour and Caragea (2018)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Incorporating Knowledge in Sentiment Analysis: Traditional lexicon-based methods detect emotions or sentiments from a piece of text based on the emotions or sentiments of words or phrases that compose it (Hu et al., 2009; Taboada et al., 2011; Bandhakavi et al., 2017) . Few studies investigated the usage of knowledge bases in deep learning methods. Kumar et al. (2018) proposed to use knowledge from WordNet (Fellbaum, 2012) to enrich the text representations produced by LSTM and obtained improved performance. Transformer: The Transformer has been applied to many NLP tasks due to its rich representation and fast computation, e.g., document machine translation , response matching in dialogue system (Zhou et al., 2018c) , language modelling (Dai et al., 2019) and understanding (Radford et al., 2018) . 
A very recent work (Koncel-Kedziorski and Hajishirzi, 2019) extends the Transformer to graph inputs and proposes a model for graph-to-text generation.", "cite_spans": [ { "start": 204, "end": 221, "text": "(Hu et al., 2009;", "ref_id": "BIBREF23" }, { "start": 222, "end": 243, "text": "Taboada et al., 2011;", "ref_id": "BIBREF53" }, { "start": 244, "end": 268, "text": "Bandhakavi et al., 2017)", "ref_id": "BIBREF3" }, { "start": 351, "end": 370, "text": "Kumar et al. (2018)", "ref_id": "BIBREF29" }, { "start": 410, "end": 426, "text": "(Fellbaum, 2012)", "ref_id": "BIBREF15" }, { "start": 705, "end": 725, "text": "(Zhou et al., 2018c)", "ref_id": "BIBREF66" }, { "start": 747, "end": 765, "text": "(Dai et al., 2019)", "ref_id": "BIBREF9" }, { "start": 784, "end": 806, "text": "(Radford et al., 2018)", "ref_id": "BIBREF46" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In this section, we present the task definition and our proposed KET model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our Proposed KET Model", "sec_num": "3" }, { "text": "Let {X i j , Y i j }, i = 1, ..., N, j = 1, .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Definition", "sec_num": "3.1" }, { "text": "..N i be a collection of {utterance, label} pairs in a given dialogue dataset, where N denotes the number of conversations and N i denotes the number of utterances in the ith conversation. 
The objective of the task is to maximize the following function:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Definition", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\Phi = \\prod_{i=1}^{N} \\prod_{j=1}^{N_i} p(Y_j^i | X_j^i, X_{j-1}^i, ..., X_1^i; \\theta),", "eq_num": "(1)" } ], "section": "Task Definition", "sec_num": "3.1" }, { "text": "where X i j\u22121 , ..., X i 1 denote contextual utterances and \u03b8 denotes the model parameters we want to optimize.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Definition", "sec_num": "3.1" }, { "text": "We limit the number of contextual utterances to M . Discarding early contextual utterances may cause information loss, but this loss is negligible because early utterances generally contribute the least amount of information. This phenomenon can be further observed in our model analysis regarding context length (see Section 5.2). Similar to (Poria et al., 2017) , we clip and pad each utterance X i j to a fixed number of m tokens. 
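The context windowing and the clipping/padding of utterances described above can be sketched as follows. This is an illustrative sketch, not the authors' code: `PAD_ID` and the token-id inputs are assumptions made for the example.

```python
PAD_ID = 0  # hypothetical padding token id (an assumption for this sketch)

def clip_and_pad(utterance_ids, m):
    """Clip a token-id list to m tokens, or right-pad it with PAD_ID."""
    return utterance_ids[:m] + [PAD_ID] * max(0, m - len(utterance_ids))

def build_context(conversation, j, M, m):
    """Return the (at most M) utterances preceding index j, each of length m.

    When fewer than M contextual utterances exist, the context is left-padded
    with all-PAD utterances so the output always has shape M x m.
    """
    start = max(0, j - M)
    context = [clip_and_pad(u, m) for u in conversation[start:j]]
    while len(context) < M:
        context.insert(0, [PAD_ID] * m)  # pad missing early context
    return context
```

Under this sketch, discarding utterances earlier than the window of size M corresponds exactly to the truncation at `start`.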
The overall architecture of our KET model is illustrated in Figure 2 .", "cite_spans": [ { "start": 327, "end": 347, "text": "(Poria et al., 2017)", "ref_id": "BIBREF43" } ], "ref_spans": [ { "start": 478, "end": 486, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Task Definition", "sec_num": "3.1" }, { "text": "We use a commonsense knowledge base ConceptNet (Speer et al., 2017) and an emotion lexicon NRC VAD (Mohammad, 2018a) as knowledge sources in our model.", "cite_spans": [ { "start": 48, "end": 68, "text": "(Speer et al., 2017)", "ref_id": "BIBREF50" }, { "start": 100, "end": 117, "text": "(Mohammad, 2018a)", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "Knowledge Retrieval", "sec_num": "3.2" }, { "text": "ConceptNet is a large-scale multilingual semantic graph that describes general human knowledge in natural language. The nodes in ConceptNet are concepts and the edges are relations. Each (concept1, relation, concept2) triplet is an assertion. Each assertion is associated with a confidence score. An example assertion is (friends, CausesDesire, socialize) with a confidence score of 3.46. Assertion confidence scores are usually in the [1, 10] interval. Currently, for English, ConceptNet comprises 5.9M assertions, 3.1M concepts and 38 relations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge Retrieval", "sec_num": "3.2" }, { "text": "NRC VAD is a list of English words and their VAD scores, i.e., valence (negative-positive), arousal (calm-excited), and dominance (submissive-dominant) scores in the [0, 1] interval.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge Retrieval", "sec_num": "3.2" }, { "text": "The VAD measure of emotion is culture-independent and widely adopted in Psychology (Mehrabian, 1996) . 
Currently, NRC VAD comprises around 20K words.", "cite_spans": [ { "start": 83, "end": 100, "text": "(Mehrabian, 1996)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Knowledge Retrieval", "sec_num": "3.2" }, { "text": "In general, for each non-stopword token t in X i j , we retrieve a connected knowledge graph g(t) comprising its immediate neighbors from ConceptNet. For each g(t), we remove concepts that are stopwords or not in our vocabulary. We further remove concepts with confidence scores less than 1 to reduce annotation noise. For each concept, we retrieve its VAD values from NRC VAD. The final knowledge representation for each token t is a list of tuples:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge Retrieval", "sec_num": "3.2" }, { "text": "(c 1 , s 1 , VAD(c 1 )), (c 2 , s 2 , VAD(c 2 )), ..., (c |g(t)| , s |g(t)| , VAD(c |g(t)| )),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge Retrieval", "sec_num": "3.2" }, { "text": "where c k \u2208 g(t) denotes the kth connected concept, s k denotes the associated confidence score, and VAD(c k ) denotes the VAD values of c k . The treatment for tokens that are not associated with any concept and concepts that are not included in NRC VAD is discussed in Section 3.4. We leave the treatment of relations as future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge Retrieval", "sec_num": "3.2" }, { "text": "We use a word embedding layer to convert each token t in X i into a vector representation t \u2208 R d , where d denotes the size of the word embedding. 
To encode positional information, the position encoding (Vaswani et al., 2017 ) is added as follows:", "cite_spans": [ { "start": 200, "end": 221, "text": "(Vaswani et al., 2017", "ref_id": "BIBREF54" } ], "ref_spans": [], "eq_spans": [], "section": "Embedding Layer", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "t = Embed(t) + Pos(t).", "eq_num": "(2)" } ], "section": "Embedding Layer", "sec_num": "3.3" }, { "text": "Similarly, we use a concept embedding layer to convert each concept c into a vector representation c \u2208 R d but without position encoding.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Embedding Layer", "sec_num": "3.3" }, { "text": "To enrich word embedding with concept representations, we propose a dynamic context-aware affective graph attention mechanism to compute the concept representation for each token. Specifically, the concept representation c(t) \u2208 R d for token t is computed as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dynamic Context-Aware Affective Graph Attention", "sec_num": "3.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "c(t) = |g(t)| k=1 \u03b1 k * c k ,", "eq_num": "(3)" } ], "section": "Dynamic Context-Aware Affective Graph Attention", "sec_num": "3.4" }, { "text": "where c k \u2208 R d denotes the concept embedding of c k and \u03b1 k denotes its attention weight. If |g(t)| = 0, we set c(t) to the average of all concept embeddings. 
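A minimal sketch of this attention-weighted sum (Equation 3), including the fallback for tokens with no retrieved concepts, is given below. NumPy arrays stand in for the learned embeddings; the raw scores `weights` correspond to the w_k defined in the following paragraphs, normalized here with a standard softmax.

```python
import numpy as np

def concept_representation(concept_embs, weights, all_concept_embs):
    """Attention-weighted sum of a token's concept embeddings (Equation 3).

    concept_embs: (|g(t)|, d) embeddings of the retrieved concepts c_k.
    weights: (|g(t)|,) raw scores w_k, normalized here via softmax.
    When no concepts were retrieved (|g(t)| = 0), fall back to the
    average of all concept embeddings, as described in the paper.
    """
    if len(concept_embs) == 0:
        return all_concept_embs.mean(axis=0)
    alpha = np.exp(weights - np.max(weights))  # numerically stable softmax
    alpha /= alpha.sum()
    return alpha @ concept_embs  # c(t) = sum_k alpha_k * c_k
```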
The attention \u03b1 k in Equation 3 is computed as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dynamic Context-Aware Affective Graph Attention", "sec_num": "3.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b1 k = softmax(w k ),", "eq_num": "(4)" } ], "section": "Dynamic Context-Aware Affective Graph Attention", "sec_num": "3.4" }, { "text": "where w k denotes the weight of c k . The derivation of w k is crucial because it regulates the contribution of c k towards enriching t. A standard graph attention mechanism (Veličković et al., 2018) computes w k by feeding t and c k into a single-layer feedforward neural network. However, not all related concepts are equally important in detecting emotions given the conversational context. In our model, we make the assumption that important concepts are those that relate to the conversational context and have strong emotion intensity. To this end, we propose a context-aware affective graph attention mechanism by incorporating two factors when computing w k , namely relatedness and affectiveness.", "cite_spans": [ { "start": 174, "end": 197, "text": "(Veličković et al., 2018)", "ref_id": "BIBREF55" } ], "ref_spans": [], "eq_spans": [], "section": "Dynamic Context-Aware Affective Graph Attention", "sec_num": "3.4" }, { "text": "Relatedness: Relatedness measures the strength of the relation between c k and the conversational context. 
The relatedness factor in w k is computed as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dynamic Context-Aware Affective Graph Attention", "sec_num": "3.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "rel k = min-max(s k ) * abs(cos(CR(X i ), c k )),", "eq_num": "(5)" } ], "section": "Dynamic Context-Aware Affective Graph Attention", "sec_num": "3.4" }, { "text": "where s k is the confidence score introduced in Section 3.2, min-max denotes min-max scaling for each token t, abs denotes the absolute function, cos denotes the cosine similarity function, and CR(X i ) \u2208 R d denotes the context representation of the ith conversation X i . Here we compute CR(X i ) as the average of all sentence representations in X i as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dynamic Context-Aware Affective Graph Attention", "sec_num": "3.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "CR(X i ) = avg(SR(X i j\u2212M ), ..., SR(X i j )),", "eq_num": "(6)" } ], "section": "Dynamic Context-Aware Affective Graph Attention", "sec_num": "3.4" }, { "text": "where SR(X i j ) \u2208 R d denotes the sentence representation of X i j . We compute SR(X i j ) via hierarchical pooling (Shen et al., 2018) where n-gram (n \u2264 3) representations in X i j are first computed by max-pooling and then all n-gram representations are averaged. The hierarchical pooling mechanism preserves word order information to a certain degree and has demonstrated superior performance to average pooling and max-pooling on sentiment analysis tasks (Shen et al., 2018) . Affectiveness: Affectiveness measures the emotion intensity of c k . 
The affectiveness factor in w k is computed as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dynamic Context-Aware Affective Graph Attention", "sec_num": "3.4" }, { "text": "aff k = min-max(||[V(c k ) \u2212 1/2, A(c k )/2]|| 2 ), (7)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dynamic Context-Aware Affective Graph Attention", "sec_num": "3.4" }, { "text": "where ||.|| k denotes the l k norm, V(c k ) \u2208 [0, 1] and A(c k ) \u2208 [0, 1] denote the valence and arousal values of VAD(c k ), respectively. Intuitively, aff k considers the deviations of valence from neutral and the level of arousal from calm. There is no established method in the literature to compute the emotion intensity based on VAD values, but empirically we found that our method correlates better with an emotion intensity lexicon comprising 6K English words (Mohammad, 2018b) than other methods such as taking dominance into consideration or taking the l 1 norm. 
For concept c k not in NRC VAD, we set aff k to the middle value of 0.5.", "cite_spans": [ { "start": 464, "end": 481, "text": "(Mohammad, 2018b)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Dynamic Context-Aware Affective Graph Attention", "sec_num": "3.4" }, { "text": "Combining both rel k and aff k , we define the weight w k as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dynamic Context-Aware Affective Graph Attention", "sec_num": "3.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "w k = \u03bb k * rel k + (1 \u2212 \u03bb k ) * aff k ,", "eq_num": "(8)" } ], "section": "Dynamic Context-Aware Affective Graph Attention", "sec_num": "3.4" }, { "text": "where \u03bb k is a model parameter balancing the impacts of relatedness and affectiveness on computing concept representations. Parameter \u03bb k can be fixed or learned during training. The analysis of \u03bb k is discussed in Section 5.2. Finally, the concept-enriched word representation t can be obtained via a linear transformation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dynamic Context-Aware Affective Graph Attention", "sec_num": "3.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "t = W[t; c(t)],", "eq_num": "(9)" } ], "section": "Dynamic Context-Aware Affective Graph Attention", "sec_num": "3.4" }, { "text": "where [; ] denotes concatenation and W \u2208 R d\u00d72d denotes a model parameter. 
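Equations 5, 7, 8, and 9 can be sketched end-to-end as below. This is an illustrative NumPy re-implementation, not the authors' code; for brevity the per-token min-max scaling of rel_k and aff_k across a token's concepts is omitted, and all function names are assumptions.

```python
import numpy as np

def relatedness(conf_score, concept_emb, context_rep):
    """rel_k (Equation 5): confidence score times the absolute cosine
    similarity between concept embedding c_k and context representation
    CR(X^i). Per-token min-max scaling of s_k is omitted in this sketch."""
    cos = concept_emb @ context_rep / (
        np.linalg.norm(concept_emb) * np.linalg.norm(context_rep))
    return conf_score * abs(cos)

def affectiveness(vad):
    """aff_k (Equation 7, before min-max scaling): l2 norm of
    (V - 1/2, A / 2). Concepts missing from NRC VAD get 0.5."""
    if vad is None:
        return 0.5
    V, A = vad[0], vad[1]
    return float(np.linalg.norm([V - 0.5, A / 2.0]))

def concept_weight(rel_k, aff_k, lam):
    """w_k (Equation 8): convex combination controlled by lambda_k."""
    return lam * rel_k + (1.0 - lam) * aff_k

def enrich(token_emb, concept_rep, W):
    """Concept-enriched word representation (Equation 9): W [t; c(t)],
    with W of shape (d, 2d)."""
    return W @ np.concatenate([token_emb, concept_rep])
```

As in the paper, `lam` can be held fixed or treated as a trainable parameter; a neutral word (V = 0.5, A = 0) has affectiveness 0 before scaling.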
All m tokens in each X i j then form a concept-enriched utterance embedding X i j \u2208 R m\u00d7d .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dynamic Context-Aware Affective Graph Attention", "sec_num": "3.4" }, { "text": "We propose a hierarchical self-attention mechanism to exploit the structural representation of conversations and learn a vector representation for the contextual utterances X i j\u22121 , ..., X i j\u2212M . Specifically, the hierarchical self-attention follows two steps: 1) each utterance representation is computed using an utterance-level self-attention layer, and 2) a context representation is computed from M learned utterance representations using a context-level self-attention layer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Self-Attention", "sec_num": "3.5" }, { "text": "At step 1, for each utterance X i n , n=j \u2212 1, ..., j \u2212 M , its representation X i n \u2208 R m\u00d7d is learned as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Self-Attention", "sec_num": "3.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "X i n = FF(L (MH(L(X i n ), L(X i n ), L(X i n )))),", "eq_num": "(10)" } ], "section": "Hierarchical Self-Attention", "sec_num": "3.5" }, { "text": "where L(X i n ) \u2208 R m\u00d7h\u00d7ds is linearly transformed from X i n to form h heads (d s = d/h), L linearly transforms from h heads back to 1 head, and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Self-Attention", "sec_num": "3.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "MH(Q, K, V ) = softmax(QK^T / \u221ad_s)V,", "eq_num": "(11)" } ], "section": "Hierarchical Self-Attention", "sec_num": "3.5" }, { "text": "EQUATION", 
"cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "FF(x) = max(0, xW 1 + b 1 )W 2 + b 2 ,", "eq_num": "(12)" } ], "section": "Hierarchical Self-Attention", "sec_num": "3.5" }, { "text": "where Q, K, and V denote sets of queries, keys and values, respectively, W 1 \u2208 R d\u00d7p , b 1 \u2208 R p , W 2 \u2208 R p\u00d7d and b 2 \u2208 R d denote model parameters, and p denotes the hidden size of the point-wise feedforward layer (FF) (Vaswani et al., 2017) . The multi-head self-attention layer (MH) enables our model to jointly attend to information from different representation subspaces (Vaswani et al., 2017) . The scaling factor 1 \u221a ds is added to ensure the dot product of two vectors do not get overly large. Similar to (Vaswani et al., 2017) , both MH and FF layers are followed by residual connection and layer normalization, which are omitted in Equation 10 for brevity. At step 2, to effectively combine all utterance representations in the context, the contextlevel self-attention layer is proposed to hierarchically learn the context-level representation C i \u2208 R M \u00d7m\u00d7d as follows:", "cite_spans": [ { "start": 221, "end": 243, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF54" }, { "start": 378, "end": 400, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF54" }, { "start": 515, "end": 537, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF54" } ], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Self-Attention", "sec_num": "3.5" }, { "text": "C i = FF(L (MH(L(X i ), L(X i ), L(X i )))), (13) whereX i denotes [X i j\u2212M ; ...;X i j\u22121 ]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Self-Attention", "sec_num": "3.5" }, { "text": ", which is the concatenation of all learned utterance representations in the context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Self-Attention", "sec_num": 
"3.5" }, { "text": "Finally, a context-aware concept-enriched response representation R i \u2208 R m\u00d7d for conversation X i is learned by cross-attention , which selectively attends to the concept-enriched context representation as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-Response Cross-Attention", "sec_num": "3.6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "R i = FF(L (MH(L(X i j ), L(C i ), L(C i )))),", "eq_num": "(14)" } ], "section": "Context-Response Cross-Attention", "sec_num": "3.6" }, { "text": "where the response utterance representationX i j \u2208 R m\u00d7d is obtained via the MH layer:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-Response Cross-Attention", "sec_num": "3.6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "X i j = L (MH(L(X i j ), L(X i j ), L(X i j ))),", "eq_num": "(15)" } ], "section": "Context-Response Cross-Attention", "sec_num": "3.6" }, { "text": "The resulted representation R i \u2208 R m\u00d7d is then fed into a max-pooling layer to learn discriminative features among the positions in the response and derive the final representation O \u2208 R d :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-Response Cross-Attention", "sec_num": "3.6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "O = max pool(R i ).", "eq_num": "(16)" } ], "section": "Context-Response Cross-Attention", "sec_num": "3.6" }, { "text": "The output probability p is then computed as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-Response Cross-Attention", "sec_num": "3.6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], 
"eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p = softmax(OW 3 + b 3 ),", "eq_num": "(17)" } ], "section": "Context-Response Cross-Attention", "sec_num": "3.6" }, { "text": "where W 3 \u2208 R d\u00d7q and b 3 \u2208 R q denote model parameters, and q denotes the number of classes. The entire KET model is optimized in an end-toend manner as defined in Equation 1. Our model is available at here 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-Response Cross-Attention", "sec_num": "3.6" }, { "text": "1 https://github.com/zhongpeixiang/KET", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-Response Cross-Attention", "sec_num": "3.6" }, { "text": "In this section we present the datasets, evaluation metrics, baselines, our model variants, and other experimental settings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Settings", "sec_num": "4" }, { "text": "We evaluate our model on the following five emotion detection datasets of various sizes and domains. The statistics are reported in (Busso et al., 2008) : Emotional dialogues. The emotion labels include neutral, happiness, sadness, anger, frustrated, and excited. In terms of the evaluation metric, for EC and DailyDialog, we follow (Chatterjee et al., 2019) to use the micro-averaged F1 excluding the majority class (neutral), due to their extremely unbalanced labels (the percentage of the majority class in the test set is over 80%). 
For the remaining, relatively balanced datasets, we follow prior work in using the weighted macro-F1.", "cite_spans": [ { "start": 132, "end": 152, "text": "(Busso et al., 2008)", "ref_id": "BIBREF5" }, { "start": 333, "end": 358, "text": "(Chatterjee et al., 2019)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets and Evaluations", "sec_num": "4.1" }, { "text": "For a comprehensive performance evaluation, we compare our model with the following baselines: cLSTM: A contextual LSTM model. An utterance-level bidirectional LSTM is used to encode each utterance. A context-level unidirectional LSTM is used to encode the context. CNN (Kim, 2014) : A single-layer CNN with strong empirical performance. This model is trained on the utterance level without context. CNN+cLSTM (Poria et al., 2017) : A CNN is used to extract utterance features. A cLSTM is then applied to learn context representations. BERT BASE (Devlin et al., 2018) : Base version of the state-of-the-art model for sentiment classification. We treat each utterance with its context as a single document. We limit the document length to the last 100 tokens to allow a larger batch size. We do not experiment with the large version of BERT due to the memory constraint of our GPU. DialogueRNN : The state-of-the-art model for emotion detection in textual conversations. It models both context and speaker information. The CNN features used in DialogueRNN are extracted from the carefully tuned CNN model. For datasets without speaker information, i.e., EC and DailyDialog, we use two speakers only. 
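The micro-averaged F1 excluding the majority class, used for EC and DailyDialog in Section 4.1 above, can be sketched as follows. The label ids are hypothetical (0 standing in for the majority class neutral); this is an illustrative pure-Python computation, not the authors' evaluation script.

```python
def micro_f1_excluding(y_true, y_pred, excluded):
    """Micro-averaged F1 over all classes except `excluded`
    (the majority class, e.g., neutral)."""
    tp = fp = fn = 0
    for t, p in zip(y_true, y_pred):
        if p == t and p != excluded:
            tp += 1
        else:
            if p != excluded and p != t:
                fp += 1          # wrongly predicted a minority class
            if t != excluded and p != t:
                fn += 1          # missed a minority-class gold label
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical label ids: 0 = neutral (majority), 1-3 = emotion classes.
y_true = [0, 0, 1, 2, 3, 1, 0, 2]
y_pred = [0, 1, 1, 2, 3, 0, 0, 2]
print(micro_f1_excluding(y_true, y_pred, excluded=0))  # 0.8
```

Excluding the majority class keeps the metric from being dominated by the over-80% neutral label.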
For MELD and EmoryNLP, which have 260 and 255 speakers, respectively, we additionally experimented with clipping the number of speakers to the most frequent ones (6 main speakers + a universal speaker representing all other speakers) and reported the best results. KET SingleSelfAttn: We replace the hierarchical self-attention by a single self-attention layer to learn context representations. Contextual utterances are concatenated together prior to the single self-attention layer.", "cite_spans": [ { "start": 266, "end": 277, "text": "(Kim, 2014)", "ref_id": "BIBREF27" }, { "start": 323, "end": 343, "text": "(Poria et al., 2017)", "ref_id": "BIBREF43" }, { "start": 389, "end": 410, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF12" }, { "start": 462, "end": 473, "text": "(Kim, 2014)", "ref_id": "BIBREF27" }, { "start": 602, "end": 622, "text": "(Poria et al., 2017)", "ref_id": "BIBREF43" }, { "start": 740, "end": 761, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines and Model Variants", "sec_num": "4.2" }, { "text": "KET StdAttn: We replace the dynamic context-aware affective graph attention by the standard graph attention (Veli\u010dkovi\u0107 et al., 2018) .", "cite_spans": [ { "start": 107, "end": 130, "text": "(Veli\u010dkovi\u0107 et al., 2018)", "ref_id": "BIBREF55" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines and Model Variants", "sec_num": "4.2" }, { "text": "We preprocessed all datasets by lower-casing and tokenization using Spacy 2 . We keep all tokens in the vocabulary 3 . We use the released code for BERT BASE and DialogueRNN. For each dataset, all models are fine-tuned based on their performance on the validation set. For our model in all datasets, we use Adam optimization (Kingma and Ba, 2014) with a batch size of 64 and a learning rate of 0.0001 throughout the training process. We use GloVe embeddings (Pennington et al., 2014) for initialization in the word and concept embedding layers 4 . 
For each dataset, we set the class weights in the cross-entropy loss to the ratio of the class distribution in the validation set to that in the training set. This alleviates the problem of unbalanced datasets. The detailed hyper-parameter settings for KET are presented in Table 3 .", "cite_spans": [ { "start": 455, "end": 480, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF42" } ], "ref_spans": [ { "start": 848, "end": 855, "text": "Table 3", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Other Experimental Settings", "sec_num": "4.3" }, { "text": "In this section we present model evaluation results, model analysis, and error analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Result Analysis", "sec_num": "5" }, { "text": "We compare the performance of KET against that of the baseline models on the five datasets introduced above. The results are reported in Table 2 . Note that our results for CNN, CNN+cLSTM and DialogueRNN on EC, MELD and IEMOCAP are slightly different from the previously reported results. cLSTM performs reasonably well on short conversations (i.e., EC and DailyDialog), but the worst on long conversations (i.e., MELD, EmoryNLP and IEMOCAP). One major reason is that learning long dependencies using gated RNNs may not be effective enough, because the gradients must propagate back through an inevitably huge number of utterances and tokens in sequence, which easily leads to the vanishing gradient problem (Bengio et al., 1994) . In contrast, when the utterance-level LSTM in cLSTM is replaced by features extracted by CNN, i.e., the CNN+cLSTM, the model performs significantly better than cLSTM on long conversations, which further validates that modelling long conversations using only RNN models may not be sufficient. 
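The class-weighting scheme of Section 4.3 above can be sketched as follows. This is an illustrative sketch on toy splits (the label names are hypothetical), not the released training code.

```python
from collections import Counter

def class_weights(train_labels, val_labels, classes):
    """Weight each class by the ratio of its frequency in the
    validation set to its frequency in the training set."""
    tr, val = Counter(train_labels), Counter(val_labels)
    n_tr, n_val = len(train_labels), len(val_labels)
    return {c: (val[c] / n_val) / (tr[c] / n_tr) for c in classes}

# Toy splits: "joy" dominates training but not validation.
train = ["joy"] * 8 + ["anger"] * 2
val = ["joy"] * 5 + ["anger"] * 5
print(class_weights(train, val, ["joy", "anger"]))  # joy ≈ 0.625, anger ≈ 2.5
```

Classes that are over-represented in training relative to validation receive weights below 1, down-weighting their loss terms, while under-represented classes are up-weighted.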
BERT BASE achieves very competitive performance on all datasets except EC due to its strong representational power via bi-directional context modelling using the Transformer. Note that BERT BASE has considerably more parameters than other baselines and our model (110M for BERT BASE versus 4M for our model), which can be a disadvantage when deployed to devices with limited computing power and memory. The state-of-the-art DialogueRNN model performs the best overall among all baselines. In particular, DialogueRNN performs better than our model on IEMOCAP, which may be attributed to its detailed speaker information for modelling the emotion dynamics in each speaker as the conversation flows.", "cite_spans": [ { "start": 711, "end": 731, "text": "(Bengio et al., 1994", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 137, "end": 144, "text": "Table 2", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Comparison with Baselines", "sec_num": "5.1" }, { "text": "It is encouraging to see that our KET model outperforms the baselines on most of the datasets tested. This finding indicates that our model is robust across datasets with varying training sizes, context lengths and domains. Our KET variants KET SingleSelfAttn and KET StdAttn perform comparably with the best baselines on all datasets except IEMOCAP. However, both variants perform noticeably worse than KET on all datasets except EC, validating the importance of our proposed hierarchical self-attention and dynamic context-aware affective graph attention mechanism. One observation worth mentioning is that these two variants perform on a par with the KET model on EC. 
Possible explanations are that 1) hierarchical self-attention may not be critical for modelling short conversations in EC, and 2) the informal linguistic styles of Tweets in EC, e.g., misspelled words and slang, hinder the context representation learning in our graph attention mechanism.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison with Baselines", "sec_num": "5.1" }, { "text": "We analyze the impact of different settings on the validation performance of KET. All results in this section are averaged over 5 random seeds. Analysis of context length: We vary the context length M and plot model performance in Figure 3 (top portion). Note that EC has at most 2 contextual utterances. It is clear that incorporating context into KET improves performance on all datasets. However, adding more context contributes diminishing performance gains or even has a negative impact on some datasets. This phenomenon has been observed in a prior study. One possible explanation is that incorporating long contextual information may introduce additional noise, e.g., polysemes expressing different meanings in different utterances of the same context. A more thorough investigation of this diminishing-return phenomenon is a worthwhile direction for the future. Analysis of the size of ConceptNet: We vary the size of ConceptNet by randomly keeping only a fraction of the concepts in ConceptNet when training and evaluating our model. The results are illustrated in Figure 3 (bottom portion). Adding more concepts consistently improves model performance before reaching a plateau, validating the importance of commonsense knowledge in detecting emotions. We may expect the performance of our KET model to improve with the growing size of ConceptNet in the future. Analysis of the relatedness-affectiveness tradeoff: We experiment with different values of \u03bb k \u2208 [0, 1] (see Equation 8) for all k and report the results in Table 4 . 
It is clear that \u03bb k has a noticeable impact on the model performance. Discarding relatedness or affectiveness completely causes a significant performance drop on all datasets, with the exception of IEMOCAP. One possible reason is that conversations in IEMOCAP are emotional dialogues; therefore, the affectiveness factor in our proposed graph attention mechanism can provide more discriminative power. Ablation Study: We conduct an ablation study to investigate the contributions of context and knowledge, as reported in Table 5 . It is clear that both context and knowledge are essential to the strong performance of KET on all datasets. Note that removing context has a greater impact on long conversations than on short conversations, which is expected because more contextual information is lost in long conversations.", "cite_spans": [], "ref_spans": [ { "start": 231, "end": 239, "text": "Figure 3", "ref_id": "FIGREF1" }, { "start": 1096, "end": 1104, "text": "Figure 3", "ref_id": "FIGREF1" }, { "start": 1551, "end": 1558, "text": "Table 4", "ref_id": "TABREF10" }, { "start": 2082, "end": 2089, "text": "Table 5", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "Model Analysis", "sec_num": "5.2" }, { "text": "Despite the strong performance of our model, it still fails to detect certain emotions on certain datasets. We rank the F1 score of each emotion per dataset and investigate the emotions with the worst scores. We found that disgust and fear are generally difficult to detect and differentiate. For example, the F1 score of the fear emotion in MELD is as low as 0.0667. One possible cause is that these two emotions are intrinsically similar. The VAD values of both emotions have low valence, high arousal and low dominance (Mehrabian, 1996) . Another cause is the small amount of data available for these two emotions. 
How to differentiate intrinsically similar emotions and how to effectively detect emotions using limited data are two challenging directions in this field.", "cite_spans": [ { "start": 518, "end": 535, "text": "(Mehrabian, 1996)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "5.3" }, { "text": "We present a knowledge-enriched transformer to detect emotions in textual conversations. Our model learns structured conversation representations via hierarchical self-attention and dynamically refers to external, context-aware, and emotion-related knowledge entities from knowledge bases. Experimental analysis demonstrates that both contextual information and commonsense knowledge are beneficial to model performance. The tradeoff between relatedness and affectiveness plays an important role as well. In addition, our model outperforms the state-of-the-art models on most of the tested datasets of varying sizes and domains. Given that there are similar emotion lexicons to NRC VAD in other languages and ConceptNet is a multilingual knowledge base, our model can be easily adapted to other languages. In addition, given that NRC VAD is the only emotion-specific component, our model can be adapted as a generic model for conversation analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "https://spacy.io/3 We keep tokens with minimum frequency of 2 for Daily-Dialog due to its large vocabulary size4 We use GloVe embeddings from Magnitude Medium: https://github.com/plasticityai/magnitude", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The authors would like to thank the anonymous reviewers for their valuable comments. 
This research is supported, in part, by the National Research Foundation, Prime Ministers Office, Singapore under its AI Singapore Programme (Award Number: AISG-GC-2019-003) and under its NRF Investigatorship Programme (NRFI Award No. NRF-NRFI05-2019-0002). This research is also supported, in part, by the Alibaba-NTU Singapore Joint Research Institute, Nanyang Technological University, Singapore.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Emonet: Fine-grained emotion detection with gated recurrent neural networks", "authors": [ { "first": "Muhammad", "middle": [], "last": "Abdul", "suffix": "" }, { "first": "-Mageed", "middle": [], "last": "", "suffix": "" }, { "first": "Lyle", "middle": [], "last": "Ungar", "suffix": "" } ], "year": 2017, "venue": "In ACL", "volume": "1", "issue": "", "pages": "718--728", "other_ids": {}, "num": null, "urls": [], "raw_text": "Muhammad Abdul-Mageed and Lyle Ungar. 2017. Emonet: Fine-grained emotion detection with gated recurrent neural networks. In ACL, volume 1, pages 718-728.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Affective neural response generation", "authors": [ { "first": "Nabiha", "middle": [], "last": "Asghar", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Poupart", "suffix": "" }, { "first": "Jesse", "middle": [], "last": "Hoey", "suffix": "" }, { "first": "Xin", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Lili", "middle": [], "last": "Mou", "suffix": "" } ], "year": 2018, "venue": "ECIR", "volume": "", "issue": "", "pages": "154--166", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nabiha Asghar, Pascal Poupart, Jesse Hoey, Xin Jiang, and Lili Mou. 2018. Affective neural response gen- eration. In ECIR, pages 154-166. 
Springer.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1409.0473" ] }, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Lexicon generation for emotion detection from text", "authors": [ { "first": "Anil", "middle": [], "last": "Bandhakavi", "suffix": "" }, { "first": "Nirmalie", "middle": [], "last": "Wiratunga", "suffix": "" }, { "first": "Stewart", "middle": [], "last": "Massie", "suffix": "" }, { "first": "Deepak", "middle": [], "last": "Padmanabhan", "suffix": "" } ], "year": 2017, "venue": "IEEE Intelligent Systems", "volume": "32", "issue": "1", "pages": "102--108", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anil Bandhakavi, Nirmalie Wiratunga, Stewart Massie, and Deepak Padmanabhan. 2017. Lexicon genera- tion for emotion detection from text. 
IEEE Intelli- gent Systems, 32(1):102-108.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Learning long-term dependencies with gradient descent is difficult", "authors": [ { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Patrice", "middle": [], "last": "Simard", "suffix": "" }, { "first": "Paolo", "middle": [], "last": "Frasconi", "suffix": "" } ], "year": 1994, "venue": "IEEE Transactions on Neural Networks", "volume": "5", "issue": "2", "pages": "157--166", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshua Bengio, Patrice Simard, Paolo Frasconi, et al. 1994. Learning long-term dependencies with gradi- ent descent is difficult. IEEE Transactions on Neu- ral Networks, 5(2):157-166.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "IEMOCAP: Interactive emotional dyadic motion capture database. Language Resources and Evaluation", "authors": [ { "first": "Carlos", "middle": [], "last": "Busso", "suffix": "" }, { "first": "Murtaza", "middle": [], "last": "Bulut", "suffix": "" }, { "first": "Chi-Chun", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Abe", "middle": [], "last": "Kazemzadeh", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Mower", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Jeannette", "middle": [ "N" ], "last": "Chang", "suffix": "" }, { "first": "Sungbok", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Shrikanth S", "middle": [], "last": "Narayanan", "suffix": "" } ], "year": 2008, "venue": "", "volume": "42", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carlos Busso, Murtaza Bulut, Chi-Chun Lee, Abe Kazemzadeh, Emily Mower, Samuel Kim, Jean- nette N Chang, Sungbok Lee, and Shrikanth S Narayanan. 2008. IEMOCAP: Interactive emotional dyadic motion capture database. 
Language Re- sources and Evaluation, 42(4):335.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Understanding emotions in text using deep learning and big data", "authors": [ { "first": "Ankush", "middle": [], "last": "Chatterjee", "suffix": "" }, { "first": "Umang", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Manoj", "middle": [], "last": "Kumar Chinnakotla", "suffix": "" }, { "first": "Radhakrishnan", "middle": [], "last": "Srikanth", "suffix": "" } ], "year": 2019, "venue": "Michel Galley, and Puneet Agrawal", "volume": "93", "issue": "", "pages": "309--317", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ankush Chatterjee, Umang Gupta, Manoj Kumar Chinnakotla, Radhakrishnan Srikanth, Michel Gal- ley, and Puneet Agrawal. 2019. Understanding emo- tions in text using deep learning and big data. Com- puters in Human Behavior, 93:309 -317.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Long short-term memory-networks for machine reading", "authors": [ { "first": "Jianpeng", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Li", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2016, "venue": "EMNLP", "volume": "", "issue": "", "pages": "551--561", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jianpeng Cheng, Li Dong, and Mirella Lapata. 2016. Long short-term memory-networks for machine reading. 
In EMNLP, pages 551-561.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation", "authors": [ { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Bart", "middle": [], "last": "Van Merrienboer", "suffix": "" }, { "first": "Caglar", "middle": [], "last": "Gulcehre", "suffix": "" }, { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Fethi", "middle": [], "last": "Bougares", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "EMNLP", "volume": "", "issue": "", "pages": "1724--1734", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kyunghyun Cho, Bart van Merrienboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. 
In EMNLP, pages 1724-1734.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Transformer-xl: Attentive language models beyond a fixed-length context", "authors": [ { "first": "Zihang", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "W", "middle": [], "last": "William", "suffix": "" }, { "first": "Jaime", "middle": [], "last": "Cohen", "suffix": "" }, { "first": "", "middle": [], "last": "Carbonell", "suffix": "" }, { "first": "V", "middle": [], "last": "Quoc", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Le", "suffix": "" }, { "first": "", "middle": [], "last": "Salakhutdinov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1901.02860" ] }, "num": null, "urls": [], "raw_text": "Zihang Dai, Zhilin Yang, Yiming Yang, William W Cohen, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov. 2019. Transformer-xl: Attentive lan- guage models beyond a fixed-length context. arXiv preprint arXiv:1901.02860.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Annotation and detection of emotion in a task-oriented human-human dialog corpus", "authors": [ { "first": "Laurence", "middle": [], "last": "Devillers", "suffix": "" }, { "first": "Ioana", "middle": [], "last": "Vasilescu", "suffix": "" }, { "first": "Lori", "middle": [], "last": "Lamel", "suffix": "" } ], "year": 2002, "venue": "Proceedings of ISLE Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laurence Devillers, Ioana Vasilescu, and Lori Lamel. 2002. Annotation and detection of emotion in a task-oriented human-human dialog corpus. 
In Pro- ceedings of ISLE Workshop.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Real-life emotions detection with lexical and paralinguistic cues on human-human call center dialogs", "authors": [ { "first": "Laurence", "middle": [], "last": "Devillers", "suffix": "" }, { "first": "Laurence", "middle": [], "last": "Vidrascu", "suffix": "" } ], "year": 2006, "venue": "Ninth International Conference on Spoken Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laurence Devillers and Laurence Vidrascu. 2006. Real-life emotions detection with lexical and par- alinguistic cues on human-human call center di- alogs. In Ninth International Conference on Spoken Language Processing.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. 
arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Wizard of wikipedia: Knowledge-powered conversational agents", "authors": [ { "first": "Emily", "middle": [], "last": "Dinan", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Roller", "suffix": "" }, { "first": "Kurt", "middle": [], "last": "Shuster", "suffix": "" }, { "first": "Angela", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2019, "venue": "ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019. Wizard of wikipedia: Knowledge-powered conversational agents. In ICLR.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "An argument for basic emotions", "authors": [ { "first": "Paul", "middle": [], "last": "Ekman", "suffix": "" } ], "year": 1992, "venue": "Cognition & emotion", "volume": "6", "issue": "3-4", "pages": "169--200", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul Ekman. 1992. An argument for basic emotions. Cognition & emotion, 6(3-4):169-200.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Wordnet. The Encyclopedia of Applied Linguistics", "authors": [ { "first": "Christiane", "middle": [], "last": "Fellbaum", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christiane Fellbaum. 2012. Wordnet. 
The Encyclope- dia of Applied Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A knowledge-grounded neural conversation model", "authors": [ { "first": "Marjan", "middle": [], "last": "Ghazvininejad", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brockett", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2018, "venue": "Wen-tau Yih, and Michel Galley", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-tau Yih, and Michel Galley. 2018. A knowledge-grounded neural conversation model. In AAAI.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Exploiting knowledge base to generate responses for natural language dialog listening agents", "authors": [ { "first": "Sangdo", "middle": [], "last": "Han", "suffix": "" }, { "first": "Jeesoo", "middle": [], "last": "Bang", "suffix": "" }, { "first": "Seonghan", "middle": [], "last": "Ryu", "suffix": "" }, { "first": "Gary Geunbae", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 16th SIGDIAL", "volume": "", "issue": "", "pages": "129--133", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sangdo Han, Jeesoo Bang, Seonghan Ryu, and Gary Geunbae Lee. 2015. Exploiting knowledge base to generate responses for natural language di- alog listening agents. 
In Proceedings of the 16th SIGDIAL, pages 129-133.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "An end-to-end model for question answering over knowledge base with cross-attention combining global knowledge", "authors": [ { "first": "Yanchao", "middle": [], "last": "Hao", "suffix": "" }, { "first": "Yuanzhe", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Kang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Shizhu", "middle": [], "last": "He", "suffix": "" }, { "first": "Zhanyi", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2017, "venue": "ACL", "volume": "", "issue": "", "pages": "221--231", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yanchao Hao, Yuanzhe Zhang, Kang Liu, Shizhu He, Zhanyi Liu, Hua Wu, and Jun Zhao. 2017. An end-to-end model for question answering over knowledge base with cross-attention combining global knowledge. In ACL, pages 221-231.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Icon: Interactive conversational memory network for multimodal emotion detection", "authors": [ { "first": "Devamanyu", "middle": [], "last": "Hazarika", "suffix": "" }, { "first": "Soujanya", "middle": [], "last": "Poria", "suffix": "" }, { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "Erik", "middle": [], "last": "Cambria", "suffix": "" }, { "first": "Roger", "middle": [], "last": "Zimmermann", "suffix": "" } ], "year": 2018, "venue": "EMNLP", "volume": "", "issue": "", "pages": "2594--2604", "other_ids": {}, "num": null, "urls": [], "raw_text": "Devamanyu Hazarika, Soujanya Poria, Rada Mihalcea, Erik Cambria, and Roger Zimmermann. 2018a. Icon: Interactive conversational memory network for multimodal emotion detection.
In EMNLP, pages 2594-2604.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Conversational memory network for emotion recognition in dyadic dialogue videos", "authors": [ { "first": "Devamanyu", "middle": [], "last": "Hazarika", "suffix": "" }, { "first": "Soujanya", "middle": [], "last": "Poria", "suffix": "" }, { "first": "Amir", "middle": [], "last": "Zadeh", "suffix": "" }, { "first": "Erik", "middle": [], "last": "Cambria", "suffix": "" }, { "first": "Louis-Philippe", "middle": [], "last": "Morency", "suffix": "" }, { "first": "Roger", "middle": [], "last": "Zimmermann", "suffix": "" } ], "year": 2018, "venue": "NAACL", "volume": "1", "issue": "", "pages": "2122--2132", "other_ids": {}, "num": null, "urls": [], "raw_text": "Devamanyu Hazarika, Soujanya Poria, Amir Zadeh, Erik Cambria, Louis-Philippe Morency, and Roger Zimmermann. 2018b. Conversational memory network for emotion recognition in dyadic dialogue videos. In NAACL, volume 1, pages 2122-2132.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Hierarchical attention and knowledge matching networks with information enhancement for end-to-end task-oriented dialog systems", "authors": [ { "first": "Junqing", "middle": [], "last": "He", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Mingming", "middle": [], "last": "Fu", "suffix": "" }, { "first": "Tianqi", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Xuemin", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2019, "venue": "IEEE Access", "volume": "7", "issue": "", "pages": "18871--18883", "other_ids": {}, "num": null, "urls": [], "raw_text": "Junqing He, Bing Wang, Mingming Fu, Tianqi Yang, and Xuemin Zhao. 2019. Hierarchical attention and knowledge matching networks with information enhancement for end-to-end task-oriented dialog systems.
IEEE Access, 7:18871-18883.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural Computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Lyric-based song emotion detection with affective lexicon and fuzzy clustering method", "authors": [ { "first": "Yajie", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Xiaoou", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Deshun", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2009, "venue": "ISMIR", "volume": "", "issue": "", "pages": "123--128", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yajie Hu, Xiaoou Chen, and Deshun Yang. 2009. Lyric-based song emotion detection with affective lexicon and fuzzy clustering method. In ISMIR, pages 123-128.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Guidelines for designing computational models of emotions", "authors": [ { "first": "Eva", "middle": [], "last": "Hudlicka", "suffix": "" } ], "year": 2011, "venue": "International Journal of Synthetic Emotions", "volume": "2", "issue": "1", "pages": "26--79", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eva Hudlicka. 2011. Guidelines for designing computational models of emotions.
International Journal of Synthetic Emotions, 2(1):26-79.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Fine-grained emotion detection in health-related online posts", "authors": [ { "first": "Hamed", "middle": [], "last": "Khanpour", "suffix": "" }, { "first": "Cornelia", "middle": [], "last": "Caragea", "suffix": "" } ], "year": 2018, "venue": "EMNLP", "volume": "", "issue": "", "pages": "1160--1166", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hamed Khanpour and Cornelia Caragea. 2018. Fine-grained emotion detection in health-related online posts. In EMNLP, pages 1160-1166.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Globally coherent text generation with neural checklist models", "authors": [ { "first": "Chlo\u00e9", "middle": [], "last": "Kiddon", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2016, "venue": "EMNLP", "volume": "", "issue": "", "pages": "329--339", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chlo\u00e9 Kiddon, Luke Zettlemoyer, and Yejin Choi. 2016. Globally coherent text generation with neural checklist models. In EMNLP, pages 329-339.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Convolutional neural networks for sentence classification", "authors": [ { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1408.5882" ] }, "num": null, "urls": [], "raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification.
arXiv preprint arXiv:1408.5882.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "Diederik", "middle": [ "P" ], "last": "Kingma", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.6980" ] }, "num": null, "urls": [], "raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Knowledge-enriched two-layered attention network for sentiment analysis", "authors": [ { "first": "Abhishek", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Daisuke", "middle": [], "last": "Kawahara", "suffix": "" }, { "first": "Sadao", "middle": [], "last": "Kurohashi", "suffix": "" } ], "year": 2018, "venue": "NAACL", "volume": "2", "issue": "", "pages": "253--258", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abhishek Kumar, Daisuke Kawahara, and Sadao Kurohashi. 2018. Knowledge-enriched two-layered attention network for sentiment analysis. In NAACL, volume 2, pages 253-258.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Toward detecting emotions in spoken dialogs", "authors": [ { "first": "Chul Min", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Shrikanth", "middle": [ "S" ], "last": "Narayanan", "suffix": "" } ], "year": 2005, "venue": "IEEE Transactions on Speech and Audio Processing", "volume": "13", "issue": "2", "pages": "293--303", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chul Min Lee and Shrikanth S Narayanan. 2005. Toward detecting emotions in spoken dialogs.
IEEE Transactions on Speech and Audio Processing, 13(2):293-303.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Dailydialog: A manually labelled multi-turn dialogue dataset", "authors": [ { "first": "Yanran", "middle": [], "last": "Li", "suffix": "" }, { "first": "Hui", "middle": [], "last": "Su", "suffix": "" }, { "first": "Xiaoyu", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Wenjie", "middle": [], "last": "Li", "suffix": "" }, { "first": "Ziqiang", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Shuzi", "middle": [], "last": "Niu", "suffix": "" } ], "year": 2017, "venue": "IJCNLP", "volume": "1", "issue": "", "pages": "986--995", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017. Dailydialog: A manually labelled multi-turn dialogue dataset. In IJCNLP, volume 1, pages 986-995.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Knowledge diffusion for neural dialogue generation", "authors": [ { "first": "Shuman", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Hongshen", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Zhaochun", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Dawei", "middle": [], "last": "Yin", "suffix": "" } ], "year": 2018, "venue": "ACL", "volume": "", "issue": "", "pages": "1489--1498", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shuman Liu, Hongshen Chen, Zhaochun Ren, Yang Feng, Qun Liu, and Dawei Yin. 2018. Knowledge diffusion for neural dialogue generation.
In ACL, pages 1489-1498.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Mem2seq: Effectively incorporating knowledge bases into end-to-end task-oriented dialog systems", "authors": [ { "first": "Andrea", "middle": [], "last": "Madotto", "suffix": "" }, { "first": "Chien-Sheng", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Pascale", "middle": [], "last": "Fung", "suffix": "" } ], "year": 2018, "venue": "ACL", "volume": "1", "issue": "", "pages": "1468--1478", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrea Madotto, Chien-Sheng Wu, and Pascale Fung. 2018. Mem2seq: Effectively incorporating knowledge bases into end-to-end task-oriented dialog systems. In ACL, volume 1, pages 1468-1478.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Dialoguernn: An attentive rnn for emotion detection in conversations", "authors": [ { "first": "Navonil", "middle": [], "last": "Majumder", "suffix": "" }, { "first": "Soujanya", "middle": [], "last": "Poria", "suffix": "" }, { "first": "Devamanyu", "middle": [], "last": "Hazarika", "suffix": "" }, { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Gelbukh", "suffix": "" }, { "first": "Erik", "middle": [], "last": "Cambria", "suffix": "" } ], "year": 2019, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Navonil Majumder, Soujanya Poria, Devamanyu Hazarika, Rada Mihalcea, Alexander Gelbukh, and Erik Cambria. 2019. Dialoguernn: An attentive rnn for emotion detection in conversations.
In AAAI.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Pleasure-arousal-dominance: A general framework for describing and measuring individual differences in temperament", "authors": [ { "first": "Albert", "middle": [], "last": "Mehrabian", "suffix": "" } ], "year": 1996, "venue": "Current Psychology", "volume": "14", "issue": "4", "pages": "261--292", "other_ids": {}, "num": null, "urls": [], "raw_text": "Albert Mehrabian. 1996. Pleasure-arousal-dominance: A general framework for describing and measuring individual differences in temperament. Current Psychology, 14(4):261-292.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Knowledgeable reader: Enhancing cloze-style reading comprehension with external commonsense knowledge", "authors": [ { "first": "Todor", "middle": [], "last": "Mihaylov", "suffix": "" }, { "first": "Anette", "middle": [], "last": "Frank", "suffix": "" } ], "year": 2018, "venue": "ACL", "volume": "", "issue": "", "pages": "821--832", "other_ids": {}, "num": null, "urls": [], "raw_text": "Todor Mihaylov and Anette Frank. 2018. Knowledgeable reader: Enhancing cloze-style reading comprehension with external commonsense knowledge. In ACL, pages 821-832.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Towards exploiting background knowledge for building conversation systems", "authors": [ { "first": "Nikita", "middle": [], "last": "Moghe", "suffix": "" }, { "first": "Siddhartha", "middle": [], "last": "Arora", "suffix": "" }, { "first": "Suman", "middle": [], "last": "Banerjee", "suffix": "" }, { "first": "Mitesh M", "middle": [], "last": "Khapra", "suffix": "" } ], "year": 2018, "venue": "EMNLP", "volume": "", "issue": "", "pages": "2322--2332", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nikita Moghe, Siddhartha Arora, Suman Banerjee, and Mitesh M Khapra. 2018. Towards exploiting background knowledge for building conversation systems.
In EMNLP, pages 2322-2332.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Obtaining reliable human ratings of valence, arousal, and dominance for 20,000 english words", "authors": [ { "first": "Saif", "middle": [], "last": "Mohammad", "suffix": "" } ], "year": 2018, "venue": "ACL", "volume": "", "issue": "", "pages": "174--184", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif Mohammad. 2018a. Obtaining reliable human ratings of valence, arousal, and dominance for 20,000 english words. In ACL, pages 174-184.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Word affect intensities", "authors": [ { "first": "Saif", "middle": [ "M" ], "last": "Mohammad", "suffix": "" } ], "year": 2018, "venue": "LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif M. Mohammad. 2018b. Word affect intensities. In LREC.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Thumbs up?: sentiment classification using machine learning techniques", "authors": [ { "first": "Bo", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Shivakumar", "middle": [], "last": "Vaithyanathan", "suffix": "" } ], "year": 2002, "venue": "EMNLP", "volume": "", "issue": "", "pages": "79--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up?: sentiment classification using machine learning techniques. In EMNLP, pages 79-86.
Association for Computational Linguistics.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Extending neural generative conversational model using external knowledge sources", "authors": [ { "first": "Prasanna", "middle": [], "last": "Parthasarathi", "suffix": "" }, { "first": "Joelle", "middle": [], "last": "Pineau", "suffix": "" } ], "year": 2018, "venue": "EMNLP", "volume": "", "issue": "", "pages": "690--695", "other_ids": {}, "num": null, "urls": [], "raw_text": "Prasanna Parthasarathi and Joelle Pineau. 2018. Extending neural generative conversational model using external knowledge sources. In EMNLP, pages 690-695.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "EMNLP", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation.
In EMNLP, pages 1532-1543.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Context-dependent sentiment analysis in user-generated videos", "authors": [ { "first": "Soujanya", "middle": [], "last": "Poria", "suffix": "" }, { "first": "Erik", "middle": [], "last": "Cambria", "suffix": "" }, { "first": "Devamanyu", "middle": [], "last": "Hazarika", "suffix": "" }, { "first": "Navonil", "middle": [], "last": "Majumder", "suffix": "" }, { "first": "Amir", "middle": [], "last": "Zadeh", "suffix": "" }, { "first": "Louis-Philippe", "middle": [], "last": "Morency", "suffix": "" } ], "year": 2017, "venue": "ACL", "volume": "1", "issue": "", "pages": "873--883", "other_ids": {}, "num": null, "urls": [], "raw_text": "Soujanya Poria, Erik Cambria, Devamanyu Hazarika, Navonil Majumder, Amir Zadeh, and Louis-Philippe Morency. 2017. Context-dependent sentiment analysis in user-generated videos. In ACL, volume 1, pages 873-883.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Meld: A multimodal multi-party dataset for emotion recognition in conversations", "authors": [ { "first": "Soujanya", "middle": [], "last": "Poria", "suffix": "" }, { "first": "Devamanyu", "middle": [], "last": "Hazarika", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.02508" ] }, "num": null, "urls": [], "raw_text": "Soujanya Poria, Devamanyu Hazarika, Navonil Majumder, Gautam Naik, Erik Cambria, and Rada Mihalcea. 2018. Meld: A multimodal multi-party dataset for emotion recognition in conversations.
arXiv preprint arXiv:1810.02508.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Emotion recognition in conversation: Research challenges, datasets, and recent advances", "authors": [ { "first": "Soujanya", "middle": [], "last": "Poria", "suffix": "" }, { "first": "Navonil", "middle": [], "last": "Majumder", "suffix": "" }, { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1905.02947" ] }, "num": null, "urls": [], "raw_text": "Soujanya Poria, Navonil Majumder, Rada Mihalcea, and Eduard Hovy. 2019. Emotion recognition in conversation: Research challenges, datasets, and recent advances. arXiv preprint arXiv:1905.02947.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Improving language understanding by generative pre-training", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Karthik", "middle": [], "last": "Narasimhan", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Text Generation from Knowledge Graphs with Graph Transformers", "authors": [], "year": 2019, "venue": "NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rik Koncel-Kedziorski, Dhanush Bekal, Yi Luan, Mirella Lapata, and Hannaneh Hajishirzi. 2019. Text Generation from Knowledge Graphs with Graph Transformers.
In NAACL.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Emotion detection in text: a review", "authors": [ { "first": "Armin", "middle": [], "last": "Seyeditabari", "suffix": "" }, { "first": "Narges", "middle": [], "last": "Tabari", "suffix": "" }, { "first": "Wlodek", "middle": [], "last": "Zadrozny", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1806.00674" ] }, "num": null, "urls": [], "raw_text": "Armin Seyeditabari, Narges Tabari, and Wlodek Zadrozny. 2018. Emotion detection in text: a review. arXiv preprint arXiv:1806.00674.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Baseline needs more love: On simple word-embedding-based models and associated pooling mechanisms", "authors": [ { "first": "Dinghan", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Guoyin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Wenlin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Renqiang Min", "suffix": "" }, { "first": "Qinliang", "middle": [], "last": "Su", "suffix": "" }, { "first": "Yizhe", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Chunyuan", "middle": [], "last": "Li", "suffix": "" }, { "first": "Ricardo", "middle": [], "last": "Henao", "suffix": "" }, { "first": "Lawrence", "middle": [], "last": "Carin", "suffix": "" } ], "year": 2018, "venue": "ACL", "volume": "", "issue": "", "pages": "440--450", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dinghan Shen, Guoyin Wang, Wenlin Wang, Martin Renqiang Min, Qinliang Su, Yizhe Zhang, Chunyuan Li, Ricardo Henao, and Lawrence Carin. 2018. Baseline needs more love: On simple word-embedding-based models and associated pooling mechanisms.
In ACL, pages 440-450.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Conceptnet 5.5: An open multilingual graph of general knowledge", "authors": [ { "first": "Robyn", "middle": [], "last": "Speer", "suffix": "" }, { "first": "Joshua", "middle": [], "last": "Chin", "suffix": "" }, { "first": "Catherine", "middle": [], "last": "Havasi", "suffix": "" } ], "year": 2017, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In AAAI.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "How time matters: Learning time-decay attention for contextual spoken language understanding in dialogues", "authors": [ { "first": "Shang-Yu", "middle": [], "last": "Su", "suffix": "" }, { "first": "Pei-Chieh", "middle": [], "last": "Yuan", "suffix": "" }, { "first": "Yun-Nung", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2018, "venue": "NAACL", "volume": "1", "issue": "", "pages": "2133--2142", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shang-Yu Su, Pei-Chieh Yuan, and Yun-Nung Chen. 2018. How time matters: Learning time-decay attention for contextual spoken language understanding in dialogues.
In NAACL, volume 1, pages 2133-2142.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Open domain question answering using early fusion of knowledge bases and text", "authors": [ { "first": "Haitian", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Bhuwan", "middle": [], "last": "Dhingra", "suffix": "" }, { "first": "Manzil", "middle": [], "last": "Zaheer", "suffix": "" }, { "first": "Kathryn", "middle": [], "last": "Mazaitis", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "William", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 2018, "venue": "EMNLP", "volume": "", "issue": "", "pages": "4231--4242", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haitian Sun, Bhuwan Dhingra, Manzil Zaheer, Kathryn Mazaitis, Ruslan Salakhutdinov, and William Cohen. 2018. Open domain question answering using early fusion of knowledge bases and text. In EMNLP, pages 4231-4242.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Lexicon-based methods for sentiment analysis", "authors": [ { "first": "Maite", "middle": [], "last": "Taboada", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Brooke", "suffix": "" }, { "first": "Milan", "middle": [], "last": "Tofiloski", "suffix": "" }, { "first": "Kimberly", "middle": [], "last": "Voll", "suffix": "" }, { "first": "Manfred", "middle": [], "last": "Stede", "suffix": "" } ], "year": 2011, "venue": "Computational Linguistics", "volume": "37", "issue": "2", "pages": "267--307", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maite Taboada, Julian Brooke, Milan Tofiloski, Kimberly Voll, and Manfred Stede. 2011. Lexicon-based methods for sentiment analysis.
Computational Linguistics, 37(2):267-307.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "NIPS", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS, pages 5998-6008.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "Graph attention networks", "authors": [ { "first": "Petar", "middle": [], "last": "Veli\u010dkovi\u0107", "suffix": "" }, { "first": "Guillem", "middle": [], "last": "Cucurull", "suffix": "" }, { "first": "Arantxa", "middle": [], "last": "Casanova", "suffix": "" }, { "first": "Adriana", "middle": [], "last": "Romero", "suffix": "" }, { "first": "Pietro", "middle": [], "last": "Li\u00f2", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2018, "venue": "ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Petar Veli\u010dkovi\u0107, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li\u00f2, and Yoshua Bengio. 2018. Graph attention networks.
In ICLR.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "Baselines and bigrams: Simple, good sentiment and topic classification", "authors": [ { "first": "Sida", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2012, "venue": "ACL", "volume": "", "issue": "", "pages": "90--94", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sida Wang and Christopher D Manning. 2012. Baselines and bigrams: Simple, good sentiment and topic classification. In ACL, pages 90-94. Association for Computational Linguistics.", "links": null }, "BIBREF57": { "ref_id": "b57", "title": "Global-to-local memory pointer networks for task-oriented dialogue", "authors": [ { "first": "Chien-Sheng", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Xiong", "suffix": "" } ], "year": 2019, "venue": "ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chien-Sheng Wu, Richard Socher, and Caiming Xiong. 2019. Global-to-local memory pointer networks for task-oriented dialogue.
In ICLR.", "links": null }, "BIBREF58": { "ref_id": "b58", "title": "Augmenting end-to-end dialogue systems with commonsense knowledge", "authors": [ { "first": "Tom", "middle": [], "last": "Young", "suffix": "" }, { "first": "Erik", "middle": [], "last": "Cambria", "suffix": "" }, { "first": "Iti", "middle": [], "last": "Chaturvedi", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Subham", "middle": [], "last": "Biswas", "suffix": "" }, { "first": "Minlie", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2018, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom Young, Erik Cambria, Iti Chaturvedi, Hao Zhou, Subham Biswas, and Minlie Huang. 2018. Augmenting end-to-end dialogue systems with commonsense knowledge. In AAAI.", "links": null }, "BIBREF59": { "ref_id": "b59", "title": "Emotion detection on tv show transcripts with sequence-based convolutional neural networks", "authors": [ { "first": "Sayyed", "middle": [ "M" ], "last": "Zahiri", "suffix": "" }, { "first": "Jinho", "middle": [ "D" ], "last": "Choi", "suffix": "" } ], "year": 2018, "venue": "Workshops at AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sayyed M Zahiri and Jinho D Choi. 2018. Emotion detection on tv show transcripts with sequence-based convolutional neural networks.
In Workshops at AAAI.", "links": null }, "BIBREF60": { "ref_id": "b60", "title": "Improving the transformer translation model with document-level context", "authors": [ { "first": "Jiacheng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Huanbo", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Feifei", "middle": [], "last": "Zhai", "suffix": "" }, { "first": "Jingfang", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Min", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2018, "venue": "EMNLP", "volume": "", "issue": "", "pages": "533--542", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiacheng Zhang, Huanbo Luan, Maosong Sun, Feifei Zhai, Jingfang Xu, Min Zhang, and Yang Liu. 2018a. Improving the transformer translation model with document-level context. In EMNLP, pages 533-542.", "links": null }, "BIBREF61": { "ref_id": "b61", "title": "Text emotion distribution learning via multi-task convolutional neural network", "authors": [ { "first": "Yuxiang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jiamei", "middle": [], "last": "Fu", "suffix": "" }, { "first": "Dongyu", "middle": [], "last": "She", "suffix": "" }, { "first": "Ying", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Senzhang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jufeng", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2018, "venue": "IJCAI", "volume": "", "issue": "", "pages": "4595--4601", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuxiang Zhang, Jiamei Fu, Dongyu She, Ying Zhang, Senzhang Wang, and Jufeng Yang. 2018b. Text emotion distribution learning via multi-task convolutional neural network.
In IJCAI, pages 4595-4601.", "links": null }, "BIBREF62": { "ref_id": "b62", "title": "ntuer at SemEval-2019 task 3: Emotion classification with word and sentence representations in RCNN", "authors": [ { "first": "Peixiang", "middle": [], "last": "Zhong", "suffix": "" }, { "first": "Chunyan", "middle": [], "last": "Miao", "suffix": "" } ], "year": 2019, "venue": "SemEval", "volume": "", "issue": "", "pages": "282--286", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peixiang Zhong and Chunyan Miao. 2019. ntuer at SemEval-2019 task 3: Emotion classification with word and sentence representations in RCNN. In SemEval, pages 282-286.", "links": null }, "BIBREF63": { "ref_id": "b63", "title": "An affect-rich neural conversational model with biased attention and weighted cross-entropy loss", "authors": [ { "first": "Peixiang", "middle": [], "last": "Zhong", "suffix": "" }, { "first": "Di", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Chunyan", "middle": [], "last": "Miao", "suffix": "" } ], "year": 2019, "venue": "AAAI", "volume": "", "issue": "", "pages": "7492--7500", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peixiang Zhong, Di Wang, and Chunyan Miao. 2019. An affect-rich neural conversational model with biased attention and weighted cross-entropy loss.
In AAAI, pages 7492-7500.", "links": null }, "BIBREF64": { "ref_id": "b64", "title": "Emotional chatting machine: Emotional conversation generation with internal and external memory", "authors": [ { "first": "Hao", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Minlie", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Tianyang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xiaoyan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2018, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hao Zhou, Minlie Huang, Tianyang Zhang, Xiaoyan Zhu, and Bing Liu. 2018a. Emotional chatting machine: Emotional conversation generation with internal and external memory. In AAAI.", "links": null }, "BIBREF65": { "ref_id": "b65", "title": "Commonsense knowledge aware conversation generation with graph attention", "authors": [ { "first": "Hao", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Young", "suffix": "" }, { "first": "Minlie", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Haizhou", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Jingfang", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Xiaoyan", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2018, "venue": "IJCAI", "volume": "", "issue": "", "pages": "4623--4629", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hao Zhou, Tom Young, Minlie Huang, Haizhou Zhao, Jingfang Xu, and Xiaoyan Zhu. 2018b. Commonsense knowledge aware conversation generation with graph attention.
In IJCAI, pages 4623-4629.", "links": null }, "BIBREF66": { "ref_id": "b66", "title": "Multi-turn response selection for chatbots with deep attention matching network", "authors": [ { "first": "Xiangyang", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Lu", "middle": [], "last": "Li", "suffix": "" }, { "first": "Daxiang", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Ying", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Wayne", "middle": [ "Xin" ], "last": "Zhao", "suffix": "" }, { "first": "Dianhai", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2018, "venue": "ACL", "volume": "1", "issue": "", "pages": "1118--1127", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiangyang Zhou, Lu Li, Daxiang Dong, Yi Liu, Ying Chen, Wayne Xin Zhao, Dianhai Yu, and Hua Wu. 2018c. Multi-turn response selection for chatbots with deep attention matching network. In ACL, volume 1, pages 1118-1127.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Overall architecture of our proposed KET model. The positional encoding, residual connection, and layer normalization are omitted in the illustration for brevity.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF1": { "text": "Validation performance by KET. Top: different context length (M). Bottom: different sizes of random fractions of ConceptNet.", "uris": null, "num": null, "type_str": "figure" }, "TABREF3": { "num": null, "type_str": "table", "html": null, "content": "", "text": "Dataset descriptions." }, "TABREF4": { "num": null, "type_str": "table", "html": null, "content": "
", "text": "" }, "TABREF7": { "num": null, "type_str": "table", "html": null, "content": "
Dataset     M m  d   p   h
EC          2 30 200 100 4
DailyDialog 6 30 300 400 4
MELD        6 30 200 100 4
EmoryNLP    6 30 100 200 4
IEMOCAP     6 30 300 400 4
", "text": "Performance comparisons on the five test sets. Best values are highlighted in bold." }, "TABREF8": { "num": null, "type_str": "table", "html": null, "content": "", "text": "Hyper-parameter settings for KET. M : context length. m: number of tokens per utterance. d: word embedding size. p: hidden size in FF layer. h: number of heads." }, "TABREF9": { "num": null, "type_str": "table", "html": null, "content": "
Dataset     0      0.3    0.7    1
EC          0.7345 0.7397 0.7426 0.7363
DailyDialog 0.5365 0.5432 0.5451 0.5383
MELD        0.5321 0.5395 0.5366 0.5306
EmoryNLP    0.3528 0.3624 0.3571 0.3488
IEMOCAP     0.5344 0.5367 0.5314 0.5251
", "text": "" }, "TABREF10": { "num": null, "type_str": "table", "html": null, "content": "
Dataset     KET    -context -knowledge
EC          0.7451 0.7343   0.7359
DailyDialog 0.5544 0.5282   0.5402
MELD        0.5401 0.5177   0.5248
EmoryNLP    0.3712 0.3564   0.3553
IEMOCAP     0.5389 0.4976   0.5217
", "text": "Analysis of the relatedness-affectiveness tradeoff on the validation sets. Each column corresponds to a fixed \u03bb_k for all concepts (see Equation 8)." }, "TABREF11": { "num": null, "type_str": "table", "html": null, "content": "
", "text": "Ablation study for KET on the validation sets." } } } }