{ "paper_id": "D19-1011", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:01:16.190142Z" }, "title": "Multi-hop Selector Network for Multi-turn Response Selection in Retrieval-based Chatbots", "authors": [ { "first": "Chunyuan", "middle": [], "last": "Yuan", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Chinese Academy of Sciences", "location": {} }, "email": "yuanchunyuan@iie.ac.cn" }, { "first": "Shangwen", "middle": [], "last": "Lv", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Chinese Academy of Sciences", "location": {} }, "email": "lvshangwen@iie.ac.cn" }, { "first": "Mingming", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Chinese Academy of Sciences", "location": {} }, "email": "limingming@iie.ac.cn" }, { "first": "Wei", "middle": [], "last": "Zhou", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Chinese Academy of Sciences", "location": {} }, "email": "zhouwei@iie.ac.cn" }, { "first": "Fuqing", "middle": [], "last": "Zhu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Chinese Academy of Sciences", "location": {} }, "email": "zhufuqing@iie.ac.cn" }, { "first": "Jizhong", "middle": [], "last": "Han", "suffix": "", "affiliation": { "laboratory": "", "institution": "Chinese Academy of Sciences", "location": {} }, "email": "hanjizhong@iie.ac.cn" }, { "first": "Songlin", "middle": [], "last": "Hu", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Chinese Academy of Sciences", "location": {} }, "email": "husonglin@iie.ac.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Multi-turn retrieval-based conversation is an important task for building intelligent dialogue systems. Existing works mainly focus on matching candidate responses with every context utterance on multiple levels of granularity, which ignore the side effect of using excessive context information. Context utterances provide abundant information for extracting more matching features, but it also brings noise signals and unnecessary information. In this paper, we will analyze the side effect of using too many context utterances and propose a multi-hop selector network (MSN) to alleviate the problem. Specifically, MSN firstly utilizes a multi-hop selector to select the relevant utterances as context. Then, the model matches the filtered context with the candidate response and obtains a matching score. Experimental results show that MSN outperforms some state-of-the-art methods on three public multi-turn dialogue datasets.", "pdf_parse": { "paper_id": "D19-1011", "_pdf_hash": "", "abstract": [ { "text": "Multi-turn retrieval-based conversation is an important task for building intelligent dialogue systems. Existing works mainly focus on matching candidate responses with every context utterance on multiple levels of granularity, which ignore the side effect of using excessive context information. Context utterances provide abundant information for extracting more matching features, but it also brings noise signals and unnecessary information. In this paper, we will analyze the side effect of using too many context utterances and propose a multi-hop selector network (MSN) to alleviate the problem. Specifically, MSN firstly utilizes a multi-hop selector to select the relevant utterances as context. 
Then, the model matches the filtered context with the candidate response and obtains a matching score. Experimental results show that MSN outperforms some state-of-the-art methods on three public multi-turn dialogue datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Building a dialogue system that can naturally and consistently converse with humans has drawn increasing research interests in past years. Existing works on building dialogue systems include generation-based and retrieval-based methods. Compared with generation-based methods, retrieval-based methods have advantages in providing fluent and informative responses. Many industrial products have applied retrieval-based dialogue system, e.g., the E-commerce assistant AliMe Assist from Alibaba Group and the XiaoIce (Shum et al., 2018) from Microsoft.", "cite_spans": [ { "start": 514, "end": 533, "text": "(Shum et al., 2018)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Early studies (Tan et al., 2015; Wan et al., 2016) of retrieval-based dialogue system \u2020Equally contributed. * Corresponding author.", "cite_spans": [ { "start": 14, "end": 32, "text": "(Tan et al., 2015;", "ref_id": "BIBREF14" }, { "start": 33, "end": 50, "text": "Wan et al., 2016)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "focus on response selection for single-turn conversation. Recently, researchers have begun to pay attention to the multi-turn conversation, aiming at selecting the most related response from a set of candidates given the context utterances of a conversation. Some effective models, such as Sequential Matching Network (SMN) , Deep Attention Matching network (DAM) (Zhou et al., 2018c) , Multi-Representation Fusion Network (MFRN) (Tao et al., 2019) , have been proposed to capture the matching features on multiple levels of granularity (words, phrases, sentences, etc. ) and short-term and long-term dependencies among words. Previous works have shown that utilizing multiturn utterances can further improve the matching performance than only using single-turn utterance (i.e., last utterance). But context utterance is a \"double-edged sword\", it also provides a lot of noise while providing abundant information, which would influence the performance due to the sensitivity of these matching-based methods. , DAM (Zhou et al., 2018c ) from E-commerce Corpus. The scores in the table are matching scores predicted by the models.", "cite_spans": [ { "start": 364, "end": 384, "text": "(Zhou et al., 2018c)", "ref_id": "BIBREF28" }, { "start": 430, "end": 448, "text": "(Tao et al., 2019)", "ref_id": "BIBREF15" }, { "start": 537, "end": 569, "text": "(words, phrases, sentences, etc.", "ref_id": null }, { "start": 1015, "end": 1034, "text": "(Zhou et al., 2018c", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Dialogue Text SMN DAM Turn-1 A: Are there any discounts activities recently? Turn-2 B: No. Our product have been cheaper than before. Turn-3 A: Oh. Turn-4 B: Hum! Turn-5 A: I'll buy these nuts. Can you sell me cheaper? Turn-6 B: You can get some coupons on the homepage. Turn-7 A: Will you give me some nut clips? Turn-8 B: Of course we will. Turn-9 A: How many clips will you give? Resp-1 One clip for every package. 
(True) 0.832 0.854 Resp-2 OK, we will give you a coupons worth $1. (False) 0.925 0.947", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Turns", "sec_num": null }, { "text": "To illustrate the problem, we show an error case of SMN and DAM (Zhou et al., 2018c ) from E-commerce Corpus in Table 1 . We can see that although \"Resp-1\" is the right answer for utterance \"Turn-9\", the SMN and DAM mod-els still choose \"Resp-2\". Because it has more words overlap with context utterances, thus accumulating a larger similarity score. We can easily observe that \"Resp-2\" is relevant to former utterances (Turn-1 to Turn-6), but the topic has changed after \"Turn-6\". Besides, we can see that \"Turn-3\" and \"Turn-4\" do not provide any useful information for selecting candidate response. From this example, irrelevant context utterances may cause the models making simple mistakes that humans would not make. Furthermore, we conduct several adversarial experiments and the results show that these matching-based models are very sensitive to the adversarial samples.", "cite_spans": [ { "start": 64, "end": 83, "text": "(Zhou et al., 2018c", "ref_id": "BIBREF28" } ], "ref_spans": [ { "start": 112, "end": 119, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Turns", "sec_num": null }, { "text": "In this paper, we propose a multi-hop selector network to tackle the above problem. Intuitively, the closer the utterance to the response is, the more it reflects the intention of the last dialogue session. Thus, we firstly use the last utterance as key to select context utterances that are relevant to it on the word and sentence level. However, we find that there are many samples whose last utterance is very short and contains very limited information (such as \"good\", \"ok\"), which will cause the selectors to lose too much useful context information. Therefore, we propose multi-hop selectors to select more relevant context utterances, yielding k different context. Then, we fuse these selected context utterances and match it with candidate response. During the matching stage, the convolution neural network (CNN) is applied to extract matching features and the gated recurrent unit (GRU) is applied to learn the temporal relationship of utterances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Turns", "sec_num": null }, { "text": "The contributions of this paper are summarized as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Turns", "sec_num": null }, { "text": "\u2022 We find the noises in context utterances could influence the matching performance and design adversarial experiments to verify it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Turns", "sec_num": null }, { "text": "\u2022 We propose a unified network MSN to select relevant context utterances from word and utterance level and fuse the selected context to generate a better context representation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Turns", "sec_num": null }, { "text": "\u2022 Experimental results on three public datasets achieve significant improvement, which shows the effectiveness of MSN.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Turns", "sec_num": null }, { "text": "The outline of the paper is as follows. Section 2 introduces related works. Section 3 describes adversarial experiment to check how sensitivity of previous models to the context utterances. 
Section 4 describes every component of MSN model. Section 5 discusses the experiments and corresponding results. Section 6 discusses some experiments to explore the influence of hyper-parameters on performance. We conclude our work in Section 7.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Turns", "sec_num": null }, { "text": "With the development of natural language processing, building intelligent chatbots with data-driven approaches has drawn increasing attention in recent years. Existing works can be generally categorized into retrieval-based methods (Wan et al., 2016; Zhang et al., 2018; Tao et al., 2019) and generation-based methods (Shang et al., 2015; Serban et al., 2016; Wu et al., 2018; Zhou et al., 2018a,b) . In this work, we focus on retrieval-based method and study context-based response selection.", "cite_spans": [ { "start": 232, "end": 250, "text": "(Wan et al., 2016;", "ref_id": "BIBREF17" }, { "start": 251, "end": 270, "text": "Zhang et al., 2018;", "ref_id": "BIBREF24" }, { "start": 271, "end": 288, "text": "Tao et al., 2019)", "ref_id": "BIBREF15" }, { "start": 318, "end": 338, "text": "(Shang et al., 2015;", "ref_id": "BIBREF12" }, { "start": 339, "end": 359, "text": "Serban et al., 2016;", "ref_id": "BIBREF11" }, { "start": 360, "end": 376, "text": "Wu et al., 2018;", "ref_id": "BIBREF21" }, { "start": 377, "end": 398, "text": "Zhou et al., 2018a,b)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Early retrieval-based chatbots are devoted to response selection for single-turn conversation (Wang et al., 2013; Tan et al., 2015; . Recently, researchers have begun to turn to the multi-turn conversation. Lowe et al. (2015) use RNN to read context and response, use the last hidden states to represent context and response as two semantic vectors to measure their relevance. Zhou et al. (2016) perform context-response matching with a multi-view model on both word and utterance levels. Considering concatenating utterances in context may lose relationships among utterances or important contextual information, separately match the response with each utterance based on a convolutional neural network. This paradigm is applied in many subsequent works. Zhou et al. (2018c) consider the dependency relation among utterances based on the attention mechanism. Tao et al. (2019) fuse words, n-grams, and sub-sequences of utterances representations and capture both short-term and long-term dependencies among words.", "cite_spans": [ { "start": 94, "end": 113, "text": "(Wang et al., 2013;", "ref_id": "BIBREF18" }, { "start": 114, "end": 131, "text": "Tan et al., 2015;", "ref_id": "BIBREF14" }, { "start": 207, "end": 225, "text": "Lowe et al. (2015)", "ref_id": "BIBREF7" }, { "start": 377, "end": 395, "text": "Zhou et al. (2016)", "ref_id": "BIBREF27" }, { "start": 756, "end": 775, "text": "Zhou et al. (2018c)", "ref_id": "BIBREF28" }, { "start": 860, "end": 877, "text": "Tao et al. 
(2019)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Different from previous works, (i) we study the influence of using excessive context utterances, (ii) we explore how to filter out irrelevant context to improve the robustness of matching-based methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "To study how sensitive of the previous models Zhang et al., 2018; Zhou et al., 2018c; Tao et al., 2019) to the context utterances, we conduct several adversarial experiments inspired by (Jia and Liang, 2017) . We keep the training set unchanged and add some noises to the original test set. To be specific, we randomly sample 1\u223c3 words from context utterances and append them on every candidate response. In this way, we can obtain 3 different adversarial test sets: adversarial set1, adversarial set2, adversarial set3.", "cite_spans": [ { "start": 46, "end": 65, "text": "Zhang et al., 2018;", "ref_id": "BIBREF24" }, { "start": 66, "end": 85, "text": "Zhou et al., 2018c;", "ref_id": "BIBREF28" }, { "start": 86, "end": 103, "text": "Tao et al., 2019)", "ref_id": "BIBREF15" }, { "start": 186, "end": 207, "text": "(Jia and Liang, 2017)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Adversarial experiments", "sec_num": "3" }, { "text": "Then, we evaluate the models again to see how much will the performance change. To ensure the fairness of the experiments, we use the results from their papers for the original test set. Moreover, we use their open source code for adversarial experiments. We employ recall at position k in n candidates (R n @k) as the evaluation metric, which is the same as previous works. adversarial set3 R 10 @1 R 10 @2 R 10 @1 R 10 @2 R 10 @1 R 10 @2 R 10 @1 R 10 @2 The experimental results are shown in Table 2 . From the table, we can observe that the one-word noise will bring about 7% \u223c 13% absolute de-crease on R 10 @1 and the three-word noise brings about 20% R 10 @1 decrease. Thus, we can see that matching-based models Zhang et al., 2018; Zhou et al., 2018c; Tao et al., 2019) are very sensitive to small noises of the dataset. Moreover, using too many context utterances will greatly increase the probability of introducing noise. The results of MSN also show that filtering irrelevant utterances can effectively alleviate this problem and improve the robustness of matching-based models.", "cite_spans": [ { "start": 719, "end": 738, "text": "Zhang et al., 2018;", "ref_id": "BIBREF24" }, { "start": 739, "end": 758, "text": "Zhou et al., 2018c;", "ref_id": "BIBREF28" }, { "start": 759, "end": 776, "text": "Tao et al., 2019)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 494, "end": 501, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Adversarial experiments", "sec_num": "3" }, { "text": "Suppose that we have a data set", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Formalization", "sec_num": "4.1" }, { "text": "D = {U i , r i , y i } N i=1 , where U i = {u i1 , u i2 , .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Formalization", "sec_num": "4.1" }, { "text": ". . , u iL } represents a conversation context with L utterances and every utterance u ij contains T words. r i is a response candidate and y i \u2208 {0, 1} denotes a label. y i = 1 means r i is a proper response for U i , otherwise y i = 0. 
Our goal is to learn a matching model g(\u2022, \u2022) with D. For any context-response pair (U i , r i ), g(U i , r i ) measures the matching degree between U i and r i .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Formalization", "sec_num": "4.1" }, { "text": "To this end, we need to address two problems:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Formalization", "sec_num": "4.1" }, { "text": "(1) how to select proper context utterances from U i ; and (2) how to fuse these selected utterances together for a better representation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Formalization", "sec_num": "4.1" }, { "text": "We propose a multi-hop selector network (MSN) to model g(\u2022, \u2022). Figure 1 gives the architecture, which generally follows the representationmatching-aggregation framework Zhang et al., 2018; Zhou et al., 2018c; Tao et al., 2019) to match response with multi-turn context.", "cite_spans": [ { "start": 170, "end": 189, "text": "Zhang et al., 2018;", "ref_id": "BIBREF24" }, { "start": 190, "end": 209, "text": "Zhou et al., 2018c;", "ref_id": "BIBREF28" }, { "start": 210, "end": 227, "text": "Tao et al., 2019)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 64, "end": 72, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Model Overview", "sec_num": "4.2" }, { "text": "Different from previous works, we add a selection process before the above framework. MSN first constructs semantic representations at word level by an Attentive Module. Then, each utterance are packed as context or key and sent to the \"Hopk Selector\" to calculate relevance scores. The scores of k different selectors are fused together by a Context Fusion module. Finally, the fused scores are performed over original context utterances to filter out irrelevant context. The rest context utterances are applied for response matching.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Overview", "sec_num": "4.2" }, { "text": "We use the Attentive Module to learn the context information for word representation. Attentive Module is proposed in DAM (Zhou et al., 2018c) and it is a variant of Multi-head Attention (Vaswani et al., 2017) . Figure 2 shows its structure. 
The AttentiveModule(Q, K, V ) has three input sentences: the query sentence, the key sentence and the value sentence, namely Q \u2208 R nq\u00d7d , K \u2208 R n k \u00d7d , and V \u2208 R nv\u00d7d respectively, where n q , n k , and n v denote the number of words in each sentence, and d is the dimension of the embedding.", "cite_spans": [ { "start": 122, "end": 142, "text": "(Zhou et al., 2018c)", "ref_id": "BIBREF28" }, { "start": 187, "end": 209, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 212, "end": 220, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Attentive Module", "sec_num": "4.3" }, { "text": "The Attentive Module first takes each word in the query sentence to attend to words in the key sentence via Scaled Dot-Product Attention (Vaswani et al., 2017) , and then applies those attention weights upon the value sentence:", "cite_spans": [ { "start": 137, "end": 159, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Attentive Module", "sec_num": "4.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "V att = sof tmax QK T \u221a d V .", "eq_num": "(1)" } ], "section": "Attentive Module", "sec_num": "4.3" }, { "text": "Then, a feed-forward network (FFN) with RELU (LeCun et al., 2015) activation is applied upon the normalization result, to further process the fused embeddings:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attentive Module", "sec_num": "4.3" }, { "text": "FFN(x) = max(0, xW 1 + b 1 )W 2 + b 2 , (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attentive Module", "sec_num": "4.3" }, { "text": "where x is a 2D matrix in the same shape of query sentence Q and W 1 , b 1 ,W 2 , b 2 are learnt parameters. The result FFN(x) is a 2D matrix that has the same shape as x, FFN(x) is then residually added to x, and the fusion result is then normalized as the final outputs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attentive Module", "sec_num": "4.3" }, { "text": "Given", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Selector", "sec_num": "4.4" }, { "text": "U i = [u i1 , . . . , u ij , . . . , u iL ]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Selector", "sec_num": "4.4" }, { "text": ", the wordlevel embedding representations for utterance u ij \u2208 R T \u00d7d , where d is the dimension of word vector, we use the Attentive Module to reconstruct the word representations of each utterance to encode the context and dependency information into word, which is formulated as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Selector", "sec_num": "4.4" }, { "text": "u ij = AttentiveModule(u ij , u ij , u ij ) , (3) where u ij \u2208 R T \u00d7d . U i = [u i1 , u i2 , . . . , u iL ] \u2208 R L\u00d7T \u00d7d .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Selector", "sec_num": "4.4" }, { "text": "We first discuss how to construct \"Hop1 Selector\", which consists of word and utterance selector. 
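Before detailing the two selectors, a minimal single-head PyTorch sketch of the Attentive Module of Eqs. (1)-(3) may help; it assumes the intermediate x is the layer-normalized sum of the query and V_att, and omits multi-head splitting and dropout:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveModule(nn.Module):
    """Minimal single-head sketch of AttentiveModule(Q, K, V), Eqs. (1)-(2)."""

    def __init__(self, d):
        super().__init__()
        self.ffn = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))
        self.norm1 = nn.LayerNorm(d)
        self.norm2 = nn.LayerNorm(d)
        self.scale = d ** 0.5

    def forward(self, Q, K, V):
        # Scaled dot-product attention of query words over key words,
        # applied to the value sentence (Eq. 1).
        V_att = F.softmax(Q @ K.transpose(-2, -1) / self.scale, dim=-1) @ V
        x = self.norm1(Q + V_att)           # residual fusion with the query, then normalization
        return self.norm2(x + self.ffn(x))  # FFN with ReLU (Eq. 2), residual add, final norm

# Word representations of one utterance u_ij with T=50 words and d=200 (Eq. 3):
# u_hat = AttentiveModule(200)(u, u, u), where u has shape (50, 200).
```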
To capture matching features at multiple levels of granularity, we leverage word and utterance level matching features to select relevant context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Selector", "sec_num": "4.4" }, { "text": "At word level, we utilize cross attention to obtain a matching feature map for each context utterance u ij and key K 1 = u iL , which is formulated as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Selector", "sec_num": "4.4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "A = v T tanh(K T 1 WU i + b) ,", "eq_num": "(4)" } ], "section": "Word Selector", "sec_num": "4.4.1" }, { "text": "where W \u2208 R d\u00d7d\u00d7h , b \u2208 R h and v \u2208 R h\u00d71 . And we get a word alignment matrix A \u2208 R L\u00d7T \u00d7T . Then, we extract the most prominent matching features from A by max pooling over row and column. Then, they are concatenated together:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Selector", "sec_num": "4.4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "m 1 (K 1 , U i ) = [ max dim=2 A; max dim=3 A] ,", "eq_num": "(5)" } ], "section": "Word Selector", "sec_num": "4.4.1" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Selector", "sec_num": "4.4.1" }, { "text": "m 1 (K 1 , U i ) \u2208 R L\u00d72T", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Selector", "sec_num": "4.4.1" }, { "text": ", which reflects which words have identical or similar meaning between utterances u ij and key u iL . The matching features are transformed to the relevance score by a linear layer:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Selector", "sec_num": "4.4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "s 1 = softmax(m 1 (K 1 , U i )c + b) ,", "eq_num": "(6)" } ], "section": "Word Selector", "sec_num": "4.4.1" }, { "text": "where c \u2208 R 2T \u00d71 and b \u2208 R L\u00d71 . The word selector can only capture word-level relevance between key and utterances. It can not reflect whether key and context are compatible on the overall semantic level. 
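A shape-level PyTorch sketch of this word selector (Eqs. 4-6) is given below; the bilinear map W in R^{d x d x h} is realised with einsum, h is a hypothetical feature size, and parameter initialisation is simplified:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WordSelector(nn.Module):
    """Sketch of the word-level selector (Eqs. 4-6) for L utterances of T words, dim d."""

    def __init__(self, d, T, h=16):
        super().__init__()
        self.W = nn.Parameter(torch.randn(d, d, h) * 0.02)  # bilinear map of Eq. (4)
        self.b = nn.Parameter(torch.zeros(h))
        self.v = nn.Parameter(torch.randn(h) * 0.02)
        self.out = nn.Linear(2 * T, 1)                       # c and b of Eq. (6)

    def forward(self, U, key):
        # U: (L, T, d) context word representations; key: (T, d) key utterance u_iL.
        # Alignment matrix A[j, p, q] = v^T tanh(key_p W U_{j,q} + b) -> (L, T, T), Eq. (4).
        feat = torch.tanh(torch.einsum('pa,abh,jqb->jpqh', key, self.W, U) + self.b)
        A = torch.einsum('jpqh,h->jpq', feat, self.v)
        # Max-pool over the rows and columns of each T x T map, then concatenate, Eq. (5).
        m1 = torch.cat([A.max(dim=1).values, A.max(dim=2).values], dim=-1)   # (L, 2T)
        # Linear layer + softmax over utterances gives word-level relevance scores, Eq. (6).
        return F.softmax(self.out(m1).squeeze(-1), dim=0)                    # s_1: (L,)
```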
Thus, we continue to evaluate the relevance on the utterance level.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Selector", "sec_num": "4.4.1" }, { "text": "Firstly, the word-level representations U i are transformed to utterance-level representations by mean pooling over word dimension:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utterance Selector", "sec_num": "4.4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "U i = mean(U i ) ,", "eq_num": "(7)" } ], "section": "Utterance Selector", "sec_num": "4.4.2" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utterance Selector", "sec_num": "4.4.2" }, { "text": "U i \u2208 R L\u00d7d .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utterance Selector", "sec_num": "4.4.2" }, { "text": "We use cosine similarity to measure the relevance between key K 2 = U iL and context utterances U i , which is formulated as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utterance Selector", "sec_num": "4.4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "s 2 = U i K T 2 || U i || 2 K 2 2 ,", "eq_num": "(8)" } ], "section": "Utterance Selector", "sec_num": "4.4.2" }, { "text": "where s 2 \u2208 R L\u00d71 is the relevance score at utterance level. Both the scores of word selector and utterance selector are important to measure the relevance of last utterance and context. In order to make full use of word and utterance selectors, we design a combined strategy to fuse two scores. Specifically, we use the weighted sum of two scores for selection:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utterance Selector", "sec_num": "4.4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "s (1) = \u03b1 * s1 + (1 \u2212 \u03b1) * s2 ,", "eq_num": "(9)" } ], "section": "Utterance Selector", "sec_num": "4.4.2" }, { "text": "where \u03b1 is a hyper-parameter and s (1) is the final score that hop1 selector produces. The default value of \u03b1 is set to 0.5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utterance Selector", "sec_num": "4.4.2" }, { "text": "Although \"Hop1 Selector\" can choose proper context utterances that are related to the last dialogue session, we find that there are many samples whose last utterance contains very little information (such as \"good\", \"ok\"), which will cause the selector lose too much useful context information. Thus, we combine it with u i,L\u22121 , u i,L\u22122 , ..., u i,L\u2212k by mean pooling. Then, we treat them as key to conduct the same process as \"Hop1 Selector\" for context selection. In this way, we can get k different selectors, yielding k different scores", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hopk Selector", "sec_num": "4.4.3" }, { "text": "S = [s (1) , s (2) , . . . , s (k) ] \u2208 R L\u00d7k .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hopk Selector", "sec_num": "4.4.3" }, { "text": "Then we fuse the similarity scores from different selectors and apply it to select relevant context utterances for matching. 
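Before describing the fusion in detail, the utterance-level selector and the per-hop combination of scores (Eqs. 7-9) can be sketched as follows (a simplified sketch; the hop-k key construction appears only as a comment):

```python
import torch
import torch.nn.functional as F

def utterance_selector(U_words, key_words):
    """Utterance-level selector (Eqs. 7-8).
    U_words: (L, T, d) word representations; key_words: (T, d) key utterance."""
    U_sent = U_words.mean(dim=1)                     # (L, d) mean pooling over words, Eq. (7)
    key = key_words.mean(dim=0, keepdim=True)        # (1, d) sentence vector of the key
    return F.cosine_similarity(U_sent, key, dim=-1)  # s_2: (L,) cosine relevance, Eq. (8)

def hop_score(s1, s2, alpha=0.5):
    """Weighted combination of word- and utterance-level scores, Eq. (9)."""
    return alpha * s1 + (1.0 - alpha) * s2

# For the hop-k selector, the key is formed by mean-pooling the last k utterances,
# e.g. key_words_k = U_words[-k:].mean(dim=0) in this sketch, before reusing both selectors.
```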
Firstly, we combine the similarity scores S \u2208 R L\u00d7k to form the final scores for each context utterances and filter out irrelevant context, which is formulated as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Fusion", "sec_num": "4.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "s = SW T , s = s (sigmoid(s ) \u2265 \u03b3) ,", "eq_num": "(10)" } ], "section": "Context Fusion", "sec_num": "4.5" }, { "text": "where W \u2208 R 1\u00d7k is a dynamic weight vector and will be tuned by the gradient. \u03b3 is the threshold and will be tuned according to the dataset. The default value of \u03b3 can be set to 0.5. The utterances whose scores are below \u03b3 will be allocated lower weights or filtered out. Then, we multiply the mask weight s and context utterances to filter irrelevant context:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Fusion", "sec_num": "4.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "U i = s U i ,", "eq_num": "(11)" } ], "section": "Context Fusion", "sec_num": "4.5" }, { "text": "and generate\u00db i \u2208 R L\u00d7T \u00d7d , where U i \u2208 R L\u00d7T \u00d7d is the original utterances tensor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Fusion", "sec_num": "4.5" }, { "text": "Similar to DAM (Zhou et al., 2018c) , we utilize the self and cross matching paradigm to construct better matching feature maps.", "cite_spans": [ { "start": 15, "end": 35, "text": "(Zhou et al., 2018c)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Utterance-Response Matching", "sec_num": "4.6" }, { "text": "Given the filtered utterances\u00db i = [\u00fb i1 , . . . ,\u00fb ij , . . . ,\u00fb iL ] and candidate response r i \u2208 R T \u00d7d , they are then used to construct a word-word similarity matrix M 1 \u2208 R L\u00d72\u00d7T \u00d7T by dot product and cosine similarity. Both of them are stacked together as the channel dimension. The process can be formulated as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Origin Matching", "sec_num": "4.6.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "M 1 = [\u00db i A 1 r T i ; cos(\u00db i , r i )] .", "eq_num": "(12)" } ], "section": "Origin Matching", "sec_num": "4.6.1" }, { "text": "where A 1 \u2208 R d\u00d7d is a linear transformation matrix.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Origin Matching", "sec_num": "4.6.1" }, { "text": "Then, we use the Attentive Module over word dimension to construct multi-grained representations, which is formulated as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Self Matching", "sec_num": "4.6.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "U self i = AttentiveModule(\u00dbi,\u00dbi,\u00dbi) , r self i = AttentiveModule(ri, ri, ri) .", "eq_num": "(13)" } ], "section": "Self Matching", "sec_num": "4.6.2" }, { "text": "By this means, the words in each utterance or candidate response are connected together repeatedly to combine more and more overall characterizations. 
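The three matching maps M 1, M 2 and M 3 share the same two-channel construction, stacking a transformed dot-product channel and a cosine channel; a sketch of this shared step, under the reading of Eqs. (12), (14) and (16):

```python
import torch
import torch.nn.functional as F

def matching_map(U, R, A):
    """Two-channel word-word similarity map used for M1, M2 and M3 (Eqs. 12, 14, 16).
    U: (L, T, d) utterance words; R: (T, d) response words; A: (d, d) linear map."""
    dot = torch.einsum('ltd,de,se->lts', U, A, R)    # transformed dot product U A R^T
    cos = torch.einsum('ltd,sd->lts',
                       F.normalize(U, dim=-1), F.normalize(R, dim=-1))  # cosine similarity
    return torch.stack([dot, cos], dim=1)            # (L, 2, T, T), stacked as channels

# M1, M2 and M3 are built from the filtered, self-matched and cross-matched representations
# respectively, then concatenated along the channel dimension into M of shape (L, 6, T, T).
```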
Different from DAM (Zhou et al., 2018c) , we do not stack many Attentive Module layers because it will drastically increase the computational expense. Then, we use them to construct M 2 \u2208 R L\u00d72\u00d7T \u00d7T , whose element is", "cite_spans": [ { "start": 170, "end": 190, "text": "(Zhou et al., 2018c)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Self Matching", "sec_num": "4.6.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "M 2 = [\u00db self i A 2 (r self i ) T ; cos(\u00db self i , r self i )] ,", "eq_num": "(14)" } ], "section": "Self Matching", "sec_num": "4.6.2" }, { "text": "where A 2 \u2208 R d\u00d7d is a linear transformation matrix.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Self Matching", "sec_num": "4.6.2" }, { "text": "Similarly, we build the semantic association between every utterance and response by the attentive module:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross Matching", "sec_num": "4.6.3" }, { "text": "U cross i = AttentiveModule(\u00dbi, ri, ri) , r cross i = AttentiveModule(ri,\u00dbi,\u00dbi) . (15)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross Matching", "sec_num": "4.6.3" }, { "text": "In this way, we can make the inter-dependent segment pairs close to each other, and aliment scores between those latently inter-dependent pairs could get increased, which will better encode the dependency relation into representation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross Matching", "sec_num": "4.6.3" }, { "text": "Finally, we use\u00db cross i and r cross i to construct M 3 \u2208 R L\u00d72\u00d7T \u00d7T , whose element is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross Matching", "sec_num": "4.6.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "M 3 = [\u00db cross i A 3 (r cross i ) T ; cos(\u00db cross i , r cross i )] ,", "eq_num": "(16)" } ], "section": "Cross Matching", "sec_num": "4.6.3" }, { "text": "where A 3 \u2208 R d\u00d7d is a linear transformation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross Matching", "sec_num": "4.6.3" }, { "text": "M = [M 1 ; M 2 ; M 3 ] \u2208 R L\u00d76\u00d7T", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MSN aggregates all the matching matrices together", "sec_num": null }, { "text": "\u00d7T and applies 2D CNN and max pooling for matching feature extraction and use GRU to model the temporal relationship of utterances in the context, which is the same as SMN .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MSN aggregates all the matching matrices together", "sec_num": null }, { "text": "Then we compute matching score g(U i , r i ) based on the matching features. 
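A compressed sketch of this aggregation stage is shown below; a single convolution block with hypothetical channel and pooling sizes stands in for the three convolution layers used in the actual model:

```python
import torch
import torch.nn as nn

class Aggregation(nn.Module):
    """Sketch of the aggregation stage: a 2D CNN over the stacked matching maps of each
    utterance-response pair, then a GRU over the utterance sequence (layer sizes illustrative)."""

    def __init__(self, T=50, hidden=300):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(6, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=3),
        )
        self.proj = nn.Linear(16 * (T // 3) * (T // 3), hidden)
        self.gru = nn.GRU(hidden, hidden, batch_first=True)

    def forward(self, M):
        # M: (L, 6, T, T) stacked matching maps for one context-response pair.
        L = M.size(0)
        v = torch.relu(self.proj(self.conv(M).reshape(L, -1)))  # matching features per utterance
        _, h_n = self.gru(v.unsqueeze(0))                        # GRU over the L utterances
        return h_n.squeeze(0).squeeze(0)                         # final hidden state h_L (Eq. 17)
```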
Specifically, we use the final state of GRU output h L as features and apply a single-layer perceptron to obtain score:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MSN aggregates all the matching matrices together", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "g(U i , r i ) = \u03c3(Wh L + b) ,", "eq_num": "(17)" } ], "section": "MSN aggregates all the matching matrices together", "sec_num": null }, { "text": "where W and b are learnt parameters, \u03c3(\u2022) is sigmoid activation function. Finally, the negative log-likelihood is used as a loss function to optimize the training process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MSN aggregates all the matching matrices together", "sec_num": null }, { "text": "We test MSN on three widely used multi-turn response selection datasets, the Ubuntu Corpus (Lowe et al., 2015) , the Douban Corpus and the E-commerce Corpus (Zhang et al., 2018) . Data statistics are in Table 3 . Ubuntu Corpus consists of English multi-turn conversations about technical support collected from chat logs of the Ubuntu forum.", "cite_spans": [ { "start": 91, "end": 110, "text": "(Lowe et al., 2015)", "ref_id": "BIBREF7" }, { "start": 157, "end": 177, "text": "(Zhang et al., 2018)", "ref_id": "BIBREF24" } ], "ref_spans": [ { "start": 203, "end": 210, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Dataset", "sec_num": "5.1" }, { "text": "Douban Corpus contains dyadic dialogs (conversation between two persons) longer than 2 turns from the Douban group 1 which is a popular social networking service in China.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "5.1" }, { "text": "E-commerce Corpus is collected from realworld conversations between customers and customer service staff from Taobao 2 , the largest ecommerce platform in China. The dataset contains diverse types of conversations (e.g. commodity consultation, logistics express, recommendation, and chitchat) concerning various commodities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "5.1" }, { "text": "Following the previous works Zhang et al., 2018; Chaudhuri et al., 2018; Tao et al., 2019) , we employ recall at position k in n candidates (R n @k) as evaluation metrics. Apart from R n @k, we use MAP (Mean Average Precision), MRR (Mean Reciprocal Rank), and Precision-atone P@1 especially for Douban corpus, which is the same as previous works Tao et al., 2019) . For some dialogues in Douban corpus have more than one true candidate response.", "cite_spans": [ { "start": 29, "end": 48, "text": "Zhang et al., 2018;", "ref_id": "BIBREF24" }, { "start": 49, "end": 72, "text": "Chaudhuri et al., 2018;", "ref_id": "BIBREF0" }, { "start": 73, "end": 90, "text": "Tao et al., 2019)", "ref_id": "BIBREF15" }, { "start": 346, "end": 363, "text": "Tao et al., 2019)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metric", "sec_num": "5.2" }, { "text": "Single-turn matching models: Basic models in (Lowe et al., 2015; Kadlec et al., 2015) including RNN, CNN are used in early works. Some advanced single-turn matching models, such as DL2R , Atten-LSTM (Tan et al., Table 4 : Experimental results on Ubuntu, Douban and E-commerce datasets. 
MRFN is the state-of-the-art model until this submission.", "cite_spans": [ { "start": 45, "end": 64, "text": "(Lowe et al., 2015;", "ref_id": "BIBREF7" }, { "start": 65, "end": 85, "text": "Kadlec et al., 2015)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 212, "end": 219, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Baseline Models", "sec_num": "5.3" }, { "text": "Douban Corpus E-commerce Corpus R10@1 R10@2 R10@5 MAP MRR P@1 R10@1 R10@2 R10@5 R10@1 R10@2 R10@5 TF-IDF (Lowe et al., 2015) 41.0 54.5 70.8 33.1 35.9 18.0 9.6 17.2 40.5 15.9 25.6 47.7 RNN (Lowe et al., 2015) 40.3 54.7 81.9 39.0 42.2 20.8 11.8 22.3 58.9 32.5 46.3 77.5 CNN (Kadlec et al., 2015) 54.9 68.4 89.6 41.7 44.0 22.6 12.1 25.2 64.7 32.8 51.5 79.2 LSTM (Kadlec et al., 2015) 63.8 78.4 94.9 48.5 53.7 32.0 18.7 34.3 72.0 36.5 53.6 82.8 BiLSTM (Kadlec et al., 2015) 63.0 78.0 94.4 47.9 51.4 31.3 18.4 33.0 71.6 35.5 52.5 82.5 DL2R 62.6 78.3 94.4 48.8 52.7 33.0 19.3 34.2 70.5 39.9 57.1 84.2 Atten-LSTM (Tan et al., 2015) 63 72.6 84.7 96.1 52.9 56.9 39.7 23.3 39.6 72.4 45.3 65.4 88.6 DUA (Zhang et al., 2018) 75.2 86.8 96.2 55.1 59.9 42.1 24.3 42.1 78.0 50.1 70.0 92.1 DAM (Zhou et al., 2018c) 76.7 87.4 96.9 55.0 60.1 42.7 25.4 41.0 75.7 ---MRFN (Tao et al., 2019) 78 2015), and MV-LSTM (Wan et al., 2016) are also explored in this work. These models concatenate all context utterances together to match a response.", "cite_spans": [ { "start": 105, "end": 124, "text": "(Lowe et al., 2015)", "ref_id": "BIBREF7" }, { "start": 188, "end": 207, "text": "(Lowe et al., 2015)", "ref_id": "BIBREF7" }, { "start": 272, "end": 293, "text": "(Kadlec et al., 2015)", "ref_id": "BIBREF3" }, { "start": 359, "end": 380, "text": "(Kadlec et al., 2015)", "ref_id": "BIBREF3" }, { "start": 448, "end": 469, "text": "(Kadlec et al., 2015)", "ref_id": "BIBREF3" }, { "start": 606, "end": 624, "text": "(Tan et al., 2015)", "ref_id": "BIBREF14" }, { "start": 692, "end": 712, "text": "(Zhang et al., 2018)", "ref_id": "BIBREF24" }, { "start": 777, "end": 797, "text": "(Zhou et al., 2018c)", "ref_id": "BIBREF28" }, { "start": 851, "end": 869, "text": "(Tao et al., 2019)", "ref_id": "BIBREF15" }, { "start": 884, "end": 910, "text": "MV-LSTM (Wan et al., 2016)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Models Ubuntu Corpus", "sec_num": null }, { "text": "Multi-turn matching models: Multi-view (Zhou et al., 2016) models utterances from word level view and utterance level view; DL2R model reformulates the message with other utterances in the context; SMN matches a response with each utterance in the context; DUA (Zhang et al., 2018) formulates previous utterances into context using a proposed deep utterance aggregation model; DAM (Zhou et al., 2018c) constructs representations of utterances in the context and the response with stacked selfattention and cross attention; MRFN (Tao et al., 2019) fuses multiple types of representations with a multi-representation fusion network for response matching.", "cite_spans": [ { "start": 39, "end": 58, "text": "(Zhou et al., 2016)", "ref_id": "BIBREF27" }, { "start": 261, "end": 281, "text": "(Zhang et al., 2018)", "ref_id": "BIBREF24" }, { "start": 381, "end": 401, "text": "(Zhou et al., 2018c)", "ref_id": "BIBREF28" }, { "start": 528, "end": 546, "text": "(Tao et al., 2019)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Models Ubuntu Corpus", "sec_num": null }, { "text": "Our model was implemented by PyTorch (Paszke et al., 2017) . 
Word embeddings were initialized by the results of word2vec (Mikolov et al., 2013) which ran on the dataset, and the dimensionality of word vectors is 200. The hyper-parameter k of selectors is set to 3. We use three convolution layers to extract matching features. The 1st convolution layer has 16 [3, 3] [3, 3] stride. We set the dimension of the hidden states of GRU as 300. The parameters were updated by Adam algorithm (Kingma and Ba, 2014) and the parameters of Adam, \u03b2 1 and \u03b2 2 are 0.9 and 0.999 respectively. The learning rate is initialized as 1e-3 and gradually decreased during training. Same as previous works Zhang et al., 2018) , the maximum utterance length is 50 and the maximum context length (i.e., number of utterances) as 10. Table 4 shows the results of MSN and all baseline models on the datasets. All the experimental results are cited from previous works (Zhang et al., 2018; Chaudhuri et al., 2018; Tao et al., 2019) . Referring to the table, MSN significantly outperforms all other models in terms of most of the metrics on the three datasets, including MRFN, which is the state-of-the-art model until this submission. MSN extends from SMN and DAM (Zhou et al., 2018c) , and it achieves more than 3% absolute improvement on R 10 @1 compared with SMN and DAM. The improvement also shows the importance of filtering irrelevant context before matching.", "cite_spans": [ { "start": 37, "end": 58, "text": "(Paszke et al., 2017)", "ref_id": "BIBREF10" }, { "start": 121, "end": 143, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF8" }, { "start": 360, "end": 363, "text": "[3,", "ref_id": null }, { "start": 364, "end": 366, "text": "3]", "ref_id": null }, { "start": 367, "end": 370, "text": "[3,", "ref_id": null }, { "start": 371, "end": 373, "text": "3]", "ref_id": null }, { "start": 684, "end": 703, "text": "Zhang et al., 2018)", "ref_id": "BIBREF24" }, { "start": 941, "end": 961, "text": "(Zhang et al., 2018;", "ref_id": "BIBREF24" }, { "start": 962, "end": 985, "text": "Chaudhuri et al., 2018;", "ref_id": "BIBREF0" }, { "start": 986, "end": 1003, "text": "Tao et al., 2019)", "ref_id": "BIBREF15" }, { "start": 1236, "end": 1256, "text": "(Zhou et al., 2018c)", "ref_id": "BIBREF28" } ], "ref_spans": [ { "start": 808, "end": 815, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Model Training", "sec_num": "5.4" }, { "text": "6 Further Analysis", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Result", "sec_num": "5.5" }, { "text": "We perform a series of ablation experiments over the different parts of the model to investigate their relative importance. Firstly, we use the complete MSN as the baseline. 
Then, we gradually remove its modules as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ablation Study", "sec_num": "6.1" }, { "text": "\u2022 w/o Word Selector: A model that is trained using the utterance selector but without the word selector.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ablation Study", "sec_num": "6.1" }, { "text": "\u2022 w/o Utterance Selector: A model which is trained without the utterance selector.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ablation Study", "sec_num": "6.1" }, { "text": "\u2022 Only Hop1 (Hop2, Hop3) Selector: A model which is trained only with hop1 or hop2 or hop3 selector.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ablation Study", "sec_num": "6.1" }, { "text": "\u2022 w/o Selector: Removing all selector modules and only use the attention module for matching. From experimental results in Table 5 , we can observe that:", "cite_spans": [], "ref_spans": [ { "start": 123, "end": 130, "text": "Table 5", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Ablation Study", "sec_num": "6.1" }, { "text": "(1) Compared with MSN base , removing selectors leads to performance degradation, which shows that the multi-hop selectors are indeed help to improve the selection performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ablation Study", "sec_num": "6.1" }, { "text": "(2) The performances decay a large margin when the word selector and utterance selector are removed, which proves that both word selector and utterance selector play an important role in selecting relevant context utterances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ablation Study", "sec_num": "6.1" }, { "text": "(3) For E-commerce dataset, the context selected by Hop1 selector is more important than other selectors. We think the main reason is that the dialogs in E-commerce corpus happen between buyers and sellers on the Taobao platform. The intent of the dialogue is very clear and the dialogue is mainly in the form of one question and one answer. So the last dialogue session has little dependency on the very far context. However, the fusion of these hop selectors' results still brings more performance improvement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ablation Study", "sec_num": "6.1" }, { "text": "The choices of k for selectors and threshold \u03b3 in formula (10) may influence the performance. Thus, we conduct a series of sensitivity analysis experiments on the development dataset to study how different choices of parameters influence the performance of the model. The k decides how many selectors that MSN uses to select relevant context utterances. Referring to Figure 3 (a) , only using hop1 selector is not better than using multiple selectors. However, the performance does not increase when k > 3. It is easy to see that when k is too large, the key will contain too many noises and cannot reflect the intention of the last dialogue session.", "cite_spans": [], "ref_spans": [ { "start": 367, "end": 379, "text": "Figure 3 (a)", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Parameter Sensitivity", "sec_num": "6.2" }, { "text": "Figure 3 (b) shows the performance with different threshold \u03b3. Intuitively, when \u03b3 is too large, the selectors will filter out too much context, which may hurt performance. However, when \u03b3 is too small, the selectors do not work very well. 
We can observe that MSN achieves the best performance when \u03b3 = 0.3 or 0.5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Sensitivity", "sec_num": "6.2" }, { "text": "In this paper, we analyze the side effect of using unnecessary context utterances and verify matchingbased models are very sensitive to the context. We propose a multi-hop selector network to alleviate this problem. Empirical results on three large-scale datasets demonstrate the effectiveness of the model in multi-turn response selection and yield new stateof-the-art results at the same time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "7" }, { "text": "In the future, we will study how to solve the logical consistency problem between utterances and candidate responses to improve selection performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "7" }, { "text": "https://www.douban.com/group 2 https://www.taobao.com", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We gratefully thank the anonymous reviewers for their insightful comments. This research is supported in part by the Beijing Municipal Science and Technology Project under Grant Z191100007119008 and Z181100002718004, the National Key Research and Development Program of China under Grant 2018YFC0806900 and 2017YFB1010000.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgement", "sec_num": "8" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Improving response selection in multi-turn dialogue systems by incorporating domain knowledge", "authors": [ { "first": "Debanjan", "middle": [], "last": "Chaudhuri", "suffix": "" }, { "first": "Agustinus", "middle": [], "last": "Kristiadi", "suffix": "" }, { "first": "Jens", "middle": [], "last": "Lehmann", "suffix": "" }, { "first": "Asja", "middle": [], "last": "Fischer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 22nd Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "497--507", "other_ids": {}, "num": null, "urls": [], "raw_text": "Debanjan Chaudhuri, Agustinus Kristiadi, Jens Lehmann, and Asja Fischer. 2018. Improving response selection in multi-turn dialogue systems by incorporating domain knowledge. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 497-507.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Neural natural language inference models enhanced with external knowledge", "authors": [ { "first": "Qian", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Zhen-Hua", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Diana", "middle": [], "last": "Inkpen", "suffix": "" }, { "first": "Si", "middle": [], "last": "Wei", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "2406--2417", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, and Si Wei. 2018. Neural natural language inference models enhanced with external knowledge. 
In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 2406-2417.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Adversarial examples for evaluating reading comprehension systems", "authors": [ { "first": "Robin", "middle": [], "last": "Jia", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2021--2031", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robin Jia and Percy Liang. 2017. Adversarial exam- ples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empiri- cal Methods in Natural Language Processing, pages 2021-2031.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Improved deep learning baselines for ubuntu corpus dialogs", "authors": [ { "first": "Rudolf", "middle": [], "last": "Kadlec", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Schmid", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1510.03753" ] }, "num": null, "urls": [], "raw_text": "Rudolf Kadlec, Martin Schmid, and Jan Kleindienst. 2015. Improved deep learning baselines for ubuntu corpus dialogs. arXiv preprint arXiv:1510.03753.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.6980" ] }, "num": null, "urls": [], "raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Deep learning. nature", "authors": [ { "first": "Yann", "middle": [], "last": "Lecun", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" } ], "year": 2015, "venue": "", "volume": "521", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. 2015. Deep learning. 
nature, 521(7553):436.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Alime assist: an intelligent assistant for creating an innovative e-commerce experience", "authors": [ { "first": "Feng-Lin", "middle": [], "last": "Li", "suffix": "" }, { "first": "Minghui", "middle": [], "last": "Qiu", "suffix": "" }, { "first": "Haiqing", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Xiongwei", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xing", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Juwei", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Zhongzhou", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Weipeng", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 ACM on Conference on Information and Knowledge Management", "volume": "", "issue": "", "pages": "2495--2498", "other_ids": {}, "num": null, "urls": [], "raw_text": "Feng-Lin Li, Minghui Qiu, Haiqing Chen, Xiong- wei Wang, Xing Gao, Jun Huang, Juwei Ren, Zhongzhou Zhao, Weipeng Zhao, Lei Wang, et al. 2017. Alime assist: an intelligent assistant for cre- ating an innovative e-commerce experience. In Pro- ceedings of the 2017 ACM on Conference on Infor- mation and Knowledge Management, pages 2495- 2498. ACM.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems", "authors": [ { "first": "Ryan", "middle": [], "last": "Lowe", "suffix": "" }, { "first": "Nissan", "middle": [], "last": "Pow", "suffix": "" }, { "first": "Iulian", "middle": [], "last": "Serban", "suffix": "" }, { "first": "Joelle", "middle": [], "last": "Pineau", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue", "volume": "", "issue": "", "pages": "285--294", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dia- logue systems. In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 285-294.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. 
In Advances in neural information processing systems, pages 3111-3119.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Natural language inference by tree-based convolution and heuristic matching", "authors": [ { "first": "Lili", "middle": [], "last": "Mou", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Men", "suffix": "" }, { "first": "Ge", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Lu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Yan", "suffix": "" }, { "first": "Zhi", "middle": [], "last": "Jin", "suffix": "" } ], "year": 2016, "venue": "The 54th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lili Mou, Rui Men, Ge Li, Yan Xu, Lu Zhang, Rui Yan, and Zhi Jin. 2016. Natural language inference by tree-based convolution and heuristic matching. In The 54th Annual Meeting of the Association for Computational Linguistics, page 130.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Automatic differentiation in pytorch", "authors": [ { "first": "Adam", "middle": [], "last": "Paszke", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Gross", "suffix": "" }, { "first": "Soumith", "middle": [], "last": "Chintala", "suffix": "" }, { "first": "Gregory", "middle": [], "last": "Chanan", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zachary", "middle": [], "last": "Devito", "suffix": "" }, { "first": "Zeming", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Alban", "middle": [], "last": "Desmaison", "suffix": "" }, { "first": "Luca", "middle": [], "last": "Antiga", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Lerer", "suffix": "" } ], "year": 2017, "venue": "NIPS-W", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch. In NIPS-W.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Building end-to-end dialogue systems using generative hierarchical neural network models", "authors": [ { "first": "Iulian", "middle": [ "V" ], "last": "Serban", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Sordoni", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Courville", "suffix": "" }, { "first": "Joelle", "middle": [], "last": "Pineau", "suffix": "" } ], "year": 2016, "venue": "Thirtieth AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. 
In Thirtieth AAAI Conference on Artificial Intelligence.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Neural responding machine for short-text conversation", "authors": [ { "first": "Lifeng", "middle": [], "last": "Shang", "suffix": "" }, { "first": "Zhengdong", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Hang", "middle": [], "last": "Li", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "1577--1586", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 1577-1586.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "From eliza to xiaoice: challenges and opportunities with social chatbots", "authors": [ { "first": "Heung-Yeung", "middle": [], "last": "Shum", "suffix": "" }, { "first": "Xiao-Dong", "middle": [], "last": "He", "suffix": "" }, { "first": "Di", "middle": [], "last": "Li", "suffix": "" } ], "year": 2018, "venue": "Frontiers of Information Technology & Electronic Engineering", "volume": "19", "issue": "1", "pages": "10--26", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heung-Yeung Shum, Xiao-dong He, and Di Li. 2018. From eliza to xiaoice: challenges and opportunities with social chatbots. Frontiers of Information Technology & Electronic Engineering, 19(1):10-26.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Lstm-based deep learning models for nonfactoid answer selection", "authors": [ { "first": "Ming", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Cicero", "middle": [], "last": "Dos Santos", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Xiang", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ming Tan, Cicero Dos Santos, Bing Xiang, and Bowen Zhou. 2015. Lstm-based deep learning models for nonfactoid answer selection. In Proceedings of the International Conference on Learning Representations.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Multi-representation fusion network for multi-turn response selection in retrieval-based chatbots", "authors": [ { "first": "Chongyang", "middle": [], "last": "Tao", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Can", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Wenpeng", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Dongyan", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Yan", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining", "volume": "", "issue": "", "pages": "267--275", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chongyang Tao, Wei Wu, Can Xu, Wenpeng Hu, Dongyan Zhao, and Rui Yan. 2019. Multi-representation fusion network for multi-turn response selection in retrieval-based chatbots. 
In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, pages 267-275. ACM.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Match-srnn: modeling the recursive matching structure with spatial rnn", "authors": [ { "first": "Shengxian", "middle": [], "last": "Wan", "suffix": "" }, { "first": "Yanyan", "middle": [], "last": "Lan", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Jiafeng", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Liang", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Xueqi", "middle": [], "last": "Cheng", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "2922--2928", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shengxian Wan, Yanyan Lan, Jun Xu, Jiafeng Guo, Liang Pang, and Xueqi Cheng. 2016. Match-srnn: modeling the recursive matching structure with spatial rnn. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, pages 2922-2928. AAAI Press.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A dataset for research on short-text conversations", "authors": [ { "first": "Hao", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Zhengdong", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Hang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Enhong", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "935--945", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hao Wang, Zhengdong Lu, Hang Li, and Enhong Chen. 2013. A dataset for research on short-text conversations. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 935-945.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Learning natural language inference with lstm", "authors": [ { "first": "Shuohang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jing", "middle": [], "last": "Jiang", "suffix": "" } ], "year": 2016, "venue": "Proceedings of NAACL-HLT", "volume": "", "issue": "", "pages": "1442--1451", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shuohang Wang and Jing Jiang. 2016. 
Learning natural language inference with lstm. In Proceedings of NAACL-HLT, pages 1442-1451.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Sequential matching network: A new architecture for multi-turn response selection in retrieval-based chatbots", "authors": [ { "first": "Yu", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Chen", "middle": [], "last": "Xing", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Zhoujun", "middle": [], "last": "Li", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "496--505", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yu Wu, Wei Wu, Chen Xing, Ming Zhou, and Zhoujun Li. 2017. Sequential matching network: A new architecture for multi-turn response selection in retrieval-based chatbots. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 496-505.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Neural response generation with dynamic vocabularies", "authors": [ { "first": "Yu", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Dejian", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Can", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Zhoujun", "middle": [], "last": "Li", "suffix": "" } ], "year": 2018, "venue": "Thirty-Second AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yu Wu, Wei Wu, Dejian Yang, Can Xu, and Zhoujun Li. 2018. Neural response generation with dynamic vocabularies. In Thirty-Second AAAI Conference on Artificial Intelligence.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Topic aware neural response generation", "authors": [ { "first": "Chen", "middle": [], "last": "Xing", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Jie", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yalou", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Wei-Ying", "middle": [], "last": "Ma", "suffix": "" } ], "year": 2017, "venue": "Thirty-First AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen Xing, Wei Wu, Yu Wu, Jie Liu, Yalou Huang, Ming Zhou, and Wei-Ying Ma. 2017. Topic aware neural response generation. In Thirty-First AAAI Conference on Artificial Intelligence.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Learning to respond with deep neural networks for retrieval-based human-computer conversation system", "authors": [ { "first": "Rui", "middle": [], "last": "Yan", "suffix": "" }, { "first": "Yiping", "middle": [], "last": "Song", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval", "volume": "", "issue": "", "pages": "55--64", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rui Yan, Yiping Song, and Hua Wu. 2016. 
Learning to respond with deep neural networks for retrieval-based human-computer conversation system. In Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval, pages 55-64. ACM.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Modeling multi-turn conversation with deep utterance aggregation", "authors": [ { "first": "Zhuosheng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jiangtong", "middle": [], "last": "Li", "suffix": "" }, { "first": "Pengfei", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Gongshen", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "3740--3752", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhuosheng Zhang, Jiangtong Li, Pengfei Zhu, Hai Zhao, and Gongshen Liu. 2018. Modeling multi-turn conversation with deep utterance aggregation. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3740-3752.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Emotional chatting machine: Emotional conversation generation with internal and external memory", "authors": [ { "first": "Hao", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Minlie", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Tianyang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xiaoyan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2018, "venue": "Thirty-Second AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hao Zhou, Minlie Huang, Tianyang Zhang, Xiaoyan Zhu, and Bing Liu. 2018a. Emotional chatting machine: Emotional conversation generation with internal and external memory. In Thirty-Second AAAI Conference on Artificial Intelligence.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Commonsense knowledge aware conversation generation with graph attention", "authors": [ { "first": "Hao", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Young", "suffix": "" }, { "first": "Minlie", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Haizhou", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Jingfang", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Xiaoyan", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2018, "venue": "IJCAI", "volume": "", "issue": "", "pages": "4623--4629", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hao Zhou, Tom Young, Minlie Huang, Haizhou Zhao, Jingfang Xu, and Xiaoyan Zhu. 2018b. Commonsense knowledge aware conversation generation with graph attention. 
In IJCAI, pages 4623-4629.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Multi-view response selection for human-computer conversation", "authors": [ { "first": "Xiangyang", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Daxiang", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Shiqi", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Dianhai", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Tian", "suffix": "" }, { "first": "Xuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Yan", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "372--381", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiangyang Zhou, Daxiang Dong, Hua Wu, Shiqi Zhao, Dianhai Yu, Hao Tian, Xuan Liu, and Rui Yan. 2016. Multi-view response selection for human-computer conversation. In Proceedings of the 2016 Confer- ence on Empirical Methods in Natural Language Processing, pages 372-381.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Multi-turn response selection for chatbots with deep attention matching network", "authors": [ { "first": "Xiangyang", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Lu", "middle": [], "last": "Li", "suffix": "" }, { "first": "Daxiang", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Ying", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Wayne", "middle": [ "Xin" ], "last": "Zhao", "suffix": "" }, { "first": "Dianhai", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1118--1127", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiangyang Zhou, Lu Li, Daxiang Dong, Yi Liu, Ying Chen, Wayne Xin Zhao, Dianhai Yu, and Hua Wu. 2018c. Multi-turn response selection for chatbots with deep attention matching network. In Proceed- ings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 1118-1127.", "links": null } }, "ref_entries": { "FIGREF1": { "num": null, "uris": null, "type_str": "figure", "text": "Architecture of multi-hop selector network." }, "FIGREF2": { "num": null, "uris": null, "type_str": "figure", "text": "Architecture of Attentive Module." }, "FIGREF3": { "num": null, "uris": null, "type_str": "figure", "text": "Effects of threshold \u03b3." }, "FIGREF4": { "num": null, "uris": null, "type_str": "figure", "text": "Parameter sensitivity analysis on the development datasets of Ubuntu, Douban, and E-commerce Corpus." }, "TABREF0": { "content": "", "html": null, "text": "An error case of SMN", "num": null, "type_str": "table" }, "TABREF1": { "content": "
Models original test set adversarial set1 adversarial set2
", "html": null, "text": "Adversarial experimental results on Ubuntu Dialogue Corpus. The results of SMN, DUA(Zhang et al., 2018), DAM(Zhou et al., 2018c), MFRN(Tao et al., 2019) on original test set are cited from their papers.", "num": null, "type_str": "table" }, "TABREF3": { "content": "
UbuntuDoubanE-commerce
ModelsTrain Val Test Train Val Test Train Val Test
#context-response pairs 1M 500K 500K 1M 50K 50K 1M 10K 10K
#candidates per context 2 10 10 2 2 10 2 2 10
Avg #turns per context 10.13 10.11 10.11 6.69 6.75 6.45 5.51 5.48 5.64
Avg #words per utterance 11.35 11.34 11.37 18.56 18.50 20.74 7.02 6.99 7.11
", "html": null, "text": "Data statistics for Ubuntu, Douban and Ecommerce datasets.", "num": null, "type_str": "table" }, "TABREF7": { "content": "
Model R10@1 R10@2 R10@5
MSN base60.677.093.7
w/o Selector55.474.292.5
w/o Word Selector59.376.592.4
w/o Utterance Selector 58.675.392.8
Only Hop1 Selector58.374.993.3
Only Hop2 Selector56.876.794.6
Only Hop3 Selector56.674.794.0
", "html": null, "text": "Ablation study on E-commerce corpus.", "num": null, "type_str": "table" } } } }