{ "paper_id": "D19-1013", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:59:35.698501Z" }, "title": "Entity-Consistent End-to-end Task-Oriented Dialogue System with KB Retriever", "authors": [ { "first": "Libo", "middle": [], "last": "Qin", "suffix": "", "affiliation": { "laboratory": "", "institution": "Harbin Institute of Technology", "location": { "country": "China" } }, "email": "lbqin@ir.hit.edu.cn" }, { "first": "Yijia", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Harbin Institute of Technology", "location": { "country": "China" } }, "email": "yjliu@ir.hit.edu.cn" }, { "first": "Wanxiang", "middle": [], "last": "Che", "suffix": "", "affiliation": { "laboratory": "", "institution": "Harbin Institute of Technology", "location": { "country": "China" } }, "email": "" }, { "first": "Haoyang", "middle": [], "last": "Wen", "suffix": "", "affiliation": { "laboratory": "", "institution": "Harbin Institute of Technology", "location": { "country": "China" } }, "email": "hywen@ir.hit.edu.cn" }, { "first": "Yangming", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "Harbin Institute of Technology", "location": { "country": "China" } }, "email": "yangmingli@ir.hit.edu.cn" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Harbin Institute of Technology", "location": { "country": "China" } }, "email": "tliu@ir.hit.edu.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Querying the knowledge base (KB) has long been a challenge in the end-to-end taskoriented dialogue system. Previous sequenceto-sequence (Seq2Seq) dialogue generation work treats the KB query as an attention over the entire KB, without the guarantee that the generated entities are consistent with each other. In this paper, we propose a novel framework which queries the KB in two steps to improve the consistency of generated entities. In the first step, inspired by the observation that a response can usually be supported by a single KB row, we introduce a KB retrieval component which explicitly returns the most relevant KB row given a dialogue history. The retrieval result is further used to filter the irrelevant entities in a Seq2Seq response generation model to improve the consistency among the output entities. In the second step, we further perform the attention mechanism to address the most correlated KB column. Two methods are proposed to make the training feasible without labeled retrieval data, which include distant supervision and Gumbel-Softmax technique. Experiments on two publicly available task oriented dialog datasets show the effectiveness of our model by outperforming the baseline systems and producing entity-consistent responses.", "pdf_parse": { "paper_id": "D19-1013", "_pdf_hash": "", "abstract": [ { "text": "Querying the knowledge base (KB) has long been a challenge in the end-to-end taskoriented dialogue system. Previous sequenceto-sequence (Seq2Seq) dialogue generation work treats the KB query as an attention over the entire KB, without the guarantee that the generated entities are consistent with each other. In this paper, we propose a novel framework which queries the KB in two steps to improve the consistency of generated entities. 
In the first step, inspired by the observation that a response can usually be supported by a single KB row, we introduce a KB retrieval component which explicitly returns the most relevant KB row given a dialogue history. The retrieval result is further used to filter the irrelevant entities in a Seq2Seq response generation model to improve the consistency among the output entities. In the second step, we further perform the attention mechanism to address the most correlated KB column. Two methods are proposed to make the training feasible without labeled retrieval data, which include distant supervision and Gumbel-Softmax technique. Experiments on two publicly available task oriented dialog datasets show the effectiveness of our model by outperforming the baseline systems and producing entity-consistent responses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Task-oriented dialogue system, which helps users to achieve specific goals with natural language, is attracting more and more research attention. With the success of the sequence-to-sequence (Seq2Seq) models in text generation (Sutskever et al., 2014; Bahdanau et al., 2014; Luong et al., 2015a; Nallapati et al., 2016b,a) , several works tried to model the task-oriented dialogue as the Seq2Seq generation of response from the dialogue Figure 1 : An example of a task-oriented dialogue that incorporates a knowledge base (KB). The fourth row in KB supports the second turn of the dialogue. A dialogue system will produce a response with conflict entities if it includes the POI in the fourth row and the address in the fifth row, like \"Valero is located at 899 Ames Ct\".", "cite_spans": [ { "start": 227, "end": 251, "text": "(Sutskever et al., 2014;", "ref_id": "BIBREF22" }, { "start": 252, "end": 274, "text": "Bahdanau et al., 2014;", "ref_id": "BIBREF0" }, { "start": 275, "end": 295, "text": "Luong et al., 2015a;", "ref_id": "BIBREF11" }, { "start": 296, "end": 322, "text": "Nallapati et al., 2016b,a)", "ref_id": null } ], "ref_spans": [ { "start": 437, "end": 445, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "history Eric et al., 2017; Madotto et al., 2018) . This kind of modeling scheme frees the task-oriented dialogue system from the manually designed pipeline modules and heavy annotation labor for these modules. Different from typical text generation, the successful conversations for task-oriented dialogue system heavily depend on accurate knowledge base (KB) queries. Taking the dialogue in Figure 1 as an example, to answer the driver's query on the gas station, the dialogue system is required to retrieve the entities like \"200 Alester Ave\" and \"Valero\". 
For the task-oriented system based on Seq2Seq generation, there is a trend in recent study towards modeling the KB query as an attention network over the entire KB entity representations, hoping to learn a model to pay more attention to the relevant entities (Eric et al., 2017; Madotto et al., 2018; Reddy et al., 2018; Wen et al., 2018) .", "cite_spans": [ { "start": 8, "end": 26, "text": "Eric et al., 2017;", "ref_id": "BIBREF7" }, { "start": 27, "end": 48, "text": "Madotto et al., 2018)", "ref_id": "BIBREF13" }, { "start": 818, "end": 837, "text": "(Eric et al., 2017;", "ref_id": "BIBREF7" }, { "start": 838, "end": 859, "text": "Madotto et al., 2018;", "ref_id": "BIBREF13" }, { "start": 860, "end": 879, "text": "Reddy et al., 2018;", "ref_id": "BIBREF19" }, { "start": 880, "end": 897, "text": "Wen et al., 2018)", "ref_id": "BIBREF24" } ], "ref_spans": [ { "start": 392, "end": 400, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Though achieving good end-to-end dialogue generation with over-the-entire-KB attention mechanism, these methods do not guarantee the generation consistency regarding KB entities and sometimes yield responses with conflict entities, like \"Valero is located at 899 Ames Ct\" for the gas station query (as shown in Figure 1 ). In fact, the correct address for Valero is 200 Alester Ave. A consistent response is relatively easy to achieve for the conventional pipeline systems because they query the KB by issuing API calls (Bordes and Weston, 2017; Wen et al., 2017b,a) , and the returned entities, which typically come from a single KB row, are consistently related to the object (like the \"gas station\") that serves the user's request. This indicates that a response can usually be supported by a single KB row. It's promising to incorporate such observation into the Seq2Seq dialogue generation model, since it encourages KB relevant generation and avoids the model from producing responses with conflict entities.", "cite_spans": [ { "start": 520, "end": 545, "text": "(Bordes and Weston, 2017;", "ref_id": "BIBREF1" }, { "start": 546, "end": 566, "text": "Wen et al., 2017b,a)", "ref_id": null } ], "ref_spans": [ { "start": 311, "end": 319, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To achieve entity-consistent generation in the Seq2Seq task-oriented dialogue system, we propose a novel framework which query the KB in two steps. In the first step, we introduce a retrieval module -KB-retriever to explicitly query the KB. Inspired by the observation that a single KB row usually supports a response, given the dialogue history and a set of KB rows, the KB-retriever uses a memory network (Sukhbaatar et al., 2015) to select the most relevant row. The retrieval result is then fed into a Seq2Seq dialogue generation model to filter the irrelevant KB entities and improve the consistency within the generated entities. In the second step, we further perform attention mechanism to address the most correlated KB column. Finally, we adopt the copy mechanism to incorporate the retrieved KB entity.", "cite_spans": [ { "start": 407, "end": 432, "text": "(Sukhbaatar et al., 2015)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Since dialogue dataset is not typically annotated with the retrieval results, training the KB-retriever is non-trivial. 
To make the training feasible, we propose two methods: 1) we use a set of heuristics to derive the training data and train the retriever in a distant supervised fashion; 2) we use Gumbel-Softmax (Jang et al., 2017) as an approximation of the non-differentiable selecting process and train the retriever along with the Seq2Seq dialogue generation model. Experiments on two publicly available datasets (Camrest (Wen et al., 2017b) and InCar Assistant (Eric et al., 2017) ) confirm the effectiveness of the KB-retriever. Both the retrievers trained with distant-supervision and Gumbel-Softmax technique outperform the compared systems in the automatic and human evaluations. Analysis empirically verifies our assump-tion that more than 80% responses in the dataset can be supported by a single KB row and better retrieval results lead to better task-oriented dialogue generation performance.", "cite_spans": [ { "start": 315, "end": 334, "text": "(Jang et al., 2017)", "ref_id": "BIBREF7" }, { "start": 529, "end": 548, "text": "(Wen et al., 2017b)", "ref_id": "BIBREF26" }, { "start": 569, "end": 588, "text": "(Eric et al., 2017)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this section, we will describe the input and output of the end-to-end task-oriented dialogue system, and the definition of Seq2Seq task-oriented dialogue generation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition", "sec_num": "2" }, { "text": "Given a dialogue between a user (u) and a system (s), we follow Eric et al. (2017) and represent the k-turned dialogue utterances as", "cite_spans": [ { "start": 64, "end": 82, "text": "Eric et al. (2017)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Dialogue History", "sec_num": "2.1" }, { "text": "{(u 1 , s 1 ), (u 2 , s 2 ), ..., (u k , s k )}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dialogue History", "sec_num": "2.1" }, { "text": "At the i th turn of the dialogue, we aggregate dialogue context which consists of the tokens of (u 1 , s 1 , ..., s i\u22121 , u i ) and use x = (x 1 , x 2 , ..., x m ) to denote the whole dialogue history word by word, where m is the number of tokens in the dialogue history.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dialogue History", "sec_num": "2.1" }, { "text": "In this paper, we assume to have the access to a relational-database-like KB B, which consists of |R| rows and |C| columns. The value of entity in the j th row and the i th column is noted as v j,i .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge Base", "sec_num": "2.2" }, { "text": "We define the Seq2Seq task-oriented dialogue generation as finding the most likely response y according to the input dialogue history x and KB B. Formally, the probability of a response is defined as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Seq2Seq Dialogue Generation", "sec_num": "2.3" }, { "text": "p(y | x, B) = n t=1 p(y t | y 1 , ..., y t\u22121 , x, B),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Seq2Seq Dialogue Generation", "sec_num": "2.3" }, { "text": "where y t represents an output token.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Seq2Seq Dialogue Generation", "sec_num": "2.3" }, { "text": "In this section, we describe our framework for end-to-end task-oriented dialogues. 
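For concreteness, the factorized objective of Section 2.3 reduces to ordinary token-level cross-entropy over the response; a minimal sketch in PyTorch, where the tensor names and sizes are illustrative stand-ins rather than the actual model:

```python
import torch
import torch.nn.functional as F

vocab_size, response_len = 100, 7
logits = torch.randn(response_len, vocab_size)            # stand-in decoder scores, one per step
targets = torch.randint(0, vocab_size, (response_len,))   # stand-in gold response tokens y_1..y_n

# log p(y | x, B) = sum_t log p(y_t | y_1..y_{t-1}, x, B); training minimizes its negative.
log_probs = F.log_softmax(logits, dim=-1)
nll = -log_probs.gather(1, targets.unsqueeze(1)).sum()
# Equivalent shortcut: F.cross_entropy(logits, targets, reduction="sum")
print(nll.item())
```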
The architecture of our framework is demonstrated in Figure 2 , which consists of two major components: a memory network-based retriever and the Seq2Seq dialogue generation with the KB-retriever. Our framework first uses the KB-retriever to select the most relevant KB row and further filters the irrelevant entities in a Seq2Seq response generation model to improve the consistency among the output entities. While decoding, we further perform the attention mechanism to choose the most probable KB column. We will present the details of our framework in the following sections.", "cite_spans": [], "ref_spans": [ { "start": 136, "end": 144, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Our Framework", "sec_num": "3" }, { "text": "Figure 2: The workflow of our Seq2Seq task-oriented dialogue generation model with KB-retriever. For simplification, we draw the single-hop memory network instead of the multi-hop one we use in our model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our Framework", "sec_num": "3" }, { "text": "In our encoder, we adopt the bidirectional LSTM (Hochreiter and Schmidhuber, 1997, BiLSTM) to encode the dialogue history x, which captures temporal relationships within the sequence. The encoder first maps the tokens in x to vectors with the embedding function \u03c6 emb , and then the BiLSTM reads the vectors forward and backward to produce context-sensitive hidden states (h 1 , h 2 , ..., h m ) by repeatedly applying the recurrence", "cite_spans": [ { "start": 48, "end": 90, "text": "(Hochreiter and Schmidhuber, 1997, BiLSTM)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Encoder", "sec_num": "3.1" }, { "text": "h i = BiLSTM(\u03c6 emb (x i ), h i\u22121 ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder", "sec_num": "3.1" }, { "text": "Here, we follow Eric et al. (2017) to adopt the attention-based decoder to generate the response word by word. An LSTM is also used to represent the partially generated output sequence", "cite_spans": [ { "start": 16, "end": 34, "text": "Eric et al. 
(2017)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Vanilla Attention-based Decoder", "sec_num": "3.2" }, { "text": "(y 1 , y 2 , ..., y t\u22121 ) as (h 1 ,h 2 , ...,h t ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vanilla Attention-based Decoder", "sec_num": "3.2" }, { "text": "For the generation of next token y t , their model first calculates an attentive representationh t of the dialogue history as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vanilla Attention-based Decoder", "sec_num": "3.2" }, { "text": "u t i = W 2 tanh(W 1 [h i ,h t ]), a t i = softmax(u t i ), h t = m i=1 a t i \u2022 h i .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vanilla Attention-based Decoder", "sec_num": "3.2" }, { "text": "Then, the concatenation of the hidden representation of the partially outputted sequenceh t and the attentive dialogue history representationh t are projected to the vocabulary space V by U as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vanilla Attention-based Decoder", "sec_num": "3.2" }, { "text": "o t = U [h t ,h t ],", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vanilla Attention-based Decoder", "sec_num": "3.2" }, { "text": "to calculate the score (logit) for the next token generation. The probability of next token y t is finally calculated as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vanilla Attention-based Decoder", "sec_num": "3.2" }, { "text": "p(y t | y 1 , ..., y t\u22121 , x, B) = softmax(o t ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vanilla Attention-based Decoder", "sec_num": "3.2" }, { "text": "As shown in section 3.2, we can see that the generation of tokens are just based on the dialogue history attention, which makes the model ignorant to the KB entities. In this section, we present how to query the KB explicitly in two steps for improving the entity consistence, which first adopt the KBretriever to select the most relevant KB row and the generation of KB entities from the entitiesaugmented decoder is constrained to the entities within the most probable row, thus improve the entity generation consistency. Next, we perform the column attention to select the most probable KB column. Finally, we show how to use the copy mechanism to incorporate the retrieved entity while decoding.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entity-Consistency Augmented Decoder", "sec_num": "3.3" }, { "text": "In our framework, our KB-retriever takes the dialogue history and KB rows as inputs and selects the most relevant row. This selection process resembles the task of selecting one word from the inputs to answer questions (Sukhbaatar et al., 2015) , and we use a memory network to model this process. In the following sections, we will first describe how to represent the inputs, then we will talk about our memory network-based retriever Dialogue History Representation: We encode the dialogue history by adopting the neural bagof-words (BoW) followed the original paper (Sukhbaatar et al., 2015) . 
Each token in the dialogue history is mapped into a vector by another embedding function \u03c6 emb (x) and the dialogue history representation q is computed as the sum of these vectors:", "cite_spans": [ { "start": 219, "end": 244, "text": "(Sukhbaatar et al., 2015)", "ref_id": "BIBREF21" }, { "start": 569, "end": 594, "text": "(Sukhbaatar et al., 2015)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "KB Row Selection", "sec_num": "3.3.1" }, { "text": "q = m i=1 \u03c6 emb (x i ). KB Row Representation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KB Row Selection", "sec_num": "3.3.1" }, { "text": "In this section, we describe how to encode the KB row. Each KB cell is represented as the cell value v embedding as c j,k = \u03c6 value (v j,k ), and the neural BoW is also used to represent a KB row r j as r j = |C| k=1 c j,k . Memory Network-Based Retriever: We model the KB retrieval process as selecting the row that most-likely supports the response generation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KB Row Selection", "sec_num": "3.3.1" }, { "text": "Memory network (Sukhbaatar et al., 2015) has shown to be effective to model this kind of selection. For a n-hop memory network, the model keeps a set of input matrices", "cite_spans": [ { "start": 15, "end": 40, "text": "(Sukhbaatar et al., 2015)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "KB Row Selection", "sec_num": "3.3.1" }, { "text": "{R 1 , R 2 , ..., R n+1 }, where each R i is a stack of |R| inputs (r i 1 , r i 2 , ..., r i |R| ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KB Row Selection", "sec_num": "3.3.1" }, { "text": "The model also keeps query q 1 as the input. A single hop memory network computes the probability a j of selecting the j th input as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KB Row Selection", "sec_num": "3.3.1" }, { "text": "\u03c0 1 = softmax((q 1 ) T R 1 ), o 1 = i \u03c0 1 i r 2 i , a = softmax(W mem (o 1 + q 1 )).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KB Row Selection", "sec_num": "3.3.1" }, { "text": "For the multi-hop cases, layers of single hop memory network are stacked and the query of the (i + 1) th layer network is computed as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KB Row Selection", "sec_num": "3.3.1" }, { "text": "q i+1 = q i + o i ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KB Row Selection", "sec_num": "3.3.1" }, { "text": "and the output of the last layer is used as the output of the whole network. 
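To make the multi-hop retriever concrete, a minimal sketch assuming PyTorch. The bag-of-words encodings and one embedding matrix per hop follow the description above; the final scoring step is a simplification (the paper applies W mem to o + q, here folded into a dot product with the row memories so the module handles a variable number of rows):

```python
import torch
import torch.nn as nn

class KBRowRetriever(nn.Module):
    """Multi-hop memory network that scores KB rows against the dialogue history."""
    def __init__(self, vocab_size, dim, hops=3):
        super().__init__()
        # One embedding matrix per hop plus one output memory, mirroring {R^1, ..., R^{n+1}}.
        self.embeds = nn.ModuleList([nn.Embedding(vocab_size, dim) for _ in range(hops + 1)])
        self.w_mem = nn.Linear(dim, dim)
        self.hops = hops

    def forward(self, history_ids, kb_cell_ids):
        # history_ids: (m,) token ids; kb_cell_ids: (num_rows, num_cols) cell-value ids.
        q = self.embeds[0](history_ids).sum(dim=0)              # BoW query q^1, shape (dim,)
        for i in range(self.hops):
            rows_in = self.embeds[i](kb_cell_ids).sum(dim=1)    # (num_rows, dim) row BoW r_j
            rows_out = self.embeds[i + 1](kb_cell_ids).sum(dim=1)
            pi = torch.softmax(rows_in @ q, dim=0)              # attention over rows
            o = pi @ rows_out                                   # read vector o^i
            q = q + o                                           # q^{i+1} = q^i + o^i
        rows_in = self.embeds[self.hops](kb_cell_ids).sum(dim=1)
        return torch.softmax(rows_in @ self.w_mem(q), dim=0)    # row distribution a

retriever = KBRowRetriever(vocab_size=100, dim=16, hops=3)
row_dist = retriever(torch.randint(0, 100, (12,)),   # dialogue history token ids
                     torch.randint(0, 100, (6, 4)))  # 6 KB rows x 4 columns
print(row_dist.shape)  # torch.Size([6]); the argmax gives the most relevant row
```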
For more details about memory network, please refer to the original paper (Sukhbaatar et al., 2015) .", "cite_spans": [ { "start": 151, "end": 176, "text": "(Sukhbaatar et al., 2015)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "KB Row Selection", "sec_num": "3.3.1" }, { "text": "After getting a, we represent the retrieval results as a 0-1 matrix T \u2208 {0, 1} |R|\u00d7|C| , where each element in T is calculated as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KB Row Selection", "sec_num": "3.3.1" }, { "text": "T j, * = 1[j = argmax i a i ].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KB Row Selection", "sec_num": "3.3.1" }, { "text": "(1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KB Row Selection", "sec_num": "3.3.1" }, { "text": "In the retrieval result, T j,k indicates whether the entity in the j th row and the k th column is relevant to the final generation of the response. In this paper, we further flatten T to a 0-1 vector t \u2208 {0, 1} |E| (where |E| equals |R| \u00d7 |C|) as our retrieval row results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KB Row Selection", "sec_num": "3.3.1" }, { "text": "After getting the retrieved row result that indicates which KB row is the most relevant to the generation, we further perform column attention in decoding time to select the probable KB column. For our KB column selection, following the Eric et al. (2017) we use the decoder hidden state (h 1 ,h 2 , ...,h t ) to compute an attention score with the embedding of column attribute name. The attention score c \u2208 R |E| then become the logits of the column be selected, which can be calculated as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KB Column Selection", "sec_num": "3.3.2" }, { "text": "c j = W 2 tanh(W 1 [k j ,h t ]),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KB Column Selection", "sec_num": "3.3.2" }, { "text": "where c j is the attention score of the j th KB column, k j is represented with the embedding of word embedding of KB column name. W 1 , W 2 and t T are trainable parameters of the model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KB Column Selection", "sec_num": "3.3.2" }, { "text": "After the row selection and column selection, we can define the final retrieved KB entity score as the element-wise dot between the row retriever result and the column selection score, which can be calculated as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoder with Retrieved Entity", "sec_num": "3.3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "v t = t * c,", "eq_num": "(2)" } ], "section": "Decoder with Retrieved Entity", "sec_num": "3.3.3" }, { "text": "where the v t indicates the final KB retrieved entity score. Finally, we follow Eric et al. (2017) to use copy mechanism to incorporate the retrieved entity, which can be defined as", "cite_spans": [ { "start": 80, "end": 98, "text": "Eric et al. (2017)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Decoder with Retrieved Entity", "sec_num": "3.3.3" }, { "text": "o t = U [h t ,h t ] + v t , where o t 's dimensionality is |V| +|E|. 
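Putting Equation 1, Equation 2, and the copy-style logit together, a small numerical sketch with hypothetical sizes; expanding the per-column scores to cell level is one straightforward reading of c \u2208 R |E| :

```python
import torch

num_rows, num_cols, vocab_size = 6, 4, 100
num_entities = num_rows * num_cols                      # |E| = |R| x |C|

a = torch.softmax(torch.randn(num_rows), dim=0)         # row distribution from the KB-retriever
col_scores = torch.randn(num_cols)                      # column attention scores c_j at step t
proj_logits = torch.randn(vocab_size + num_entities)    # U[h_t, ~h_t], dimensionality |V| + |E|

# Equation 1: keep only the most relevant row, then flatten the 0-1 matrix T to t.
T = torch.zeros(num_rows, num_cols)
T[a.argmax()] = 1.0
t_flat = T.reshape(-1)                                  # length |E|

# Equation 2: element-wise product of the row mask and the cell-level column scores.
v_entity = t_flat * col_scores.repeat(num_rows)         # length |E|

# v_t is zero on the first |V| positions and holds the retrieved-entity scores on the rest.
v_t = torch.cat([torch.zeros(vocab_size), v_entity])
o_t = proj_logits + v_t                                 # copy-augmented logits
p_next = torch.softmax(o_t, dim=0)                      # distribution over words and KB entities
print(p_next.shape)                                     # torch.Size([124])
```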
In v t , lower", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoder with Retrieved Entity", "sec_num": "3.3.3" }, { "text": "|V| is zero and the rest|E| is retrieved entity scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoder with Retrieved Entity", "sec_num": "3.3.3" }, { "text": "As mentioned in section 3.3.1, we adopt the memory network to train our KB-retriever. However, in the Seq2Seq dialogue generation, the training data does not include the annotated KB row retrieval results, which makes supervised training the KBretriever impossible. To tackle this problem, we propose two training methods for our KB-rowretriever. 1) In the first method, inspired by the recent success of distant supervision in information extraction (Zeng et al., 2015; Mintz et al., 2009; Min et al., 2013; Xu et al., 2013) , we take advantage of the similarity between the surface string of KB entries and the reference response, and design a set of heuristics to extract training data for the KB-retriever. 2) In the second method, instead of training the KB-retriever as an independent component, we train it along with the training of the Seq2Seq dialogue generation. To make the retrieval process in Equation 1 differentiable, we use Gumbel-Softmax (Jang et al., 2017) as an approximation of the argmax during training.", "cite_spans": [ { "start": 451, "end": 470, "text": "(Zeng et al., 2015;", "ref_id": "BIBREF29" }, { "start": 471, "end": 490, "text": "Mintz et al., 2009;", "ref_id": "BIBREF15" }, { "start": 491, "end": 508, "text": "Min et al., 2013;", "ref_id": "BIBREF14" }, { "start": 509, "end": 525, "text": "Xu et al., 2013)", "ref_id": "BIBREF28" }, { "start": 956, "end": 975, "text": "(Jang et al., 2017)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Training the KB-Retriever", "sec_num": "4" }, { "text": "Although it's difficult to obtain the annotated retrieval data for the KB-retriever, we can \"guess\" the most relevant KB row from the reference response, and then obtain the weakly labeled data for the retriever. Intuitively, for the current utterance in the same dialogue which usually belongs to one topic and the KB row that contains the largest number of entities mentioned in the whole dialogue should support the utterance. In our training with distant supervision, we further simplify our assumption and assume that one dialogue which is usually belongs to one topic and can be supported by the most relevant KB row, which means for a k-turned dialogue, we construct k pairs of training instances for the retriever and all the inputs", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training with Distant Supervision", "sec_num": "4.1" }, { "text": "(u 1 , s 1 , ..., s i\u22121 , u i | i \u2264 k)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training with Distant Supervision", "sec_num": "4.1" }, { "text": "are associated with the same weakly labeled KB retrieval result T * . In this paper, we compute each row's similarity to the whole dialogue and choose the most similar row as T * . 
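A rough sketch of this weak-labeling heuristic, using the matched-span similarity defined in the next sentence (the number of a row's entity values that appear verbatim in the dialogue); the dictionary-style KB rows and the first-match tie-breaking are simplifying assumptions:

```python
def row_similarity(row, dialogue_text):
    """Count how many of the row's entity values appear in the dialogue (surface match)."""
    return sum(1 for value in row.values() if value.lower() in dialogue_text.lower())

def weak_label(kb_rows, dialogue_turns):
    """Pick the KB row most similar to the whole dialogue as the weak retrieval label T*."""
    dialogue_text = " ".join(dialogue_turns)
    sims = [row_similarity(row, dialogue_text) for row in kb_rows]
    return max(range(len(kb_rows)), key=lambda j: sims[j])

kb_rows = [
    {"poi": "Valero", "address": "200 Alester Ave", "type": "gas station"},
    {"poi": "Home", "address": "899 Ames Ct", "type": "home"},
]
dialogue = ["where is the closest gas station",
            "Valero is 4 miles away at 200 Alester Ave"]
print(weak_label(kb_rows, dialogue))  # 0 -> the Valero row supports this dialogue
```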
We define the similarity of each row as the number of matched spans with the surface form of the entities in the row.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training with Distant Supervision", "sec_num": "4.1" }, { "text": "Taking the dialogue in Figure 1 for an example, the similarity of the 4 th row equals to 4 with \"200 Alester Ave\", \"gas station\", \"Valero\", and \"road block nearby\" matching the dialogue context; and the similarity of the 7 th row equals to 1 with only \"road block nearby\" matching.", "cite_spans": [], "ref_spans": [ { "start": 23, "end": 31, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Training with Distant Supervision", "sec_num": "4.1" }, { "text": "In our model with the distantly supervised retriever, the retrieval results serve as the input for the Seq2Seq generation. During training the Seq2Seq generation, we use the weakly labeled retrieval result T * as the input.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training with Distant Supervision", "sec_num": "4.1" }, { "text": "In addition to treating the row retrieval result as an input to the generation model, and training the kbrow-retriever independently, we can train it along with the training of the Seq2Seq dialogue generation in an end-to-end fashion. The major difficulty of such a training scheme is that the discrete retrieval result is not differentiable and the training signal from the generation model cannot be passed to the parameters of the retriever. Gumbel-softmax technique (Jang et al., 2017) has been shown an effective approximation to the discrete variable and proved to work in sentence representation. In this paper, we adopt the Gumbel-Softmax technique to train the KB retriever. We use", "cite_spans": [ { "start": 470, "end": 489, "text": "(Jang et al., 2017)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Training with Gumbel-Softmax", "sec_num": "4.2" }, { "text": "T approx j, * = exp((log(a j ) + g j )/\u03c4 ) i exp((log(a i ) + g i )/\u03c4 ) ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training with Gumbel-Softmax", "sec_num": "4.2" }, { "text": "as the approximation of T , where g j are i.i.d samples drawn from Gumbel(0, 1) 1 and \u03c4 is a constant that controls the smoothness of the distribution. T approx j", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training with Gumbel-Softmax", "sec_num": "4.2" }, { "text": "replaces T j in equation 1 and goes through the same flattening and expanding process as V to get v t approx and the training signal from Seq2Seq generation is passed via the logit", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training with Gumbel-Softmax", "sec_num": "4.2" }, { "text": "o approx t = U [h t ,h t ] + v t approx .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training with Gumbel-Softmax", "sec_num": "4.2" }, { "text": "To make training with Gumbel-Softmax more stable, we first initialize the parameters by pretraining the KB-retriever with distant supervision and further fine-tuning our framework.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training with Gumbel-Softmax", "sec_num": "4.2" }, { "text": "We choose the InCar Assistant dataset (Eric et al., 2017) including three distinct domains: navigation, weather and calendar domain. For weather domain, we follow Wen et al. 
(2018) to separate the highest temperature, lowest temperature and weather attribute into three different columns. For the calendar domain, some dialogues come without a KB or with an incomplete KB; in these cases, we pad the incomplete KBs with a special token \"-\". Our framework is trained separately on these three domains, using the same train/validation/test split as Eric et al. (2017) . 2 To justify the generalization of the proposed model, we also use another public CamRest dataset (Wen et al., 2017b) and partition it into training, validation and testing sets in the ratio 3:1:1. 3 In particular, we hired human experts to format the CamRest dataset by equipping every dialogue with its corresponding KB.", "cite_spans": [ { "start": 38, "end": 57, "text": "(Eric et al., 2017)", "ref_id": "BIBREF7" }, { "start": 163, "end": 180, "text": "Wen et al. (2018)", "ref_id": "BIBREF24" }, { "start": 547, "end": 565, "text": "Eric et al. (2017)", "ref_id": "BIBREF7" }, { "start": 568, "end": 569, "text": "2", "ref_id": null }, { "start": 666, "end": 685, "text": "(Wen et al., 2017b)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Settings", "sec_num": "4.3" }, { "text": "All hyper-parameters are selected according to the validation set. We use a three-hop memory network to model our KB-retriever. The dimensionality of the embeddings is selected from {100, 200} and the number of LSTM hidden units is selected from {50, 100, 150, 200, 350}. The dropout rate we use in our framework is selected from {0.25, 0.5, 0.75} and the batch size from {1, 2}. L2 regularization with a coefficient of 5 \u00d7 10 \u22126 is used on our model to reduce overfitting. For training the retriever with distant supervision, we adopt the weight tying trick (Liu and Perez, 2017) . 
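For the end-to-end variant of Section 4.2, the Gumbel-Softmax relaxation is available directly in PyTorch; a minimal illustration, where the logits and the temperature \u03c4 = 0.5 are made up for the example:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
a = torch.tensor([0.1, 0.7, 0.2])                 # row distribution a from the KB-retriever
row_logits = torch.log(a)                         # log a_j, as in the T_approx definition

# Soft, differentiable approximation of the one-hot row selection used during training.
t_approx = F.gumbel_softmax(row_logits, tau=0.5, hard=False)

# hard=True yields a straight-through variant: one-hot forward pass, soft gradients.
t_st = F.gumbel_softmax(row_logits, tau=0.5, hard=True)

# At test time the hard argmax of Equation 1 is used instead.
t_hard = F.one_hot(a.argmax(), num_classes=a.numel()).float()
print(t_approx, t_st, t_hard)
```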
We use Adam (Kingma and Ba, 2014) to optimize the parameters in our model and adopt the suggested hyper-parameters for optimization.", "cite_spans": [ { "start": 558, "end": 579, "text": "(Liu and Perez, 2017)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Settings", "sec_num": "4.3" }, { "text": "We adopt both the automatic and human evaluations in our experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Settings", "sec_num": "4.3" }, { "text": "We compare our model with several baselines including:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Models", "sec_num": "4.4" }, { "text": "\u2022 Attn seq2seq (Luong et al., 2015b) : A model with simple attention over the input context at each time step during decoding.", "cite_spans": [ { "start": 15, "end": 36, "text": "(Luong et al., 2015b)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline Models", "sec_num": "4.4" }, { "text": "\u2022 Ptr-UNK (Gulcehre et al., 2016) : Ptr-UNK is the model which augments a sequenceto-sequence architecture with attention-based copy mechanism over the encoder context.", "cite_spans": [ { "start": 10, "end": 33, "text": "(Gulcehre et al., 2016)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline Models", "sec_num": "4.4" }, { "text": "\u2022 KV Net (Eric et al., 2017) : The model adopted and argumented decoder which decodes over the concatenation of vocabulary and KB entities, which allows the model to generate entities.", "cite_spans": [ { "start": 9, "end": 28, "text": "(Eric et al., 2017)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline Models", "sec_num": "4.4" }, { "text": "\u2022 Mem2Seq (Madotto et al., 2018) : Mem2Seq is the model that takes dialogue history and KB entities as input and uses a pointer gate to control either generating a vocabulary word or selecting an input as the output.", "cite_spans": [ { "start": 10, "end": 32, "text": "(Madotto et al., 2018)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline Models", "sec_num": "4.4" }, { "text": "\u2022 DSR (Wen et al., 2018) : DSR leveraged dialogue state representation to retrieve the KB implicitly and applied copying mechanism to retrieve entities from knowledge base while decoding.", "cite_spans": [ { "start": 6, "end": 24, "text": "(Wen et al., 2018)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline Models", "sec_num": "4.4" }, { "text": "In InCar dataset, for the Attn seq2seq, Ptr-UNK and Mem2seq, we adopt the reported results from Madotto et al. (2018) . In CamRest dataset, for the Mem2Seq, we adopt their open-sourced code to get the results while for the DSR, we run their code on the same dataset to obtain the results. 4", "cite_spans": [ { "start": 96, "end": 117, "text": "Madotto et al. (2018)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline Models", "sec_num": "4.4" }, { "text": "Follow the prior works (Eric et al., 2017; Madotto et al., 2018; Wen et al., 2018) , we adopt the BLEU and the Micro Entity F1 to evaluate our model performance. The experimental results are illustrated in Table 1 . In the first block of Table 1 , we show the Human, rule-based and KV Net (with*) result which are reported from Eric et al. (2017) . 
We argue that their results are not directly comparable because their work uses the entities in their canonicalized forms, so the scores are not calculated based on real entity values. Notably, our framework with both training methods still outperforms KV Net on the InCar dataset on the overall BLEU and Entity F1 metrics, which demonstrates the effectiveness of our framework. (Eric et al., 2017) 6.6 43.8 40.4 39.5 61.3 --KV Net* (Eric et al., 2017) 13.2 48.0 41.3 47.0 62.9 --Attn seq2seq (Luong et al., 2015b) 9.3 11.9 10.8 25.6 23.4 --Ptr-UNK (Gulcehre et al., 2016) 8.3 22.7 14.9 26.7 26.9 --Mem2Seq (Madotto et al., 2018) 12.6 33.4 20.0 32.8 49.3 16.6 42.4 DSR (Wen et al., 2018) 12.7 51.9 52.0 50.4 In the second block of Table 1 , we can see that our framework trained with either distant supervision or Gumbel-Softmax beats all existing models on the two datasets. Our model outperforms each baseline on both the BLEU and F1 metrics. On the InCar dataset, our model with Gumbel-Softmax has the highest BLEU among all compared models, which shows that our framework can generate more fluent responses. In particular, our framework achieves a 2.5% improvement on the navigation domain, a 1.8% improvement on the weather domain and a 3.5% improvement on the calendar domain on the F1 metric, which indicates the effectiveness of our KB-retriever module: our framework retrieves more correct entities from the KB. On the CamRest dataset, the same trend of improvement is observed, which further shows the effectiveness of our framework.", "cite_spans": [ { "start": 23, "end": 42, "text": "(Eric et al., 2017;", "ref_id": "BIBREF7" }, { "start": 43, "end": 64, "text": "Madotto et al., 2018;", "ref_id": "BIBREF13" }, { "start": 65, "end": 82, "text": "Wen et al., 2018)", "ref_id": "BIBREF24" }, { "start": 328, "end": 346, "text": "Eric et al. (2017)", "ref_id": "BIBREF7" }, { "start": 708, "end": 727, "text": "(Eric et al., 2017)", "ref_id": "BIBREF7" }, { "start": 762, "end": 781, "text": "(Eric et al., 2017)", "ref_id": "BIBREF7" }, { "start": 822, "end": 843, "text": "(Luong et al., 2015b)", "ref_id": "BIBREF12" }, { "start": 878, "end": 901, "text": "(Gulcehre et al., 2016)", "ref_id": "BIBREF5" }, { "start": 936, "end": 958, "text": "(Madotto et al., 2018)", "ref_id": "BIBREF13" }, { "start": 998, "end": 1016, "text": "(Wen et al., 2018)", "ref_id": "BIBREF24" } ], "ref_spans": [ { "start": 206, "end": 213, "text": "Table 1", "ref_id": null }, { "start": 238, "end": 245, "text": "Table 1", "ref_id": null }, { "start": 1060, "end": 1067, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "Besides, we observe that the model trained with Gumbel-Softmax outperforms the one trained with the distant supervision method. We attribute this to the fact that the KB-retriever and the Seq2Seq module are fine-tuned in an end-to-end fashion, which can refine the KB-retriever and further promote the dialogue generation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "In this section, we verify our assumption by examining the proportion of responses that can be supported by a single row. We define a response as supported by the most relevant KB row if all the entities in the response are included in that row. We study the proportion of these responses over the test set. The number is 95% for the navigation domain, 90% for the CamRest dataset and 80% for the weather domain. This confirms our assumption that most responses can be supported by the relevant KB row. 
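A small sketch of how this proportion can be computed, assuming each test response comes with the set of entity values it mentions; names and data are hypothetical:

```python
def supported_by_single_row(response_entities, kb_rows):
    """True if some KB row contains every entity mentioned in the response."""
    if not response_entities:
        return True
    return any(all(e in row.values() for e in response_entities) for row in kb_rows)

examples = [
    (["Valero", "200 Alester Ave"], [{"poi": "Valero", "address": "200 Alester Ave"},
                                     {"poi": "Home", "address": "899 Ames Ct"}]),
    (["Valero", "899 Ames Ct"],     [{"poi": "Valero", "address": "200 Alester Ave"},
                                     {"poi": "Home", "address": "899 Ames Ct"}]),
]
covered = sum(supported_by_single_row(ents, rows) for ents, rows in examples)
print(covered / len(examples))  # 0.5 -> only the first (consistent) response is supported
```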
Correctly retrieving the supporting row should be beneficial.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The proportion of responses that can be supported by a single KB row", "sec_num": "5.1" }, { "text": "We further study the weather domain to examine the remaining 20% of exceptions. Instead of being supported by multiple rows, most of these exceptions cannot be supported by any KB row. For example, there is one case whose reference response is \"It 's not rainy today\", while the related KB entity is sunny. These cases pose challenges beyond the scope of this paper. If we count such cases as being supported by a single row, the proportion in the weather domain rises to 99%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The proportion of responses that can be supported by a single KB row", "sec_num": "5.1" }, { "text": "In this paper, we expect consistent generation from our model. To verify this, we compute the consistency recall of the utterances that contain multiple entities. An utterance is considered consistent if it has multiple entities and these entities belong to the same row, which we annotated with distant supervision.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generation Consistency", "sec_num": "5.2" }, { "text": "The consistency result is shown in Table 2 . From this table, we can see that incorporating the retriever in the dialogue generation improves the consistency.", "cite_spans": [], "ref_spans": [ { "start": 35, "end": 42, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Generation Consistency", "sec_num": "5.2" }, { "text": "To further explore the correlation between the number of KB rows and generation consistency, we conduct experiments in the distant supervision setting with KBs of different sizes. We choose KBs whose number of rows ranges from 1 to 5 for the generation. From Figure 3 , as the number of KB rows increases, we can see a decrease in generation consistency. This indicates that irrelevant information harms the dialogue generation consistency.", "cite_spans": [], "ref_spans": [ { "start": 311, "end": 319, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Correlation between the number of KB rows and generation consistency", "sec_num": "5.3" }, { "text": "To gain more insight into how our retriever module influences the whole KB score distribution, we visualize the KB entity probability at the decoding position where we generate the entity 200 Alester Ave. From the example (Fig 4) , we can see that the 4 th row and the 1 st column have the highest probabilities for generating 200 Alester Ave, which verifies the effectiveness of first selecting the most relevant KB row and then selecting the most relevant KB column.", "cite_spans": [], "ref_spans": [ { "start": 228, "end": 235, "text": "(Fig 4)", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Visualization", "sec_num": "5.4" }, { "text": "We provide human evaluation on our framework and the compared models. These responses are based on distinct dialogue histories. We hire several human experts and ask them to judge the quality of the responses according to correctness, fluency, and human-likeness on a scale from 1 to 5. 
In each judgment, the expert is presented with the dialogue history, an output of a system with the name anonymized, and the gold response.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human Evaluation", "sec_num": "5.5" }, { "text": "The evaluation results are illustrated in Table 2 . Our framework outperforms other baseline models on all metrics according to Table 2 . The most significant improvement is from correctness, indicating that our model can retrieve accurate entity from KB and generate more informative information that the users want to know.", "cite_spans": [], "ref_spans": [ { "start": 42, "end": 49, "text": "Table 2", "ref_id": "TABREF4" }, { "start": 128, "end": 135, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Human Evaluation", "sec_num": "5.5" }, { "text": "Sequence-to-sequence (Seq2Seq) models in text generation (Sutskever et al., 2014; Bahdanau et al., 2014; Luong et al., 2015a; Nallapati et al., 2016b,a) 2015; Serban et al., 2016) in the end-to-end training method. Recently, the Seq2Seq can be used for learning task oriented dialogs and how to query the structured KB is the remaining challenges.", "cite_spans": [ { "start": 57, "end": 81, "text": "(Sutskever et al., 2014;", "ref_id": "BIBREF22" }, { "start": 82, "end": 104, "text": "Bahdanau et al., 2014;", "ref_id": "BIBREF0" }, { "start": 105, "end": 125, "text": "Luong et al., 2015a;", "ref_id": "BIBREF11" }, { "start": 126, "end": 152, "text": "Nallapati et al., 2016b,a)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Properly querying the KB has long been a challenge in the task-oriented dialogue system. In the pipeline system, the KB query is strongly correlated with the design of language understanding, state tracking, and policy management. Typically, after obtaining the dialogue state, the policy management module issues an API call accordingly to query the KB. With the development of neural network in natural language processing, efforts have been made to replacing the discrete and predefined dialogue state with the distributed representation (Bordes and Weston, 2017; Wen et al., 2017b,a; Liu and Lane, 2017) . In our framework, our retrieval result can be treated as a numeric representation of the API call return.", "cite_spans": [ { "start": 541, "end": 566, "text": "(Bordes and Weston, 2017;", "ref_id": "BIBREF1" }, { "start": 567, "end": 587, "text": "Wen et al., 2017b,a;", "ref_id": null }, { "start": 588, "end": 607, "text": "Liu and Lane, 2017)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Instead of interacting with the KB via API calls, more and more recent works tried to incorporate KB query as a part of the model. The most popular way of modeling KB query is treating it as an attention network over the entire KB entities (Eric et al., 2017; Dhingra et al., 2017; Reddy et al., 2018; Raghu et al., 2019; Wu et al., 2019) and the return can be a fuzzy summation of the entity representations. Madotto et al. (2018) 's practice of modeling the KB query with memory network can also be considered as learning an attentive prefer-ence over these entities. Wen et al. (2018) propose the implicit dialogue state representation to query the KB and achieve the promising performance. 
Different from their modes, we propose the KB-retriever to explicitly query the KB, and the query result is used to filter the irrelevant entities in the dialogue generation to improve the consistency among the output entities.", "cite_spans": [ { "start": 240, "end": 259, "text": "(Eric et al., 2017;", "ref_id": "BIBREF7" }, { "start": 260, "end": 281, "text": "Dhingra et al., 2017;", "ref_id": "BIBREF2" }, { "start": 282, "end": 301, "text": "Reddy et al., 2018;", "ref_id": "BIBREF19" }, { "start": 302, "end": 321, "text": "Raghu et al., 2019;", "ref_id": "BIBREF18" }, { "start": 322, "end": 338, "text": "Wu et al., 2019)", "ref_id": "BIBREF27" }, { "start": 410, "end": 431, "text": "Madotto et al. (2018)", "ref_id": "BIBREF13" }, { "start": 570, "end": 587, "text": "Wen et al. (2018)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "In this paper, we propose a novel framework to improve entities consistency by querying KB in two steps. In the first step, inspired by the observation that a response can usually be supported by a single KB row, we introduce the KB retriever to return the most relevant KB row, which is used to filter the irrelevant KB entities and encourage consistent generation. In the second step, we further perform attention mechanism to select the most relevant KB column. Experimental results show the effectiveness of our method. Extensive analysis further confirms the observation and reveal the correlation between the success of KB query and the success of task-oriented dialogue generation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "We sample g by drawing u \u223c Uniform(0, 1) then computing g = \u2212 log(\u2212 log(u)).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We obtain the BLEU and Entity F1 score on the whole InCar dataset by mixing all generated response and evaluating them together.3 The dataset can be available at: https://github. com/yizhen20133868/Retriever-Dialogue", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We adopt the same pre-processed dataset fromMadotto et al. (2018). We can find that experimental results is slightly different with their reported performance(Wen et al., 2018) because of their different tokenized utterances and normalization for entities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank the anonymous reviewers for their helpful comments and suggestions. This work was supported by the National Natural Science Foundation of China (NSFC) via grant 61976072, 61632011 and 61772153.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1409.0473" ] }, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. 
arXiv preprint arXiv:1409.0473.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Learning end-to-end goal-oriented dialog", "authors": [ { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2017, "venue": "Proc. of ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Antoine Bordes and Jason Weston. 2017. Learning end-to-end goal-oriented dialog. In Proc. of ICLR.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Towards end-to-end reinforcement learning of dialogue agents for information access", "authors": [ { "first": "Bhuwan", "middle": [], "last": "Dhingra", "suffix": "" }, { "first": "Lihong", "middle": [], "last": "Li", "suffix": "" }, { "first": "Xiujun", "middle": [], "last": "Li", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Yun-Nung", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Faisal", "middle": [], "last": "Ahmed", "suffix": "" }, { "first": "Li", "middle": [], "last": "Deng", "suffix": "" } ], "year": 2017, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bhuwan Dhingra, Lihong Li, Xiujun Li, Jianfeng Gao, Yun-Nung Chen, Faisal Ahmed, and Li Deng. 2017. Towards end-to-end reinforcement learning of dia- logue agents for information access. In Proc. of ACL.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Key-value retrieval networks for task-oriented dialogue", "authors": [ { "first": "Mihail", "middle": [], "last": "Eric", "suffix": "" }, { "first": "Lakshmi", "middle": [], "last": "Krishnan", "suffix": "" }, { "first": "Francois", "middle": [], "last": "Charette", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2017, "venue": "Proc. of SIGDial", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mihail Eric, Lakshmi Krishnan, Francois Charette, and Christopher D Manning. 2017. Key-value retrieval networks for task-oriented dialogue. In Proc. of SIGDial.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A copyaugmented sequence-to-sequence architecture gives good performance on task-oriented dialogue", "authors": [ { "first": "Mihail", "middle": [], "last": "Eric", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2017, "venue": "Proc. of EACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mihail Eric and Christopher Manning. 2017. A copy- augmented sequence-to-sequence architecture gives good performance on task-oriented dialogue. In Proc. of EACL.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Pointing the unknown words", "authors": [ { "first": "Caglar", "middle": [], "last": "Gulcehre", "suffix": "" }, { "first": "Sungjin", "middle": [], "last": "Ahn", "suffix": "" }, { "first": "Ramesh", "middle": [], "last": "Nallapati", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2016, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. In Proc. 
of ACL.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural computation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Categorical reparameterization with gumbel-softmax", "authors": [ { "first": "Eric", "middle": [], "last": "Jang", "suffix": "" }, { "first": "Shixiang", "middle": [], "last": "Gu", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Poole", "suffix": "" } ], "year": 2017, "venue": "ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric Jang, Shixiang Gu, and Ben Poole. 2017. Cate- gorical reparameterization with gumbel-softmax. In ICLR.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.6980" ] }, "num": null, "urls": [], "raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "An end-to-end trainable neural network model with belief tracking for taskoriented dialog", "authors": [ { "first": "Bing", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Lane", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bing Liu and Ian Lane. 2017. An end-to-end trainable neural network model with belief tracking for task- oriented dialog. In Interspeech 2017.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Gated end-to-end memory networks", "authors": [ { "first": "Fei", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Perez", "suffix": "" } ], "year": 2017, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fei Liu and Julien Perez. 2017. Gated end-to-end memory networks. In Proc. of ACL.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Effective approaches to attentionbased neural machine translation", "authors": [ { "first": "Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Pham", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proc. of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thang Luong, Hieu Pham, and Christopher D. Man- ning. 2015a. Effective approaches to attention- based neural machine translation. In Proc. 
of EMNLP.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Effective approaches to attentionbased neural machine translation", "authors": [ { "first": "Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Pham", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proc. of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thang Luong, Hieu Pham, and Christopher D. Man- ning. 2015b. Effective approaches to attention- based neural machine translation. In Proc. of EMNLP.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Mem2seq: Effectively incorporating knowledge bases into end-to-end task-oriented dialog systems", "authors": [ { "first": "Andrea", "middle": [], "last": "Madotto", "suffix": "" }, { "first": "Chien-Sheng", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Pascale", "middle": [], "last": "Fung", "suffix": "" } ], "year": 2018, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrea Madotto, Chien-Sheng Wu, and Pascale Fung. 2018. Mem2seq: Effectively incorporating knowl- edge bases into end-to-end task-oriented dialog sys- tems. In Proc. of ACL.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Distant supervision for relation extraction with an incomplete knowledge base", "authors": [ { "first": "Bonan", "middle": [], "last": "Min", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" }, { "first": "Li", "middle": [], "last": "Wan", "suffix": "" }, { "first": "Chang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "David", "middle": [], "last": "Gondek", "suffix": "" } ], "year": 2013, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bonan Min, Ralph Grishman, Li Wan, Chang Wang, and David Gondek. 2013. Distant supervision for relation extraction with an incomplete knowledge base. In Proc. of ACL.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Distant supervision for relation extraction without labeled data", "authors": [ { "first": "Mike", "middle": [], "last": "Mintz", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bills", "suffix": "" } ], "year": 2009, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mike Mintz, Steven Bills, Rion Snow, and Dan Juraf- sky. 2009. Distant supervision for relation extrac- tion without labeled data. In Proc. of ACL.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Sequence-to-sequence rnns for text summarization", "authors": [ { "first": "Ramesh", "middle": [], "last": "Nallapati", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Xiang", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ramesh Nallapati, Bing Xiang, and Bowen Zhou. 2016a. 
Sequence-to-sequence rnns for text summa- rization.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Abstractive text summarization using sequence-to-sequence rnns and beyond", "authors": [ { "first": "Ramesh", "middle": [], "last": "Nallapati", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Caglar", "middle": [], "last": "Cicero Dos Santos", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Gulcehre", "suffix": "" }, { "first": "", "middle": [], "last": "Xiang", "suffix": "" } ], "year": 2016, "venue": "Proc. of SIGNLL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Caglar Gulcehre, and Bing Xiang. 2016b. Abstrac- tive text summarization using sequence-to-sequence rnns and beyond. In Proc. of SIGNLL.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Disentangling Language and Knowledge in Task-Oriented Dialogs", "authors": [ { "first": "Dinesh", "middle": [], "last": "Raghu", "suffix": "" }, { "first": "Nikhil", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Mausam", "middle": [], "last": "", "suffix": "" } ], "year": 2019, "venue": "Proc. of NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dinesh Raghu, Nikhil Gupta, and Mausam. 2019. Disentangling Language and Knowledge in Task- Oriented Dialogs. In Proc. of NAACL.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Multi-level memory for task oriented dialogs", "authors": [ { "first": "Revanth", "middle": [], "last": "Reddy", "suffix": "" }, { "first": "Danish", "middle": [], "last": "Contractor", "suffix": "" }, { "first": "Dinesh", "middle": [], "last": "Raghu", "suffix": "" }, { "first": "Sachindra", "middle": [], "last": "Joshi", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.10647" ] }, "num": null, "urls": [], "raw_text": "Revanth Reddy, Danish Contractor, Dinesh Raghu, and Sachindra Joshi. 2018. Multi-level memory for task oriented dialogs. arXiv preprint arXiv:1810.10647.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Building end-to-end dialogue systems using generative hierarchical neural network models", "authors": [ { "first": "Alessandro", "middle": [], "last": "Iulian V Serban", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Sordoni", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Joelle", "middle": [], "last": "Courville", "suffix": "" }, { "first": "", "middle": [], "last": "Pineau", "suffix": "" } ], "year": 2016, "venue": "Proc. of AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hier- archical neural network models. In Proc. of AAAI.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "End-to-end memory networks", "authors": [ { "first": "Sainbayar", "middle": [], "last": "Sukhbaatar", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Rob", "middle": [], "last": "Fergus", "suffix": "" } ], "year": 2015, "venue": "NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. 
End-to-end memory networks. In NIPS.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Sequence to sequence learning with neural networks", "authors": [ { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Quoc V", "middle": [], "last": "Le", "suffix": "" } ], "year": 2014, "venue": "NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural net- works. In NIPS.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "A neural conversational model", "authors": [ { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Quoc", "middle": [], "last": "Le", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1506.05869" ] }, "num": null, "urls": [], "raw_text": "Oriol Vinyals and Quoc Le. 2015. A neural conversa- tional model. arXiv preprint arXiv:1506.05869.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Sequence-to-sequence learning for task-oriented dialogue with dialogue state representation", "authors": [ { "first": "Haoyang", "middle": [], "last": "Wen", "suffix": "" }, { "first": "Yijia", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Wanxiang", "middle": [], "last": "Che", "suffix": "" }, { "first": "Libo", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2018, "venue": "Proc. of COLING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haoyang Wen, Yijia Liu, Wanxiang Che, Libo Qin, and Ting Liu. 2018. Sequence-to-sequence learning for task-oriented dialogue with dialogue state represen- tation. In Proc. of COLING.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Latent intention dialogue models", "authors": [ { "first": "Yishu", "middle": [], "last": "Tsung-Hsien Wen", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Miao", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Blunsom", "suffix": "" }, { "first": "", "middle": [], "last": "Young", "suffix": "" } ], "year": 2017, "venue": "ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tsung-Hsien Wen, Yishu Miao, Phil Blunsom, and Steve Young. 2017a. Latent intention dialogue mod- els. In ICML.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "A networkbased end-to-end trainable task-oriented dialogue system", "authors": [ { "first": "David", "middle": [], "last": "Tsung-Hsien Wen", "suffix": "" }, { "first": "Nikola", "middle": [], "last": "Vandyke", "suffix": "" }, { "first": "Milica", "middle": [], "last": "Mrk\u0161i\u0107", "suffix": "" }, { "first": "Lina", "middle": [ "M" ], "last": "Gasic", "suffix": "" }, { "first": "Pei-Hao", "middle": [], "last": "Rojas Barahona", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Su", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Ultes", "suffix": "" }, { "first": "", "middle": [], "last": "Young", "suffix": "" } ], "year": 2017, "venue": "Proc. of EACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tsung-Hsien Wen, David Vandyke, Nikola Mrk\u0161i\u0107, Milica Gasic, Lina M. Rojas Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2017b. 
A network- based end-to-end trainable task-oriented dialogue system. In Proc. of EACL.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Global-to-local memory pointer networks for task-oriented dialogue", "authors": [ { "first": "Chien-Sheng", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Xiong", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1901.04713" ] }, "num": null, "urls": [], "raw_text": "Chien-Sheng Wu, Richard Socher, and Caiming Xiong. 2019. Global-to-local memory pointer net- works for task-oriented dialogue. arXiv preprint arXiv:1901.04713.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Filling knowledge base gaps for distant supervision of relation extraction", "authors": [ { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Raphael", "middle": [], "last": "Hoffmann", "suffix": "" }, { "first": "Le", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 2013, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei Xu, Raphael Hoffmann, Le Zhao, and Ralph Gr- ishman. 2013. Filling knowledge base gaps for dis- tant supervision of relation extraction. In Proc. of ACL.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Distant supervision for relation extraction via piecewise convolutional neural networks", "authors": [ { "first": "Daojian", "middle": [], "last": "Zeng", "suffix": "" }, { "first": "Kang", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2015, "venue": "Proc. of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. 2015. Distant supervision for relation extraction via piecewise convolutional neural networks. In Proc. of EMNLP.", "links": null } }, "ref_entries": { "FIGREF1": { "uris": null, "type_str": "figure", "text": "Correlation between the number of KB rows and generation consistency on navigation domain.", "num": null }, "FIGREF2": { "uris": null, "type_str": "figure", "text": "KB score distribution. The distribution is the timestep when generate entity 200 Alester Ave for response \" Valero is located at 200 Alester Ave\"", "num": null }, "TABREF3": { "type_str": "table", "num": null, "html": null, "content": "
Model | Cons. | Cor. | Flu. | Hum. (Cor., Flu., and Hum. are Human Evaluation scores)
Copy Net | 21.2 | 4.14 | 4.40 | 4.36
Mem2Seq | 38.1 | 4.29 | 4.29 | 4.27
DSR | 70.3 | 4.59 | 4.71 | 4.65
w/ distant supervision | 65.8 | 4.53 | 4.71 | 4.64
w/ Gumbel-Softmax | 72.1 | 4.64 | 4.73 | 4.69
", "text": "has gained more popular and they are applied for the open-domain dialogs(Vinyals and Le," }, "TABREF4": { "type_str": "table", "num": null, "html": null, "content": "
Score scale: High (1.00) to Low (0.00)
Address | Distance | POI type | POI | Traffic info
638 Amherst St | 3 miles | grocery store | Sigona Farmers Market | car collision nearby
269 Alger Dr | 1 miles | coffee or tea place | Cafe Venetia | car collision nearby
5672 barringer street | 5 miles | certain address | 5672 barringer street | no traffic
200 Alester Ave | 2 miles | gas station | Valero | road block nearby
899 Ames Ct | 5 miles | hospital | Stanford Childrens Health | moderate traffic
481 Amaranta Ave | 1 miles | parking garage | Palo Alto Garage R | moderate traffic
145 Amherst St | 1 miles | coffee or tea place | Teavana | road block nearby
409 Bollard St | 5 miles | grocery store | Willows Market | no traffic
200 Alester Ave | 2 miles | gas station | Valero | road block nearby
0.817 | 0.017 | 0.052 | 0.071 | 0.043
", "text": "The generation consistency and Human Evaluation on navigation domain. Cons. represents Consistency. Cor. represents Correctness. Flu. represents Fluency and Hum. represents Humanlikeness." } } } }