{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:50:45.175683Z" }, "title": "Slot-consistent NLG for Task-oriented Dialogue Systems with Iterative Rectification Network", "authors": [ { "first": "Yangming", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "Ant Financial Services Group, Alibaba Group", "institution": "", "location": {} }, "email": "" }, { "first": "Kaisheng", "middle": [], "last": "Yao", "suffix": "", "affiliation": { "laboratory": "Ant Financial Services Group, Alibaba Group", "institution": "", "location": {} }, "email": "kaisheng.yao@antfin.com" }, { "first": "Libo", "middle": [], "last": "Qin", "suffix": "", "affiliation": { "laboratory": "", "institution": "Harbin Institute of Technology", "location": {} }, "email": "lbqin@ir.hit.edu.cn" }, { "first": "Wangxiang", "middle": [], "last": "Che", "suffix": "", "affiliation": { "laboratory": "", "institution": "Harbin Institute of Technology", "location": {} }, "email": "" }, { "first": "Xiaolong", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "Ant Financial Services Group, Alibaba Group", "institution": "", "location": {} }, "email": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Harbin Institute of Technology", "location": {} }, "email": "tliu@ir.hit.edu.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Data-driven approaches using neural networks have achieved promising performances in natural language generation (NLG). However, neural generators are prone to make mistakes, e.g., neglecting an input slot value and generating a redundant slot value. Prior works refer this to hallucination phenomenon. In this paper, we study slot consistency for building reliable NLG systems with all slot values of input dialogue act (DA) properly generated in output sentences. We propose Iterative Rectification Network (IRN) for improving general NLG systems to produce both correct and fluent responses. It applies a bootstrapping algorithm to sample training candidates and uses reinforcement learning to incorporate discrete reward related to slot inconsistency into training. Comprehensive studies have been conducted on multiple benchmark datasets, showing that the proposed methods have significantly reduced the slot error rate (ERR) for all strong baselines. Human evaluations also have confirmed its effectiveness.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Data-driven approaches using neural networks have achieved promising performances in natural language generation (NLG). However, neural generators are prone to make mistakes, e.g., neglecting an input slot value and generating a redundant slot value. Prior works refer this to hallucination phenomenon. In this paper, we study slot consistency for building reliable NLG systems with all slot values of input dialogue act (DA) properly generated in output sentences. We propose Iterative Rectification Network (IRN) for improving general NLG systems to produce both correct and fluent responses. It applies a bootstrapping algorithm to sample training candidates and uses reinforcement learning to incorporate discrete reward related to slot inconsistency into training. Comprehensive studies have been conducted on multiple benchmark datasets, showing that the proposed methods have significantly reduced the slot error rate (ERR) for all strong baselines. 
Human evaluations have also confirmed its effectiveness.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Natural Language Generation (NLG), as a critical component of task-oriented dialogue systems, converts a meaning representation, i.e., a dialogue act (DA), into natural language sentences. Traditional methods (Stent et al., 2004; Konstas and Lapata, 2013; Wong and Mooney, 2007) are mostly pipeline-based, dividing the generation process into sentence planning and surface realization. Despite their robustness, they heavily rely on handcrafted rules and domain-specific knowledge. In addition, the generated sentences of rule-based approaches are rather rigid, without the variance of human language. More recently, neural network based models (Wen et al., 2015a,b; Du\u0161ek and Jur\u010d\u00ed\u010dek, 2016; Tran and Nguyen, 2017a) have attracted much attention. They implicitly learn sentence planning and surface realisation end-to-end with cross entropy objectives. For example, Du\u0161ek and Jur\u010d\u00ed\u010dek (2016) employ an attentive encoder-decoder model, which applies an attention mechanism over input slot value pairs. Although neural generators can be trained end-to-end, they suffer from the hallucination phenomenon (Balakrishnan et al., 2019). Examples in Table 1 show a misplacement error of an unseen slot AREA and a missing error of the slot NAME by an end-to-end trained model, when compared against its input DA. Motivated by this observation, in this paper we define slot consistency of NLG systems as the requirement that all slot values of the input DA appear in the output sentence without misplacement. We also observe that, for task-oriented dialogue systems, input DAs mostly have simple logical forms, enabling retrieval-based methods, e.g., K-Nearest Neighbour (KNN), to handle the majority of test cases. Furthermore, there exists a discrepancy between the training criterion of cross entropy loss and the evaluation metric of slot error rate (ERR), similar to the discrepancy observed in neural machine translation (Ranzato et al., 2015). Therefore, it is beneficial to use training methods that integrate the evaluation metrics into their objectives.", "cite_spans": [ { "start": 207, "end": 227, "text": "(Stent et al., 2004;", "ref_id": "BIBREF11" }, { "start": 228, "end": 253, "text": "Konstas and Lapata, 2013;", "ref_id": "BIBREF5" }, { "start": 254, "end": 276, "text": "Wong and Mooney, 2007)", "ref_id": "BIBREF20" }, { "start": 642, "end": 663, "text": "(Wen et al., 2015a,b;", "ref_id": null }, { "start": 664, "end": 683, "text": "Du\u0161ek and Jur\u010d\u00ed\u010dek,", "ref_id": "BIBREF3" }, { "start": 858, "end": 883, "text": "Du\u0161ek and Jur\u010d\u00ed\u010dek (2016)", "ref_id": "BIBREF3" }, { "start": 1086, "end": 1113, "text": "(Balakrishnan et al., 2019)", "ref_id": "BIBREF1" }, { "start": 1613, "end": 1638, "text": "K-Nearest Neighbour (KNN)", "ref_id": null }, { "start": 1875, "end": 1897, "text": "(Ranzato et al., 2015)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 1128, "end": 1135, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose the Iterative Rectification Network (IRN) to improve slot consistency for general NLG systems. IRN consists of a pointer rewriter and an experience replay buffer. The pointer rewriter iteratively rectifies slot-inconsistent generations from KNN-based or data-driven NLG systems. 
An experience replay buffer of fixed size collects candidates, consisting of mistaken cases, for training IRN. Leveraging the above observations, we further introduce a retrieval-based bootstrapping algorithm to sample pseudo mistaken cases as candidates for enriching the training data. To foster consistency between the training objective and the evaluation metrics, we use REINFORCE (Williams, 1992) to incorporate slot consistency and other discrete rewards into the training objective.", "cite_spans": [ { "start": 661, "end": 677, "text": "(Williams, 1992)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Extensive experiments show that the proposed model, KNN + IRN, significantly outperforms all previous strong approaches. When applying IRN to improve the slot consistency of prior NLG baselines, we observe large reductions in their slot error rates. Finally, the effectiveness of the proposed methods is further confirmed using BLEU scores, case analysis, and human evaluations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Inputs to NLG are structured meaning representations, i.e., DAs. A DA consists of an act type and a list of slot value pairs. Each slot value pair represents the type of information and its content, while the act type controls the style of the sentence. To improve generalization over DAs, the delexicalization technique (Wen et al., 2015a,b; Du\u0161ek and Jur\u010d\u00ed\u010dek, 2016; Tran and Nguyen, 2017a) is widely used to replace all values in the reference sentence with their corresponding slots from the DA, creating pairs of delexicalized input DAs and output templates. For example, a reference sentence 'Blue Spice serves Chinese food' for the DA inform(NAME = blue spice, FOOD = chinese) would be delexicalized into the template '$NAME$ serves $FOOD$ food'.", "cite_spans": [ { "start": 319, "end": 340, "text": "(Wen et al., 2015a,b;", "ref_id": null }, { "start": 341, "end": 366, "text": "Du\u0161ek and Jur\u010d\u00ed\u010dek, 2016;", "ref_id": "BIBREF3" }, { "start": 367, "end": 390, "text": "Tran and Nguyen, 2017a)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Delexicalization", "sec_num": "2.1" }, { "text": "Hence the most important step in NLG is to generate templates correctly given an input DA. However, this step can introduce missing and misplaced slots, because of modeling errors or unaligned training data (Balakrishnan et al., 2019; Nie et al., 2019; Juraska et al., 2018). Lexicalization then follows: after a template is generated, the slots in the template are replaced with the corresponding values from the DA. 
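In code, lexicalization is a simple substitution (a minimal sketch with a hypothetical helper; slots are assumed to be marked like $NAME$):

def lexicalize(template, values):
    # Replace each slot placeholder with its value from the DA, e.g.
    # '$NAME$ serves $FOOD$ food.' -> 'Blue Spice serves Chinese food.'
    for slot, value in values.items():
        template = template.replace(slot, value)
    return template
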
", "cite_spans": [ { "start": 207, "end": 234, "text": "(Balakrishnan et al., 2019;", "ref_id": "BIBREF1" }, { "start": 235, "end": 252, "text": "Nie et al., 2019;", "ref_id": "BIBREF7" }, { "start": 253, "end": 274, "text": "Juraska et al., 2018)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Delexicalization", "sec_num": "2.1" }, { "text": "Formally, we denote a delexicalized input DA as a set x = {x_1, x_2, ..., x_N}. An output template y = [y_1, y_2, ..., y_M] from an NLG system f(x) is a sequence of tokens (words and slots).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Statement", "sec_num": "2.2" }, { "text": "We define a slot extraction function g as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Statement", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "g(z) = {t | t \u2208 z; t \u2208 S},", "eq_num": "(1)" } ], "section": "Problem Statement", "sec_num": "2.2" }, { "text": "where z is either a DA x or a template y, and S denotes the set of all slots.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Statement", "sec_num": "2.2" }, { "text": "A slot-consistent NLG system f(x) satisfies the following constraint:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Statement", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "g(f(x)) = g(x).", "eq_num": "(2)" } ], "section": "Problem Statement", "sec_num": "2.2" }, { "text": "To avoid trivial solutions, we require that f(x) \u2260 x. 
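As a minimal sketch (assuming delexicalized slots are marked like $NAME$, so membership in S can be tested by the markers):

def g(z):
    # Eq. (1): extract the set of slot tokens from a DA or a template.
    return {t for t in z if t.startswith('$') and t.endswith('$')}

def is_slot_consistent(f, x):
    # Eq. (2): the output must carry exactly the slots of the input DA.
    # The trivial solution of echoing the DA itself is excluded separately.
    return g(f(x)) == g(x)
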
However, due to the hallucination phenomenon, slot values may be missed or misplaced in generated templates (Wen et al., 2015a), which is hard to avoid in neural-based approaches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Statement", "sec_num": "2.2" }, { "text": "A KNN-based NLG system f_KNN is composed of a distance function \u03c1 and a template set Y = {y_1, y_2, ..., y_Q}, which is collected from the Q delexicalized sentences in the training corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KNN-based NLG System", "sec_num": "2.3" }, { "text": "Given an input DA x, the distance is defined as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KNN-based NLG System", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03c1(x, y_i) = #({s | s = t; t \u2208 y_i; s \u2208 x}),", "eq_num": "(3)" } ], "section": "KNN-based NLG System", "sec_num": "2.3" }, { "text": "where the function # computes the size of a set. During evaluation, the system f_KNN first ranks the templates in the set Y by the distance function \u03c1 and then selects the top k (beam size) templates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KNN-based NLG System", "sec_num": "2.3" }, { "text": "3 Architecture Figure 1 shows the architecture of the Iterative Rectification Network. It consists of two components: a pointer rewriter to produce templates with improved performance metrics and an experience replay buffer to gather and sample training data. The improvements in slot consistency are obtained via an iterative rewriting process. Assume that, at iteration k, we have a template y^(k) that is not slot consistent with the input DA, i.e., g(y^(k)) \u2260 g(x). Then, a pointer rewriter iteratively rewrites it as", "cite_spans": [], "ref_spans": [ { "start": 14, "end": 22, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "KNN-based NLG System", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "y^(k+1) = \u03c6_PR(x, y^(k)).", "eq_num": "(4)" } ], "section": "KNN-based NLG System", "sec_num": "2.3" }, { "text": "The above recursion ends once g(y^(k)) = g(x) or a certain number of iterations is reached. Figure 1: IRN consists of two modules: an experience replay buffer and a pointer rewriter. The experience replay buffer collects mistaken cases from the NLG baseline, the template database, and IRN itself (the red dashed arrow), whereas the pointer rewriter outputs templates with improved performance metrics. In each epoch of rectification, IRN obtains samples of cases for training from the buffer and trains the pointer rewriter with metrics such as slot consistency using a policy-based reinforcement learning technique. We omit some trivial connections for brevity.", "cite_spans": [], "ref_spans": [ { "start": 89, "end": 97, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "KNN-based NLG System", "sec_num": "2.3" }, { "text": "The pointer rewriter \u03c6_PR is trained to iteratively correct the candidate y^(k) given a DA x. This correction operation is conducted time-recurrently. 
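In code, the overall propose-then-rectify pipeline can be sketched as follows (an illustration with assumed names; g is the slot extractor of Eq. (1), and the iteration cap is an assumed hyperparameter):

def knn_propose(x, Y, k=1):
    # Eq. (3): score stored templates by their slot overlap with the DA x
    # and keep the top-k (beam size) candidates.
    return sorted(Y, key=lambda y: len(g(y) & g(x)), reverse=True)[:k]

def iterative_rectify(x, y, phi_pr, max_iters=5):
    # Eq. (4): rewrite y until it is slot-consistent with x or the budget ends.
    it = 0
    while g(y) != g(x) and it < max_iters:
        y = phi_pr(x, y)
        it += 1
    return y
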
At each position j of rewriting a template, there is a state h_j to represent the past history of the pointer rewriter and an action a_j to take according to a policy \u03c0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pointer Rewriter", "sec_num": "3.1" }, { "text": "We use an autoregressive model, in particular an LSTM, to compute the state h_j, given its past state h_{j-1}, the input x, and its past output y^(k)_{j-1}:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "State", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h_j = \u03c6_LSTM(h_{j-1}, [x; y^(k)_{j-1}; c_j]),", "eq_num": "(5)" } ], "section": "State", "sec_num": null }, { "text": "where the DA x is represented by a one-hot representation (Wen et al., 2015a,b). c_j is a context representation over the input template y^(k), to be described in Eq. (6). The operation [;] means vector concatenation.", "cite_spans": [ { "start": 52, "end": 73, "text": "(Wen et al., 2015a,b)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "State", "sec_num": null }, { "text": "Action For position j in the output template, the action a_j is in a space consisting of two categories: template copy, c(i), to copy a token from the template y^(k) at position i, and word and slot generation, w, to generate a word or a slot at the position. For a length-M input template y^(k), the action a_j is therefore in the set {w, c(1), ..., c(M)}. The action sequence a for a length-N output template is [a_1, ..., a_N].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "State", "sec_num": null }, { "text": "Template Copy The model \u03c6_PR for template copy uses an attentive pointer to decide, for position j, which token to copy from the candidate y^(k). Each token y^(k)_i in the candidate y^(k) is represented using an embedding y^(k)_i. For position j in the output template, the model utilizes the above hidden state h_j and computes attention weights over all of the tokens in y^(k), with the weight to token embedding y^(k)_i given as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "State", "sec_num": null }, { "text": "\u03c6_PR(h_j, y^(k)_i) = v_a^T \u03c3(W_h * h_j + W_y * y^(k)_i); p^PR_{ij} = Softmax(\u03c6_PR(h_j, y^(k)_i)); c_j = \u2211_{1\u2264i\u2264M} p^PR_{ij} y^(k)_i, (6) where v_a, W_h, W_y are learnable parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "State", "sec_num": null },
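{ "text": "As a concrete sketch of Eq. (6) (assumed shapes, with the activation \u03c3 taken to be tanh; an illustration, not the authors' implementation):

import numpy as np

def attentive_pointer(h_j, Y_emb, v_a, W_h, W_y):
    # Score each candidate-token embedding y_i against the state h_j.
    scores = np.array([v_a @ np.tanh(W_h @ h_j + W_y @ y_i) for y_i in Y_emb])
    # Normalize to attention weights p_ij (the Softmax in Eq. (6)).
    weights = np.exp(scores - scores.max())
    weights = weights / weights.sum()
    # The context c_j is the attention-weighted sum of the embeddings.
    c_j = weights @ Y_emb
    return weights, c_j", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "State", "sec_num": null },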
{ "text": "Word and Slot Generation Another candidate for position j is a word or a slot key from a predefined vocabulary. The action w computes a distribution over words and slot keys as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "State", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p^Vocab_j = Softmax(W_v * h_j),", "eq_num": "(7)" } ], "section": "State", "sec_num": null }, { "text": "where this distribution is dependent on the state h_j, and the matrix W_v is learnable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "State", "sec_num": null }, { "text": "Algorithm 1: Iterative Data Aggregation
Input: template DB, T; baseline NLG system, b; pointer rewriter, \u03c6_PR; total epoch number, K; candidate set size, U.
Output: trained pointer rewriter, \u03c6_PR.
1: B, C \u2190 {}, {}
2: epoch \u2190 0
3: for x, z \u2208 T do
4:     y \u2190 b(x)
5:     if g(z) \u2260 g(y) then
6:         C \u2190 C + (x, y, z)
7:     end
8: end
9: while epoch < K do
10:     \u2126 \u2190 Bootstrapping(T, U \u2212 |C|)
11:     B \u2190 C + \u2126
12:     Training(\u03c6_PR, B)
13:     C \u2190 {}
14:     for x, y, z \u2208 B do
15:         \u0177 \u2190 \u03c6_PR(x, y)
16:         if g(z) \u2260 g(\u0177) then
17:             C \u2190 C + (x, \u0177, z)
18:         end
19:     end
20:     epoch \u2190 epoch + 1
21: end", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "State", "sec_num": null }, { "text": "Policy The probabilities for the above actions can be computed as follows", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "State", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03c0(c(i)|h_j) = \u03bb_j * p^PR_j(i); \u03c0(w|h_j) = (1 \u2212 \u03bb_j) * p^Vocab_j,", "eq_num": "(8)" } ], "section": "State", "sec_num": null }, { "text": "where \u03c0(c(i)|h_j) is the probability of copying the i-th token from the input template y^(k) to position j, and \u03c0(w|h_j) is the probability of using a word or slot key predicted from the distribution p^Vocab_j in Eq. (7). The weight \u03bb_j is a real value between 0 and 1, computed with a Sigmoid operation as \u03bb_j = Sigmoid(v_h * h_j). With this policy, the pointer rewriter performs greedy search to decide whether to copy or to generate a token.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "State", "sec_num": null },
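{ "text": "A minimal sketch of this copy-versus-generate policy (assumed names and shapes; an illustration only):

import numpy as np

def softmax(s):
    e = np.exp(s - np.max(s))
    return e / e.sum()

def policy(h_j, copy_scores, vocab_logits, v_h):
    # Eq. (8): a Sigmoid gate lambda_j splits probability mass between
    # copying a template token (scores from Eq. (6)) and generating a
    # word or slot key (logits from Eq. (7)).
    lam = 1.0 / (1.0 + np.exp(-(v_h @ h_j)))
    p_copy = lam * softmax(copy_scores)            # pi(c(i) | h_j)
    p_vocab = (1.0 - lam) * softmax(vocab_logits)  # pi(w | h_j)
    return p_copy, p_vocab

Greedy decoding then takes the argmax over the concatenation of the two distributions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "State", "sec_num": null },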
{ "text": "The experience replay buffer aims at providing training samples for IRN. It has three sources of samples. The first is from off-the-shelf NLG systems. The second is from the pointer rewriter in the last iteration. Both of them are real mistaken samples. They are stored in a case set C in the buffer. These samples are off-policy, as the case set C can contain samples from many iterations before. The third source is sampled from a bootstrapping algorithm; these samples are stored in a set \u2126.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experience Replay Buffer", "sec_num": "3.2" }, { "text": "Algorithm 2: Bootstrapping via Retrieval
Input: template DB, T; total sample number, V; maximum tolerance, \u03b5 (default 2).
Output: pseudo sample set, \u2126.
1: \u2126 \u2190 {}
2: while |\u2126| < V do
3:     x, z \u2190 RandomSelect(T)
4:     Z \u2190 {}
5:     for x\u0302, \u1e91 \u2208 T do
6:         p \u2190 g(z)
7:         q \u2190 g(\u1e91)
8:         if p \u2260 q \u2227 |p \u2212 q| < \u03b5 then
9:             Z \u2190 Z + (x\u0302, \u1e91, z)
10:        end
11:    end
12:    \u2126 \u2190 \u2126 + RandomSelect(Z)
13: end", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experience Replay Buffer", "sec_num": "3.2" }, { "text": "Iterative Data Aggregation The replay experiences should be progressive, reflecting improvements in the iterative training of IRN. Therefore, we design an iterative data aggregation algorithm, shown in Algorithm 1. In the algorithm, the experience replay buffer B is defined as a fixed-size set B = C + \u2126. For a total epoch number K, it randomly provides mistaken samples for training the pointer rewriter \u03c6_PR at each epoch. Importantly, the contents of both C and \u2126 vary from epoch to epoch. C initially consists of real mistaken samples from the baseline system (lines 3-8). Later on, it is gradually filled by the samples from IRN (lines 14-19). The samples in \u2126 reflect a general distribution of training samples from a template database T (line 10). Finally, the algorithm aggregates these two groups of mistaken samples (line 11) and uses them to train the model \u03c6_PR (line 12).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experience Replay Buffer", "sec_num": "3.2" }, { "text": "Bootstrapping via Retrieval Relying solely on the real mistaken samples exposes the system to a data scarcity problem: real samples are heavily biased towards certain slots, and the number of real mistaken samples can be small. To address this problem, we introduce a bootstrapping algorithm, described in Algorithm 2. It uses a template database T, built from the delexicalized NLG training corpus and organized as pairs of DA and reference template (x, z).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experience Replay Buffer", "sec_num": "3.2" }, { "text": "[Figure 2 appears here: correcting a candidate template ('$NAME$ is $PHONE$') given a reference template, showing the inferred existence labels d^c, source positions d^l, and action labels (copy c(i) or generate w) used for distant supervision.]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experience Replay Buffer", "sec_num": "3.2" }, { "text": "At each turn of the algorithm, it first randomly samples a pair (x, z) from the template database T (line 3). Then, for every pair (x\u0302, \u1e91) in T, it measures whether (x\u0302, \u1e91) is slot-inconsistent with respect to (x, z), and adds any pair that is within a certain distance \u03b5 (a hyperparameter) to a set Z (lines 5-11). \u03b5 is usually set to a small number so that the selected samples are close enough to (x, z); in practice, we set it to 2. Finally, it randomly samples from Z (line 12) and inserts the result into the output set \u2126. The bootstrapping process stops when the number of generated samples reaches the limit V.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experience Replay Buffer", "sec_num": "3.2" }, { "text": "These samples, which we refer to as pseudo samples in the following, represent a wider coverage of training samples than the real mistaken samples. Because they are sampled from the general distribution of templates, some of their semantics are never seen in the real mistaken cases. 
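A condensed sketch of Algorithm 2 (illustrative; the slot-set difference |p \u2212 q| is read here as a symmetric difference):

import random

def bootstrap(T, V, eps=2):
    # Sample pseudo mistaken cases: retrieve templates whose slot sets
    # differ from a randomly drawn reference's slots by fewer than eps.
    Omega = []
    while len(Omega) < V:
        x, z = random.choice(T)
        Z = [(x2, z2, z) for x2, z2 in T
             if g(z2) != g(z) and len(g(z) ^ g(z2)) < eps]
        if Z:
            Omega.append(random.choice(Z))
    return Omega
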
We will demonstrate through experiments that it effectively addresses the data scarcity problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experience Replay Buffer", "sec_num": "3.2" }, { "text": "One key idea behind the proposed IRN model is to conduct distant supervision on the actions of template copy and generation. We diagram its motivation in Figure 2. During training, only a candidate y and its reference z are given. The exact actions that convert the template y to z have to be inferred from the two templates. Here we use simple rules for the inference. Firstly, the rules check whether each reference token z_j exists in the candidate y. The output is a label d^c consisting of 1s and 0s, representing whether each token of the reference template is present or absent in the candidate. Secondly, the rules locate the original position d^l_j in the candidate for each token j in the reference template if d^c_j = 1, and use \u22121 if d^c_j = 0. Finally, the action label d^\u03c0 for the policy is inferred, with w for d^l_j = \u22121 and c(i) for d^l_j = i. We may use the extracted tags to do supervised learning. The loss to be minimized is as follows", "cite_spans": [], "ref_spans": [ { "start": 154, "end": 162, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Training with Supervised Learning and Distant Supervision", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "J_SL = \u2212 \u2211_{j=1}^{L} log \u03c0(d^\u03c0_j | h_j),", "eq_num": "(9)" } ], "section": "Training with Supervised Learning and Distant Supervision", "sec_num": "4" }, { "text": "where L is the length of the ground truth. \u03c0(d^\u03c0_j | h_j) computes the likelihood of action d^\u03c0_j at position j given the state h_j.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training with Supervised Learning and Distant Supervision", "sec_num": "4" }, { "text": "However, the following issues arise when attempting to utilize the labels produced by distant supervision for training. Firstly, the importance of every token in the candidate differs. For example, a noun phrase (colored blue in Figure 2) is critical and should be copied, whereas function words (colored red) are of little relevance and can be generated by IRN itself. However, distant supervision treats them the same. Secondly, rule-based matching may cause semantic ambiguity (the dashed black line in Figure 2). Lastly, the training criterion of cross entropy is not directly related to the evaluation metric of slot error rate. To address these issues, we use reinforcement learning to obtain the optimal actions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training with Supervised Learning and Distant Supervision", "sec_num": "4" },
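{ "text": "For concreteness, the rule-based label inference described above can be sketched as follows (a simplified illustration; repeated tokens are resolved to the first match):

def infer_labels(candidate, reference):
    # d_c: 1 if the reference token exists in the candidate, else 0.
    # d_l: its position in the candidate, or -1 when absent.
    # d_pi: copy action c(i) when d_l = i; generate action w otherwise.
    d_c, d_l, d_pi = [], [], []
    for tok in reference:
        if tok in candidate:
            i = candidate.index(tok)
            d_c.append(1); d_l.append(i); d_pi.append(('c', i))
        else:
            d_c.append(0); d_l.append(-1); d_pi.append(('w', tok))
    return d_c, d_l, d_pi", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training with Supervised Learning and Distant Supervision", "sec_num": "4" },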
{ "text": "In this section, we describe another method to train IRN. We apply policy gradient (Williams, 1992) to optimize models with discrete rewards.", "cite_spans": [ { "start": 83, "end": 99, "text": "(Williams, 1992)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Training with Policy-based Reinforcement Learning", "sec_num": "5" }, { "text": "Slot Consistency This reward is related to the correctness of output templates. Given the set of slot-value pairs g(y) from the output template generated by IRN and the set of slot-value pairs g(x) extracted from the input DA, the reward is zero when they are equal; otherwise, it is negative, with its value set to the cardinality of the difference between the two sets:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rewards", "sec_num": "5.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "r_SC = \u2212|g(y) \u2212 g(x)|.", "eq_num": "(10)" } ], "section": "Rewards", "sec_num": "5.1" }, { "text": "Language Fluency This reward is related to the naturalness of the realized surface form from a response generation method. Following (Wen et al., 2015a,b), we first train a backward language model on the reference texts from the training data. Then, the perplexity (PPL) of the surface form after lexicalization of the output template y is measured using this language model. The PPL is used as the reward for language fluency:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rewards", "sec_num": "5.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "r_LM = \u2212PPL(y).", "eq_num": "(11)" } ], "section": "Rewards", "sec_num": "5.1" }, { "text": "Distant Supervision We also measure the reward from using the distant supervision of Section 4. For a length-L reference template, the reward is given as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rewards", "sec_num": "5.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "r_DS = \u2212 \u2211_{j=1}^{L} log \u03c0(d^\u03c0_j | h_j),", "eq_num": "(12)" } ], "section": "Rewards", "sec_num": "5.1" }, { "text": "where d^\u03c0_j is the inferred action label. The final reward for an action sequence a is a weighted sum of the rewards discussed above:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rewards", "sec_num": "5.1" }, { "text": "r(a) = \u03b3_SC r_SC + \u03b3_LM r_LM + \u03b3_DS r_DS, (13)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rewards", "sec_num": "5.1" }, { "text": "where \u03b3_SC + \u03b3_LM + \u03b3_DS = 1. We set them to equal values in this work. The reward is observed after the last token of the utterance is generated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rewards", "sec_num": "5.1" }, { "text": "We utilize the supervised learning of Eq. (9) to initialize our model with the labels extracted from distant supervision. After its convergence, we continue tuning the model using the policy gradient method described in this section. The policy model in \u03c6_PR itself generates a sequence of actions a, which is not necessarily the same as d^\u03c0, and this produces an output template y used to compute the slot consistency reward in Eq. (10) and the language fluency reward in Eq. (11). With these, the final reward is computed with Eq. (13). The gradient to backpropagate is estimated using REINFORCE as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Policy Gradient", "sec_num": "5.2" }, { "text": "\u2207J_RL(\u03b8) = (r(a) \u2212 b) * \u2211_{j=1}^{N} \u2207 log \u03c0(a_j | h_j), (14)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Policy Gradient", "sec_num": "5.2" }, { "text": "where \u03b8 denotes the model parameters. 
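In code, the update can be sketched as (an illustration; with autograd frameworks the per-step log-probabilities carry the gradient, so minimizing this scalar follows the estimated gradient of Eq. (14)):

def reinforce_loss(log_probs, r, b):
    # Scale the summed log-likelihood of the sampled action sequence
    # by the advantage r(a) - b.
    return -(r - b) * sum(log_probs)
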
r(a) \u2212 b is the advantage function per REINFORCE, and b is a baseline. Through experiments, we find that b = BLEU(y, z) performs better (Weaver and Tao, 2001) than tricks such as simple averaging of the likelihood, (1/N) \u2211_{j=1}^{N} log \u03c0(a_j | h_j).", "cite_spans": [ { "start": 166, "end": 187, "text": "(Weaver and Tao, 2001", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Policy Gradient", "sec_num": "5.2" }, { "text": "We assess the model performances on four NLG datasets of different domains. The SF Hotel and SF Restaurant benchmarks were collected in (Wen et al., 2015a), while the Laptop and TV benchmarks were released in (Wen et al., 2016). Each dataset is evaluated with five strong baseline methods, including HLSTM (Wen et al., 2015a), SC-LSTM (Wen et al., 2015b), TGen (Du\u0161ek and Jur\u010d\u00ed\u010dek, 2016), ARoA (Tran and Nguyen, 2017b) and RALSTM (Tran and Nguyen, 2017a). Following these prior works, the evaluation metrics consist of BLEU and the slot error rate (ERR), which is computed as", "cite_spans": [ { "start": 202, "end": 220, "text": "(Wen et al., 2016)", "ref_id": "BIBREF17" }, { "start": 355, "end": 381, "text": "(Du\u0161ek and Jur\u010d\u00ed\u010dek, 2016)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment Setup", "sec_num": "6.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "ERR = (p + q) / N,", "eq_num": "(15)" } ], "section": "Experiment Setup", "sec_num": "6.1" }, { "text": "where N is the total number of slots in the DA, and p and q are the numbers of missing and redundant slots in the generated template, respectively. We follow all baseline performances reported in (Tran and Nguyen, 2017b) and use the open-source toolkits RNNLG (https://github.com/shawnwun/RNNLG) and TGen (https://github.com/UFAL-DSG/tgen) to build the NLG systems HLSTM, SC-LSTM and TGen. We reimplement the baselines ARoA and RALSTM since their source codes are not available.", "cite_spans": [ { "start": 191, "end": 215, "text": "(Tran and Nguyen, 2017b)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment Setup", "sec_num": "6.1" }, { "text": "We first compare our model, i.e., KNN + IRN, with all the strong baselines mentioned above. Table 2 shows that the proposed model significantly outperforms previous baselines on both BLEU score and ERR. Compared with the current state-of-the-art model, RALSTM, it reduces ERR by factors of 1.45, 1.38, 1.45 and 1.80 on the SF Restaurant, SF Hotel, Laptop and Television datasets, respectively. Furthermore, it improves BLEU scores by 3.59%, 1.45%, 2.29% and 3.33% on these datasets, respectively. These improvements in BLEU score can be attributed to the language fluency reward r_LM.", "cite_spans": [], "ref_spans": [ { "start": 92, "end": 99, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Main Results", "sec_num": "6.2" }, { "text": "To verify whether IRN helps improve the slot consistency of general NLG models, we further equip strong baselines, including HLSTM, TGen and RALSTM, with IRN, and evaluate their performances on the SF Restaurant and Television datasets. As shown in Table 3, the methods consistently reduce ERRs and also improve BLEU scores for all baselines on both datasets.", "cite_spans": [], "ref_spans": [ { "start": 240, "end": 247, "text": "Table 3", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Main Results", "sec_num": "6.2" },
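{ "text": "For reference, the ERR metric of Eq. (15) can be computed as in this small sketch (slots represented as string sets; an illustration only):

def slot_error_rate(da_slots, gen_slots):
    p = len(da_slots - gen_slots)   # missing slots
    q = len(gen_slots - da_slots)   # redundant slots
    return (p + q) / len(da_slots)

For example, a DA with four slots whose generation misses one slot and inserts one spurious slot has ERR = (1 + 1) / 4 = 0.5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Main Results", "sec_num": "6.2" },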
{ "text": "In conclusion, our model, KNN + IRN, not only achieves state-of-the-art performance but also contributes to improving the slot consistency of general NLG systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Main Results", "sec_num": "6.2" }, { "text": "We perform a set of ablation experiments on the SC-LSTM + IRN model on the Laptop dataset to understand the relative contributions of the data aggregation algorithms in Sec. 3.2 and the rewards in Sec. 5.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ablation Study", "sec_num": "6.3" }, { "text": "The results in Table 4 show that removing the slot consistency reward r_SC or the distant supervision reward r_DS from the advantage function dramatically degrades ERR performance. Language-fluency-related information, from the baseline b = BLEU(y, z) and the reward r_LM, also has a positive impact on BLEU and ERR, though its effect is smaller than that of r_SC or r_DS.", "cite_spans": [], "ref_spans": [ { "start": 15, "end": 22, "text": "Table 4", "ref_id": "TABREF10" } ], "eq_spans": [], "section": "Effect of Reward Designs", "sec_num": "6.3.1" }, { "text": "Using only candidates from the baselines degrades performance to approximately that of the baseline SC-LSTM. This shows that incorporating candidates from IRN is important. The model without bootstrapping, even when including candidates from IRN, performs worse than SC-LSTM in Table 3. This shows that bootstrapping to include generic samples from the template database is critical.", "cite_spans": [], "ref_spans": [ { "start": 274, "end": 281, "text": "Table 3", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Effect of Data Algorithms", "sec_num": "6.3.2" }, { "text": "We evaluate IRN and some strong baselines on the TV dataset. Given an input DA, we ask human evaluators to score generated surface realizations from our model and other baselines in terms of informativeness and naturalness. Here, informativeness measures whether the output utterance contains all the information specified in the DA, without inserting extra slots or missing an input slot. Naturalness is defined as whether the utterance mimics a response from a human (both ratings are out of 5). Table 5 shows that RALSTM + IRN outperforms RALSTM notably in informativeness, by 4.97% relative, from 4.63 to 4.86. In terms of naturalness, the improvement is from 4.01 to 4.07, 1.50% relative. 
Meanwhile, IRN helps to improve the performance of TGen by 5.12% on informativeness and 3.23% on naturalness.", "cite_spans": [], "ref_spans": [ { "start": 877, "end": 884, "text": "Table 5", "ref_id": "TABREF12" } ], "eq_spans": [], "section": "Human Evaluation", "sec_num": "6.4" }, { "text": "Table 6 (an example from the TV dataset):
Input DA: recommend(NAME = crios 93, FAMILY = l1, AUDIO = nicam stereo, SIZE = large)
Reference Text: the large crios 93 television in the l1 family features nicam stereo
Mistaken Generation: the $NAME$ is in $FAMILY$ with $SIZE$ screen and cost about $PRICE$ [AUDIO, PRICE]
1st IRN Revision: the $NAME$ is a nice television in $FAMILY$ with a $SIZE$ screen [AUDIO]
2nd IRN Revision: the $NAME$ is very nice in $FAMILY$ with a $SIZE$ screen size [AUDIO]
3rd IRN Revision: the $NAME$ is very nice in the $FAMILY$ family with a $SIZE$ screen size and $AUDIO$
Lexicalized Form: the crios 93 is very nice in the l1 family with a large screen size and nicam stereo", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human Evaluation", "sec_num": "6.4" }, { "text": "These subjective assessments are consistent with the observations in Table 3; both verify the effectiveness of the proposed method. Table 6 presents a sample from the TV dataset and shows the progress made by IRN. Given an input DA, the baseline HLSTM outputs, in the third row, a template that misses the slot $AUDIO$ but inserts the slot $PRICE$. The output template from the first iteration of IRN removes the inserted $PRICE$ slot. The second iteration improves language fluency but makes no progress on slot consistency. The third iteration achieves slot consistency, after which a natural language response, though slightly different from the reference text, is generated via lexicalization.", "cite_spans": [], "ref_spans": [ { "start": 67, "end": 74, "text": "Table 3", "ref_id": "TABREF9" }, { "start": 140, "end": 147, "text": "Table 6", "ref_id": "TABREF13" } ], "eq_spans": [], "section": "Human Evaluation", "sec_num": "6.4" }, { "text": "Conventional approaches to the NLG task are mostly pipeline-based, dividing it into sentence planning and surface realisation (Dethlefs et al., 2013; Stent et al., 2004; Walker et al., 2002). Oh and Rudnicky (2000) introduce a class-based n-gram language model and a rule-based reranker. Ratnaparkhi (2002) addresses the limitations of n-gram language models by using more sophisticated syntactic dependency trees. Mairesse and Young (2014) employ a phrase-based generator that learns from a semantically aligned corpus. Despite their robustness, these models are costly to create and maintain, as they heavily rely on handcrafted rules.", "cite_spans": [ { "start": 131, "end": 154, "text": "(Dethlefs et al., 2013;", "ref_id": "BIBREF2" }, { "start": 155, "end": 174, "text": "Stent et al., 2004;", "ref_id": "BIBREF11" }, { "start": 175, "end": 195, "text": "Walker et al., 2002)", "ref_id": "BIBREF14" }, { "start": 293, "end": 311, "text": "Ratnaparkhi (2002)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "Recent works (Wen et al., 2015b; Du\u0161ek and Jur\u010d\u00ed\u010dek, 2016; Tran and Nguyen, 2017a) build data-driven models based on end-to-end learning. Wen et al. (2015a) combine two recurrent neural network (RNN) based models with a CNN reranker to generate required utterances. Wen et al. (2015b) introduce a novel SC-LSTM with an additional reading cell to jointly learn the gating mechanism and the language model. Du\u0161ek and Jur\u010d\u00ed\u010dek (2016) present an attentive neural generator that applies an attention mechanism over the input DA. Tran and Nguyen (2017b,a) employ a refiner component to select and aggregate the semantic elements produced by the encoder. 
More recently, domain adaptation (Wen et al., 2016) and unsupervised learning (Bahuleyan et al., 2018) for NLG have also received much attention.", "cite_spans": [ { "start": 13, "end": 32, "text": "(Wen et al., 2015b;", "ref_id": "BIBREF18" }, { "start": 33, "end": 58, "text": "Du\u0161ek and Jur\u010d\u00ed\u010dek, 2016;", "ref_id": "BIBREF3" }, { "start": 59, "end": 82, "text": "Tran and Nguyen, 2017a)", "ref_id": "BIBREF12" }, { "start": 394, "end": 419, "text": "Du\u0161ek and Jur\u010d\u00ed\u010dek (2016)", "ref_id": "BIBREF3" }, { "start": 659, "end": 677, "text": "(Wen et al., 2016)", "ref_id": "BIBREF17" }, { "start": 704, "end": 728, "text": "(Bahuleyan et al., 2018)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "We are also inspired by the post-edit paradigm (Xia et al., 2017), which uses a second-pass decoder to improve translation quality.", "cite_spans": [ { "start": 47, "end": 65, "text": "(Xia et al., 2017)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "A recent method in (Wu et al., 2019) defines an auxiliary loss that checks whether the object words exist in the expected system response of a task-oriented dialogue system. It would be interesting to apply this auxiliary loss in the proposed method. On the other hand, the REINFORCE (Williams, 1992) algorithm applied in this paper is more general than that of (Wu et al., 2019), as it can incorporate other metrics, such as BLEU.", "cite_spans": [ { "start": 19, "end": 36, "text": "(Wu et al., 2019)", "ref_id": "BIBREF21" }, { "start": 279, "end": 295, "text": "(Williams, 1992)", "ref_id": "BIBREF19" }, { "start": 349, "end": 366, "text": "(Wu et al., 2019)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "Nevertheless, end-to-end neural generators suffer from the hallucination problem and can hardly avoid generating slot-inconsistent utterances (Balakrishnan et al., 2019). Balakrishnan et al. (2019) attempt to alleviate this issue by employing a tree-structured meaning representation and a constrained decoding technique. However, the tree-shaped structure requires additional human annotation.", "cite_spans": [ { "start": 144, "end": 171, "text": "(Balakrishnan et al., 2019)", "ref_id": "BIBREF1" }, { "start": 174, "end": 200, "text": "Balakrishnan et al. (2019)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "We have proposed the Iterative Rectification Network (IRN) to improve the slot consistency of general NLG systems. In this method, a retrieval-based bootstrapping algorithm is introduced to sample pseudo mistaken cases from the training corpus to enrich the original training data. We also employ policy-based reinforcement learning to enable training the models with discrete rewards that are consistent with the evaluation metrics. Extensive experiments show that the proposed model significantly outperforms previous methods. These improvements cover both correctness, measured by slot error rate, and naturalness, measured by BLEU score. Human evaluation and case study also confirm the effectiveness of the proposed method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" } ], "back_matter": [ { "text": "This work was supported by the National Natural Science Foundation of China (NSFC) via grants 61976072, 61632011 and 61772153. 
This work was done while the first author did an internship at Ant Financial. We thank the anonymous reviewers for valuable suggestions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgement", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Probabilistic natural language generation with wasserstein autoencoders", "authors": [ { "first": "Hareesh", "middle": [], "last": "Bahuleyan", "suffix": "" }, { "first": "Lili", "middle": [], "last": "Mou", "suffix": "" }, { "first": "Kartik", "middle": [], "last": "Vamaraju", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Olga", "middle": [], "last": "Vechtomova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1806.08462" ] }, "num": null, "urls": [], "raw_text": "Hareesh Bahuleyan, Lili Mou, Kartik Vamaraju, Hao Zhou, and Olga Vechtomova. 2018. Probabilistic natural language generation with wasserstein autoencoders. arXiv preprint arXiv:1806.08462.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Constrained decoding for neural NLG from compositional representations in task-oriented dialogue", "authors": [ { "first": "Anusha", "middle": [], "last": "Balakrishnan", "suffix": "" }, { "first": "Jinfeng", "middle": [], "last": "Rao", "suffix": "" }, { "first": "Kartikeya", "middle": [], "last": "Upasani", "suffix": "" }, { "first": "Michael", "middle": [], "last": "White", "suffix": "" }, { "first": "Rajen", "middle": [], "last": "Subba", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anusha Balakrishnan, Jinfeng Rao, Kartikeya Upasani, Michael White, and Rajen Subba. 2019. Constrained decoding for neural NLG from compositional representations in task-oriented dialogue. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. To appear.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Conditional random fields for responsive surface realisation using global features", "authors": [ { "first": "Nina", "middle": [], "last": "Dethlefs", "suffix": "" }, { "first": "Helen", "middle": [], "last": "Hastie", "suffix": "" }, { "first": "Heriberto", "middle": [], "last": "Cuay\u00e1huitl", "suffix": "" }, { "first": "Oliver", "middle": [], "last": "Lemon", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1254--1263", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nina Dethlefs, Helen Hastie, Heriberto Cuay\u00e1huitl, and Oliver Lemon. 2013. Conditional random fields for responsive surface realisation using global features. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1254-1263, Sofia, Bulgaria. 
Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Sequence-to-sequence generation for spoken dialogue via deep syntax trees and strings", "authors": [ { "first": "Ond\u0159ej", "middle": [], "last": "Du\u0161ek", "suffix": "" }, { "first": "Filip", "middle": [], "last": "Jur\u010d\u00ed\u010dek", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1606.05491" ] }, "num": null, "urls": [], "raw_text": "Ond\u0159ej Du\u0161ek and Filip Jur\u010d\u00ed\u010dek. 2016. Sequence-to-sequence generation for spoken dialogue via deep syntax trees and strings. arXiv preprint arXiv:1606.05491.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A deep ensemble model with slot alignment for sequence-to-sequence natural language generation", "authors": [ { "first": "Juraj", "middle": [], "last": "Juraska", "suffix": "" }, { "first": "Panagiotis", "middle": [], "last": "Karagiannis", "suffix": "" }, { "first": "Kevin", "middle": [ "K" ], "last": "Bowden", "suffix": "" }, { "first": "Marilyn", "middle": [ "A" ], "last": "Walker", "suffix": "" } ], "year": 2018, "venue": "Proceedings of NAACL-HLT", "volume": "", "issue": "", "pages": "152--162", "other_ids": {}, "num": null, "urls": [], "raw_text": "Juraj Juraska, Panagiotis Karagiannis, Kevin K. Bowden, and Marilyn A. Walker. 2018. A deep ensemble model with slot alignment for sequence-to-sequence natural language generation. In Proceedings of NAACL-HLT, pages 152-162.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A global model for concept-to-text generation", "authors": [ { "first": "Ioannis", "middle": [], "last": "Konstas", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2013, "venue": "Journal of Artificial Intelligence Research", "volume": "48", "issue": "", "pages": "305--346", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ioannis Konstas and Mirella Lapata. 2013. A global model for concept-to-text generation. Journal of Artificial Intelligence Research, 48:305-346.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Stochastic language generation in dialogue using factored language models", "authors": [ { "first": "Fran\u00e7ois", "middle": [], "last": "Mairesse", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Young", "suffix": "" } ], "year": 2014, "venue": "Computational Linguistics", "volume": "40", "issue": "4", "pages": "763--799", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fran\u00e7ois Mairesse and Steve Young. 2014. Stochastic language generation in dialogue using factored language models. Computational Linguistics, 40(4):763-799.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A simple recipe towards reducing hallucination in neural surface realisation", "authors": [ { "first": "Feng", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Jin-Ge", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Jinpeng", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL-19)", "volume": "", "issue": "", "pages": "2673--2679", "other_ids": {}, "num": null, "urls": [], "raw_text": "Feng Nie, Jin-Ge Yao, Jinpeng Wang, Rong Pan, and Chin-Yew Lin. 2019. A simple recipe towards reducing hallucination in neural surface realisation. 
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL-19), pages 2673-2679, Florence, Italy.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Stochastic language generation for spoken dialogue systems", "authors": [ { "first": "Alice", "middle": [ "H" ], "last": "Oh", "suffix": "" }, { "first": "Alexander", "middle": [ "I" ], "last": "Rudnicky", "suffix": "" } ], "year": 2000, "venue": "ANLP-NAACL 2000 Workshop: Conversational Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alice H Oh and Alexander I Rudnicky. 2000. Stochastic language generation for spoken dialogue systems. In ANLP-NAACL 2000 Workshop: Conversational Systems.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Sequence level training with recurrent neural networks", "authors": [ { "first": "Marc'Aurelio", "middle": [], "last": "Ranzato", "suffix": "" }, { "first": "Sumit", "middle": [], "last": "Chopra", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" }, { "first": "Wojciech", "middle": [], "last": "Zaremba", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1511.06732" ] }, "num": null, "urls": [], "raw_text": "Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2015. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Trainable approaches to surface natural language generation and their application to conversational dialog systems", "authors": [ { "first": "Adwait", "middle": [], "last": "Ratnaparkhi", "suffix": "" } ], "year": 2002, "venue": "Computer Speech & Language", "volume": "16", "issue": "3-4", "pages": "435--455", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adwait Ratnaparkhi. 2002. Trainable approaches to surface natural language generation and their application to conversational dialog systems. Computer Speech & Language, 16(3-4):435-455.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Trainable sentence planning for complex information presentations in spoken dialog systems", "authors": [ { "first": "Amanda", "middle": [], "last": "Stent", "suffix": "" }, { "first": "Rashmi", "middle": [], "last": "Prasad", "suffix": "" }, { "first": "Marilyn", "middle": [], "last": "Walker", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04)", "volume": "", "issue": "", "pages": "79--86", "other_ids": { "DOI": [ "10.3115/1218955.1218966" ] }, "num": null, "urls": [], "raw_text": "Amanda Stent, Rashmi Prasad, and Marilyn Walker. 2004. Trainable sentence planning for complex information presentations in spoken dialog systems. 
In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04), pages 79-86, Barcelona, Spain.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Natural language generation for spoken dialogue system using rnn encoder-decoder networks", "authors": [ { "first": "Van-Khanh", "middle": [], "last": "Tran", "suffix": "" }, { "first": "Le-Minh", "middle": [], "last": "Nguyen", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1706.00139" ] }, "num": null, "urls": [], "raw_text": "Van-Khanh Tran and Le-Minh Nguyen. 2017a. Natural language generation for spoken dialogue system using rnn encoder-decoder networks. arXiv preprint arXiv:1706.00139.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Neural-based natural language generation in dialogue using rnn encoder-decoder with semantic aggregation", "authors": [ { "first": "Van-Khanh", "middle": [], "last": "Tran", "suffix": "" }, { "first": "Le-Minh", "middle": [], "last": "Nguyen", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1706.06714" ] }, "num": null, "urls": [], "raw_text": "Van-Khanh Tran and Le-Minh Nguyen. 2017b. Neural-based natural language generation in dialogue using rnn encoder-decoder with semantic aggregation. arXiv preprint arXiv:1706.06714.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Training a sentence planner for spoken dialogue using boosting", "authors": [ { "first": "Marilyn", "middle": [ "A" ], "last": "Walker", "suffix": "" }, { "first": "Owen", "middle": [ "C" ], "last": "Rambow", "suffix": "" }, { "first": "Monica", "middle": [], "last": "Rogati", "suffix": "" } ], "year": 2002, "venue": "Computer Speech & Language", "volume": "16", "issue": "3-4", "pages": "409--433", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marilyn A Walker, Owen C Rambow, and Monica Rogati. 2002. Training a sentence planner for spoken dialogue using boosting. Computer Speech & Language, 16(3-4):409-433.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "The optimal reward baseline for gradient-based reinforcement learning", "authors": [ { "first": "Lex", "middle": [], "last": "Weaver", "suffix": "" }, { "first": "Nigel", "middle": [], "last": "Tao", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the Seventeenth conference on Uncertainty in artificial intelligence", "volume": "", "issue": "", "pages": "538--545", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lex Weaver and Nigel Tao. 2001. The optimal reward baseline for gradient-based reinforcement learning. 
In Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence, pages 538-545.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Stochastic language generation in dialogue using recurrent neural networks with convolutional sentence reranking", "authors": [ { "first": "Tsung-Hsien", "middle": [], "last": "Wen", "suffix": "" }, { "first": "Milica", "middle": [], "last": "Gasic", "suffix": "" }, { "first": "Dongho", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Nikola", "middle": [], "last": "Mrksic", "suffix": "" }, { "first": "Pei-Hao", "middle": [], "last": "Su", "suffix": "" }, { "first": "David", "middle": [], "last": "Vandyke", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Young", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1508.01755" ] }, "num": null, "urls": [], "raw_text": "Tsung-Hsien Wen, Milica Gasic, Dongho Kim, Nikola Mrksic, Pei-Hao Su, David Vandyke, and Steve Young. 2015a. Stochastic language generation in dialogue using recurrent neural networks with convolutional sentence reranking. arXiv preprint arXiv:1508.01755.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Multi-domain neural network language generation for spoken dialogue systems", "authors": [ { "first": "Tsung-Hsien", "middle": [], "last": "Wen", "suffix": "" }, { "first": "Milica", "middle": [], "last": "Gasic", "suffix": "" }, { "first": "Nikola", "middle": [], "last": "Mrksic", "suffix": "" }, { "first": "Lina", "middle": [ "M" ], "last": "Rojas-Barahona", "suffix": "" }, { "first": "Pei-Hao", "middle": [], "last": "Su", "suffix": "" }, { "first": "David", "middle": [], "last": "Vandyke", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Young", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1603.01232" ] }, "num": null, "urls": [], "raw_text": "Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Lina M Rojas-Barahona, Pei-Hao Su, David Vandyke, and Steve Young. 2016. Multi-domain neural network language generation for spoken dialogue systems. arXiv preprint arXiv:1603.01232.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Semantically conditioned lstm-based natural language generation for spoken dialogue systems", "authors": [ { "first": "Tsung-Hsien", "middle": [], "last": "Wen", "suffix": "" }, { "first": "Milica", "middle": [], "last": "Gasic", "suffix": "" }, { "first": "Nikola", "middle": [], "last": "Mrksic", "suffix": "" }, { "first": "Pei-Hao", "middle": [], "last": "Su", "suffix": "" }, { "first": "David", "middle": [], "last": "Vandyke", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Young", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1508.01745" ] }, "num": null, "urls": [], "raw_text": "Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Pei-Hao Su, David Vandyke, and Steve Young. 2015b. Semantically conditioned lstm-based natural language generation for spoken dialogue systems.
arXiv preprint arXiv:1508.01745.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "authors": [ { "first": "Ronald", "middle": [ "J" ], "last": "Williams", "suffix": "" } ], "year": 1992, "venue": "Machine Learning", "volume": "8", "issue": "", "pages": "229--256", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronald J. Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229-256.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Generation by inverting a semantic parser that uses statistical machine translation", "authors": [ { "first": "Yuk", "middle": [ "Wah" ], "last": "Wong", "suffix": "" }, { "first": "Raymond", "middle": [], "last": "Mooney", "suffix": "" } ], "year": 2007, "venue": "Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference", "volume": "", "issue": "", "pages": "172--179", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuk Wah Wong and Raymond Mooney. 2007. Generation by inverting a semantic parser that uses statistical machine translation. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 172-179.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Global-to-local memory pointer networks for task-oriented dialogue", "authors": [ { "first": "Chien-Sheng", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Xiong", "suffix": "" } ], "year": 2019, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chien-Sheng Wu, Richard Socher, and Caiming Xiong. 2019. Global-to-local memory pointer networks for task-oriented dialogue. In International Conference on Learning Representations.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Deliberation networks: Sequence generation beyond one-pass decoding", "authors": [ { "first": "Yingce", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Tian", "suffix": "" }, { "first": "Lijun", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Jianxin", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Nenghai", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Tie-Yan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "1784--1794", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yingce Xia, Fei Tian, Lijun Wu, Jianxin Lin, Tao Qin, Nenghai Yu, and Tie-Yan Liu. 2017. Deliberation networks: Sequence generation beyond one-pass decoding. In Advances in Neural Information Processing Systems, pages 1784-1794.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "text": "Correcting a candidate given a reference template.
d_c, d_l, and d_\u03c0 are inferred by simple rules.", "num": null }, "TABREF1": { "type_str": "table", "num": null, "content": "
", "html": null, "text": "An example (including mistaken generations) extracted from the SF Hotel (Wen et al., 2015b) dataset. Errors are marked in colors (missing, misplaced)." }, "TABREF2": { "type_str": "table", "num": null, "content": "
", "html": null, "text": "that consists of an act type and some slots. Universal set S con-tains all possible slots. The output template y" }, "TABREF6": { "type_str": "table", "num": null, "content": "
Model | SF Restaurant BLEU, ERR | SF Hotel BLEU, ERR | Laptop BLEU, ERR | Television BLEU, ERR
HLSTM (Wen et al., 2015a) | 0.747, 0.74% | 0.850, 2.67% | 0.513, 1.10% | 0.525, 2.50%
SCLSTM (Wen et al., 2015b) | 0.753, 0.38% | 0.848, 3.07% | 0.512, 0.79% | 0.527, 2.31%
TGen (Du\u0161ek and Jur\u010d\u00ed\u010dek, 2016) | 0.751, 0.84% | 0.853, 4.14% | 0.515, 0.87% | 0.521, 2.32%
ARoA (Tran and Nguyen, 2017b) | 0.776, 0.30% | 0.892, 1.13% | 0.522, 0.50% | 0.539, 0.60%
RALSTM (Tran and Nguyen, 2017a) | 0.779, 0.16% | 0.898, 0.43% | 0.525, 0.42% | 0.541, 0.63%
IRN (+ KNN) | 0.807, 0.11% | 0.911, 0.32% | 0.537, 0.29% | 0.559, 0.35%
", "html": null, "text": "" }, "TABREF7": { "type_str": "table", "num": null, "content": "
", "html": null, "text": "Experimental results on four datasets for all baselines and our model. The improvements over all prior methods are statistically significant with p < 0.01 under t-test." }, "TABREF9": { "type_str": "table", "num": null, "content": "
Method | Laptop BLEU | Laptop SER
IRN (+ KNN) | 0.537 | 0.29%
w/o IRN | 0.414 | 0.88%
w/o reward r_SC | 0.526 | 0.75%
w/o reward r_DS | 0.527 | 0.66%
w/o reward r_LM | 0.529 | 0.49%
w/o baseline BLEU | 0.531 | 0.37%
w/o Aggregation | 0.515 | 0.48%
w/o Bootstrapping | 0.464 | 0.83%
", "html": null, "text": "The up and down arrows emphasize the absolutely improved performances contributed by IRN." }, "TABREF10": { "type_str": "table", "num": null, "content": "
", "html": null, "text": "Ablation study of rewards (upper part) and training data algorithms (lower part)." }, "TABREF12": { "type_str": "table", "num": null, "content": "
", "html": null, "text": "Real user trial for generation quality evaluation on both informativeness and naturalness." }, "TABREF13": { "type_str": "table", "num": null, "content": "
", "html": null, "text": "A DA from Television dataset and a candidate from HLSTM on the DA. The output template from each iteration of IRN. Slot errors are marked in colors (missing, misplaced)." } } } }