{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:28:34.545389Z" }, "title": "PARENTing via Model-Agnostic Reinforcement Learning to Correct Pathological Behaviors in Data-to-Text Generation", "authors": [ { "first": "Cl\u00e9ment", "middle": [], "last": "Rebuffel", "suffix": "", "affiliation": { "laboratory": "LIP6", "institution": "Sorbonne Universit\u00e9", "location": { "country": "France" } }, "email": "" }, { "first": "Laure", "middle": [], "last": "Soulier", "suffix": "", "affiliation": { "laboratory": "LIP6", "institution": "Sorbonne Universit\u00e9", "location": { "country": "France" } }, "email": "" }, { "first": "Geoffrey", "middle": [], "last": "Scoutheeten", "suffix": "", "affiliation": { "laboratory": "", "institution": "BNP Paribas", "location": { "country": "France" } }, "email": "" }, { "first": "Patrick", "middle": [], "last": "Gallinari", "suffix": "", "affiliation": { "laboratory": "LIP6", "institution": "Sorbonne Universit\u00e9", "location": { "country": "France" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In language generation models conditioned by structured data, the classical training via maximum likelihood almost always leads models to pick up on dataset divergence (i.e., hallucinations or omissions), and to incorporate them erroneously in their own generations at inference. In this work, we build ontop of previous Reinforcement Learning based approaches and show that a model-agnostic framework relying on the recently introduced PARENT metric is efficient at reducing both hallucinations and omissions. Evaluations on the widely used WikiBIO and WebNLG benchmarks demonstrate the effectiveness of this framework compared to state-of-the-art models.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "In language generation models conditioned by structured data, the classical training via maximum likelihood almost always leads models to pick up on dataset divergence (i.e., hallucinations or omissions), and to incorporate them erroneously in their own generations at inference. In this work, we build ontop of previous Reinforcement Learning based approaches and show that a model-agnostic framework relying on the recently introduced PARENT metric is efficient at reducing both hallucinations and omissions. Evaluations on the widely used WikiBIO and WebNLG benchmarks demonstrate the effectiveness of this framework compared to state-of-the-art models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Data-to-Text aims at generating natural language descriptions from structured data (Reiter et al., 2005) ; fostered by recent advances on neural approaches and the emergence of large scale datasets made of (structured-data, reference text) pairs (Lebret et al., 2016; Gardent and Perez-Beltrachini, 2017; Wiseman et al., 2017) . Figure 1 illustrates an example from the WikiBIO dataset (Lebret et al., 2016) . These datasets are either hand-crafted via crowdworkers or automatically built by aligning sources found on the Internet. As such, reference texts might include divergences of two types, limiting the ability of generation models to produce realistic descriptions. First, reference texts might contain information not grounded in the source data; especially for automatically constructed datasets, where references were not written with the sourcedata description task in mind. 
For instance, the phrase \"who served as lieutenant [...] \" in Figure 1 has no basis in the associated infobox. Second, reference texts do not always cover the entirety of the table (items Battles/wars in Figure 1). In most settings, this second point is referred to as content selection and is inherent to most data-to-text tasks. However, some hand-crafted datasets are designed such that annotators are asked to transcribe every field, with models also expected to do the same. In this case, incomplete references (i.e., where some part of the source data is missing from the realization) can lead to models failing to learn to transcribe all information, and only partially covering data sources at inference. Divergence in training examples leads to hallucinated/omitted content in model output, a well-known problem in neural approaches for text generation (Rohrbach et al., 2018). This problem arises both from the training procedure (training via maximum likelihood leads to language models strongly mimicking human behaviors) and from the testing protocols. Indeed, current standard metrics only measure similarity (such as BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), METEOR (Banerjee and Lavie, 2005)) to ground-truth reference texts and do not fully capture relevance to the source data. Thus, there is no distinction between a mismatch caused by a paraphrase, a poor lexicalization of content, or a made-up/incorrect statement, leading to imperfect model selection. While a number of works argue for the need for novel automatic evaluation methods (Reiter and Belz, 2009; Reiter, 2018; Novikova et al., 2017), to the best of our knowledge only Wiseman et al. (2017) and Dhingra et al. (2019) propose metrics based on both the reference and the source data. Recently, different regularization methods have also been proposed to mitigate the negative influence of divergences in reference texts. These approaches operate either at the dataset level (Du\u0161ek et al., 2019), where authors propose techniques to clean/standardize instances, or at the training level (Tian et al., 2019), where authors propose novel neural modules designed to limit hallucinations/omissions. However, these approaches are severely limited: e.g., they require significant annotation labor, model-specific tricks and/or manual tuning. Furthermore, virtually all proposed neural approaches still suffer from 1) exposure bias and 2) inconsistency between training and test measurements. Indeed, current neural models are trained via a mechanism called teacher forcing (Williams and Zipser, 1989), where the decoder is fed the previous correct token, no matter its actual prediction (1), in order to maximize the log-likelihood of the target sentence (including divergent phrases), but are evaluated through the previously discussed n-gram metrics (2). 
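To make the teacher-forcing issue concrete, here is a minimal PyTorch-style sketch of one maximum-likelihood training step; the `decoder` callable and tensor shapes are illustrative assumptions, not the architecture of any specific system from this paper. The decoder always consumes the gold prefix, so divergent reference tokens are optimized for exactly like grounded ones.

```python
import torch.nn.functional as F

def teacher_forcing_step(decoder, source_encoding, reference, pad_idx=0):
    # Shifted gold tokens: the decoder reads reference[:, :-1] and must
    # predict reference[:, 1:], regardless of its own earlier predictions.
    inputs, targets = reference[:, :-1], reference[:, 1:]
    logits = decoder(inputs, source_encoding)  # (batch, seq_len, vocab)
    # Hallucinated or divergent reference tokens contribute to the loss
    # exactly like grounded ones -- the objective cannot tell them apart.
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
        ignore_index=pad_idx,
    )
```

At inference the gold prefix is unavailable, which is the exposure bias discussed in Section 3.3.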
See Section 3.3 for a more detailed discussion of this subject.", "cite_spans": [ { "start": 83, "end": 104, "text": "(Reiter et al., 2005)", "ref_id": "BIBREF34" }, { "start": 246, "end": 267, "text": "(Lebret et al., 2016;", "ref_id": "BIBREF13" }, { "start": 268, "end": 304, "text": "Gardent and Perez-Beltrachini, 2017;", "ref_id": "BIBREF7" }, { "start": 305, "end": 326, "text": "Wiseman et al., 2017)", "ref_id": "BIBREF44" }, { "start": 386, "end": 407, "text": "(Lebret et al., 2016)", "ref_id": "BIBREF13" }, { "start": 1751, "end": 1774, "text": "(Rohrbach et al., 2018)", "ref_id": "BIBREF36" }, { "start": 2028, "end": 2051, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF24" }, { "start": 2060, "end": 2071, "text": "(Lin, 2004)", "ref_id": "BIBREF15" }, { "start": 2081, "end": 2107, "text": "(Banerjee and Lavie, 2005)", "ref_id": "BIBREF2" }, { "start": 2452, "end": 2475, "text": "(Reiter and Belz, 2009;", "ref_id": "BIBREF32" }, { "start": 2476, "end": 2489, "text": "Reiter, 2018;", "ref_id": "BIBREF31" }, { "start": 2490, "end": 2512, "text": "Novikova et al., 2017)", "ref_id": "BIBREF23" }, { "start": 2549, "end": 2570, "text": "Wiseman et al. (2017)", "ref_id": "BIBREF44" }, { "start": 2575, "end": 2596, "text": "Dhingra et al. (2019)", "ref_id": "BIBREF4" }, { "start": 2851, "end": 2871, "text": "(Du\u0161ek et al., 2019)", "ref_id": null }, { "start": 2964, "end": 2983, "text": "(Tian et al., 2019)", "ref_id": null }, { "start": 3437, "end": 3464, "text": "(Williams and Zipser, 1989)", "ref_id": "BIBREF42" } ], "ref_spans": [ { "start": 329, "end": 337, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 949, "end": 957, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 1091, "end": 1100, "text": "Figure 1)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To the best of our knowledge, there have been few approaches (Liu et al., 2019a,b) focused on the training procedure. Liu et al. (2019a) train a hierarchical encoder-decoder on three auxiliary tasks (namely sequence labeling, text auto-encoding and multi-label classification) which are meant to guide the decoding process. Closest to our work, Liu et al. (2019b) propose a novel neural module for constrained attention, along with a reinforcement learning (RL) training procedure based on BLEU and TFIDF. In our work, to remedy the above shortcomings, we build upon the work of Liu et al. (2019b) and show that no novel neural module is necessary to handle hallucinations and omissions. We propose a model-agnostic RL framework, called PARENTing, where pretrained models are further trained with a self-critical policy gradient algorithm (Rennie et al., 2016) to limit the impact of divergences in training examples on text generation. Specifically, we use the PARENT metric (Dhingra et al., 2019), which exhibits a strong correlation with human evaluation while being easier to use out of the box. We provide extensive automatic evaluations on two data-to-text model families (LSTMs and Transformers) on two widely used benchmarks (WikiBIO and WebNLG), as well as a more focused human evaluation on WikiBIO. 
We report new state-of-the-art PARENT scores on both datasets, while BLEU scores are on par with previous SOTA approaches, which shows that our framework efficiently reduces pathological behaviors while keeping generation fluent.", "cite_spans": [ { "start": 61, "end": 82, "text": "(Liu et al., 2019a,b)", "ref_id": null }, { "start": 118, "end": 136, "text": "Liu et al. (2019a)", "ref_id": "BIBREF17" }, { "start": 347, "end": 365, "text": "Liu et al. (2019b)", "ref_id": "BIBREF18" }, { "start": 584, "end": 602, "text": "Liu et al. (2019b)", "ref_id": "BIBREF18" }, { "start": 845, "end": 866, "text": "(Rennie et al., 2016)", "ref_id": "BIBREF35" }, { "start": 982, "end": 1004, "text": "(Dhingra et al., 2019)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Data-to-text models can be classified into two broad categories: knowledge-based models and stochastic/data-driven approaches (Gatt and Krahmer, 2018). The former approaches (Reiter and Dale, 2000) are driven by experts' knowledge, leading to a pipeline architecture split into subtasks: content selection and text structuring (macroplanning), sentence planning (microplanning) and generating actual sentences (surface realisation). While accurate and efficient at inference time, these methods require significant manual effort for new use cases. In contrast, data-driven approaches tend to blur the distinction between these subtasks with end-to-end training on large corpora of aligned input data and output text (Gatt and Krahmer, 2018). End-to-end methods were proposed early on, such as Chen and Mooney (2008), who apply statistical machine translation techniques to the sportscasting domain. Recent neural approaches propose to leverage progress in deep learning to represent these data in a semantic vector space (also called embedding space) and stem from the neural machine translation domain (Lebret et al., 2016; Puduppully et al., 2019; Wiseman et al., 2017). In particular, Wiseman et al. (2017) propose what is now the default backbone data-to-text architecture, with an attention mechanism (Bahdanau et al., 2014), which computes a context focused on important elements from the input, and a copy mechanism (Gulcehre et al., 2016; See et al., 2017) to deal with unknown or rare words.", "cite_spans": [ { "start": 124, "end": 148, "text": "(Gatt and Krahmer, 2018)", "ref_id": "BIBREF8" }, { "start": 173, "end": 196, "text": "(Reiter and Dale, 2000)", "ref_id": "BIBREF33" }, { "start": 716, "end": 740, "text": "(Gatt and Krahmer, 2018)", "ref_id": "BIBREF8" }, { "start": 1111, "end": 1132, "text": "(Lebret et al., 2016;", "ref_id": "BIBREF13" }, { "start": 1133, "end": 1157, "text": "Puduppully et al., 2019;", "ref_id": "BIBREF28" }, { "start": 1158, "end": 1179, "text": "Wiseman et al., 2017)", "ref_id": "BIBREF44" }, { "start": 1196, "end": 1217, "text": "Wiseman et al. 
(2017)", "ref_id": "BIBREF44" }, { "start": 1310, "end": 1333, "text": "(Bahdanau et al., 2014)", "ref_id": "BIBREF1" }, { "start": 1428, "end": 1451, "text": "(Gulcehre et al., 2016;", "ref_id": "BIBREF9" }, { "start": 1452, "end": 1469, "text": "See et al., 2017)", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "Text Generation from Structured Data", "sec_num": "2.1" }, { "text": "To address domain-specific constraints, a common approach is to build architectures that explicitly model the key-value structure of the input table (Nie et al., 2018; Liu et al., 2018 Liu et al., , 2019a Rebuffel et al., 2020) . Additional work (Puduppully et al., 2019) introduces dynamic encoding updating, where the model updates part of the source data encoding at each decoding step in order to accurately guide the decoder throughout generation. While these models produce fluent and domaincomprehensive outputs, several pathological behaviors have been identified, echoing similar issues in other text generation tasks (e.g. in image captioning (Rohrbach et al., 2018) or in summarization (Kry\u015bci\u0144ski et al., 2019) ).", "cite_spans": [ { "start": 149, "end": 167, "text": "(Nie et al., 2018;", "ref_id": "BIBREF21" }, { "start": 168, "end": 184, "text": "Liu et al., 2018", "ref_id": "BIBREF19" }, { "start": 185, "end": 204, "text": "Liu et al., , 2019a", "ref_id": "BIBREF17" }, { "start": 205, "end": 227, "text": "Rebuffel et al., 2020)", "ref_id": "BIBREF30" }, { "start": 246, "end": 271, "text": "(Puduppully et al., 2019)", "ref_id": "BIBREF28" }, { "start": 653, "end": 676, "text": "(Rohrbach et al., 2018)", "ref_id": "BIBREF36" }, { "start": 697, "end": 722, "text": "(Kry\u015bci\u0144ski et al., 2019)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Text Generation from Structured Data", "sec_num": "2.1" }, { "text": "Data-to-Text", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pathological Hallucinations in", "sec_num": "2.2" }, { "text": "Training neural model on data-to-text tasks requires large corpora (Lebret et al., 2016; Novikova et al., 2017; Gardent and Perez-Beltrachini, 2017; Wiseman et al., 2017) . Different pathological behaviors arise from the datasets, depending on the methodology underlying their construction. First, for handcrafted datasets (Novikova et al., 2017; Gardent and Perez-Beltrachini, 2017) , crowdworkers sometimes fail to cover all information from the data source in reference text. Second, automatically constructed datasets from possibly different internet sources do not guarantee data sources and texts to be aligned completely. 
Both of these limitations induce neural generation models to omit information in the first case, or to suffer from hallucinations (i.e., they mistakenly learn to generate ungrounded/false statements) in the other.", "cite_spans": [ { "start": 67, "end": 88, "text": "(Lebret et al., 2016;", "ref_id": "BIBREF13" }, { "start": 89, "end": 111, "text": "Novikova et al., 2017;", "ref_id": "BIBREF23" }, { "start": 112, "end": 148, "text": "Gardent and Perez-Beltrachini, 2017;", "ref_id": "BIBREF7" }, { "start": 149, "end": 170, "text": "Wiseman et al., 2017)", "ref_id": "BIBREF44" }, { "start": 323, "end": 346, "text": "(Novikova et al., 2017;", "ref_id": "BIBREF23" }, { "start": 347, "end": 383, "text": "Gardent and Perez-Beltrachini, 2017)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Pathological Hallucinations in Data-to-Text", "sec_num": "2.2" }, { "text": "To deal with these pathologies, previous works operate either at the dataset level or at the training level. At the dataset level, Du\u0161ek et al. (2019) show that cleaned data can significantly improve a system's ability to produce fact-accurate text. In a different direction, Nie et al. (2019) apply a method similar to knowledge distillation (Hinton et al., 2015): they train a Natural Language Understanding module to reconstruct tables from text references and show that a vanilla sequence-to-sequence model trained on the refined data has improved content correctness in both human and automatic evaluations. At the training level, Wiseman et al. (2017), for instance, propose to include a reconstruction loss aiming at reconstructing the source table from the hidden states of the decoder. In another direction, Perez-Beltrachini and Lapata (2018) propose a classifying neural network, trained (using a manually annotated dataset) to label text tokens depending on their alignment with the associated table. They use these labels in an RL framework to generate sentences with a maximum of aligned tokens. However, these approaches are either costly in human labor or specific to hand-crafted datasets where the input data exactly matches the reference texts (thus dealing with omissions but not hallucinations). Indeed, reconstruction tasks are not compatible with the content selection subtask of data-to-text.", "cite_spans": [ { "start": 131, "end": 150, "text": "Du\u0161ek et al. (2019)", "ref_id": null }, { "start": 336, "end": 357, "text": "(Hinton et al., 2015)", "ref_id": "BIBREF10" }, { "start": 630, "end": 651, "text": "Wiseman et al. (2017)", "ref_id": "BIBREF44" }, { "start": 813, "end": 848, "text": "Perez-Beltrachini and Lapata (2018)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Pathological Hallucinations in Data-to-Text", "sec_num": "2.2" }, { "text": "Proposing both a novel coverage-constrained attention and a BLEU/TFIDF-based reward, Liu et al. (2019b) constitute a first approach to a model-agnostic framework. However, their proposed coverage is still task-specific (and goes against content selection): while they increase the state-of-the-art BLEU on WikiBIO, they underperform encoder-decoder models on the PARENT benchmark.", "cite_spans": [ { "start": 85, "end": 103, "text": "(Liu et al., 2019b", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Pathological Hallucinations in Data-to-Text", "sec_num": "2.2" }, { "text": "Until recently, the NLG research community sorely lacked ways to automatically evaluate model outputs. 
Despite work on effective human evaluation (Amidei et al., 2019), and on the need for better automated metrics (Reiter and Belz, 2009; Novikova et al., 2017), to the best of our knowledge only Wiseman et al. (2017) and Dhingra et al. (2019) have recently proposed improvements over the widely used BLEU. Wiseman et al. (2017) propose to use an auxiliary neural model, trained to extract structured records from the generated text, for evaluation. Two texts can then be compared through their sequences of extracted records. This information retrieval-based approach suffers from domain specificity, as the released model only works in the closed domain of journalistic basketball summaries, and requires precise tagging of gold references, which can be impossible to provide in most settings. Furthermore, Dhingra et al. (2019) propose a new metric, PARENT, and show that it strongly correlates with human annotators and can replace previous n-gram- and information retrieval-based metrics.", "cite_spans": [ { "start": 147, "end": 168, "text": "(Amidei et al., 2019)", "ref_id": "BIBREF0" }, { "start": 216, "end": 239, "text": "(Reiter and Belz, 2009;", "ref_id": "BIBREF32" }, { "start": 240, "end": 262, "text": "Novikova et al., 2017)", "ref_id": "BIBREF23" }, { "start": 300, "end": 321, "text": "Wiseman et al. (2017)", "ref_id": "BIBREF44" }, { "start": 326, "end": 347, "text": "Dhingra et al. (2019)", "ref_id": "BIBREF4" }, { "start": 405, "end": 426, "text": "Wiseman et al. (2017)", "ref_id": "BIBREF44" } ], "ref_spans": [], "eq_spans": [], "section": "Pathological Hallucinations in Data-to-Text", "sec_num": "2.2" }, { "text": "Our contribution differs from previous work in several aspects. First, our proposed framework is model-agnostic and can be used with any neural model. Second, instead of focusing on only one domain and/or one issue (e.g., omissions in hand-crafted datasets or hallucinations in automatically constructed datasets), it is setting-agnostic and tackles both hallucinations and omissions at once by leveraging the PARENT F-score (which combines precision and coverage against the source data). Finally, no manual preprocessing or pre-tagging is required: models are trained via a flexible training protocol and distance themselves from faulty training examples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pathological Hallucinations in Data-to-Text", "sec_num": "2.2" }, { "text": "3 Model-Agnostic Reinforcement Learning for Reducing Divergences", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pathological Hallucinations in Data-to-Text", "sec_num": "2.2" }, { "text": "We propose PARENTing, a model-agnostic RL framework for data-to-text aiming at reducing divergences. It is based on the self-critical policy gradient algorithm (Paulus et al., 2018) and leverages the PARENT metric (Dhingra et al., 2019).", "cite_spans": [ { "start": 160, "end": 181, "text": "(Paulus et al., 2018)", "ref_id": null }, { "start": 214, "end": 236, "text": "(Dhingra et al., 2019)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Pathological Hallucinations in Data-to-Text", "sec_num": "2.2" }, { "text": "Notations. We consider the general setting of data-to-text and the notations introduced by Dhingra et al. (2019). 
Let us consider a dataset of J pairs (structured-data, reference), denoted D = {(T_j, R_j)}_{j=1}^{J}, where:", "cite_spans": [ { "start": 106, "end": 112, "text": "(2019)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Background and Notations", "sec_num": "3.1" }, { "text": "\u2022 T := {r_k}_{k=1}^{K} is a collection of K records (entity, attribute, value), where K varies among instances;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background and Notations", "sec_num": "3.1" }, { "text": "\u2022 R := [y*_1, ..., y*_L] is the reference text associated to T, composed of L tokens y*, where L varies among instances. We also consider a data-to-text neural model, denoted f_\u03b8 (where \u03b8 are the model parameters), pretrained to maximize the likelihood of the reference via teacher forcing (Williams and Zipser, 1989):", "cite_spans": [ { "start": 274, "end": 301, "text": "(Williams and Zipser, 1989)", "ref_id": "BIBREF42" } ], "ref_spans": [], "eq_spans": [], "section": "Background and Notations", "sec_num": "3.1" }, { "text": "L_{ml} = \u2212 \u2211_{t=1}^{L} log f(y*_t | y*_{t\u22121}, ..., y*_1, T, \u03b8) (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background and Notations", "sec_num": "3.1" }, { "text": "PARENT metric. PARENT (Precision And Recall of Entailed N-grams from the Table) (Dhingra et al., 2019) aims at evaluating the precision and recall/coverage of a candidate generation G given the (source table, reference) pair (T, R) via n-gram (n = 1, ..., 4) comparison. This metric is divided into three scores:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background and Notations", "sec_num": "3.1" }, { "text": "\u2022 Entailed precision E_p: the fraction of n-grams from G which are either found in R or entailed by T;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background and Notations", "sec_num": "3.1" }, { "text": "\u2022 Entailed recall/coverage E_r. Recall E_r(R) is the fraction of n-grams from R \u2229 T which are found in G; coverage E_r(T) is the fraction of n-grams from T which are found in G. Recall and coverage are combined using a geometric average:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background and Notations", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "E_r := E_r(R)^{\u03bb} E_r(T)^{1\u2212\u03bb}", "eq_num": "(2)" } ], "section": "Background and Notations", "sec_num": "3.1" }, { "text": "\u2022 F-score: the harmonic combination of precision and recall.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background and Notations", "sec_num": "3.1" }, { "text": "Our framework for reducing pathological behaviors is based on the following research objectives:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview and Research Objectives", "sec_num": "3.2" }, { "text": "\u2022 O1: the framework should be generic and should work with any neural model;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview and Research Objectives", "sec_num": "3.2" }, { "text": "\u2022 O2: the model should try to distance itself from the reference enough to stop mimicking problematic behaviors;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview and Research Objectives", "sec_num": "3.2" }, { "text": "\u2022 O3: by combining precision, recall and coverage, PARENT is a good proxy for human assessment of a candidate text against its source data and reference (Dhingra et al., 2019; Tian et al., 2019);", "cite_spans": [ { "start": 153, "end": 175, "text": "(Dhingra et al., 2019;", "ref_id": "BIBREF4" }, { "start": 176, "end": 194, "text": "Tian et al., 2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Overview and Research Objectives", "sec_num": "3.2" }, { "text": "\u2022 O4: discrete metrics can be gamed to artificially increase the score while not gaining in readability or relevance. Therefore, we propose a training protocol similar to that of Paulus et al. (2018), with a mixed objective function combining the standard maximum-likelihood loss L_{ml} with a custom reinforcement loss L_{rl}. This ensures that models do not lose fluency by gaming the discrete metric (objective O4):", "cite_spans": [ { "start": 172, "end": 193, "text": "(Paulus et al., 2018)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Overview and Research Objectives", "sec_num": "3.2" }, { "text": "L := \u03b3 L_{rl} + (1 \u2212 \u03b3) L_{ml} (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview and Research Objectives", "sec_num": "3.2" }, { "text": "where \u03b3 is a weight factor. We note that O1 is satisfied, as this loss function can be applied to train any neural model f_\u03b8. In what follows, we give a description of the proposed reinforcement learning framework.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview and Research Objectives", "sec_num": "3.2" }, 
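For concreteness, below is a deliberately simplified, self-contained sketch of the PARENT components defined in Section 3.1. This is a crude stand-in, not the official implementation: the metric of Dhingra et al. (2019) uses a soft word-overlap entailment model and per-value longest-common-subsequence coverage, whereas here an n-gram counts as entailed simply when all of its tokens occur among the table values.

```python
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def parent_style_f_score(generation, reference, table_tokens, lam=0.5, max_n=4):
    # Crude entailment model: an n-gram is "entailed" by the table when
    # all of its tokens occur among the table values.
    table = set(table_tokens)
    entailed = lambda gram: all(tok in table for tok in gram)
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        g_counts = Counter(ngrams(generation, n))
        r_counts = Counter(ngrams(reference, n))
        # Entailed precision E_p: n-grams of G found in R or entailed by T.
        total_g = sum(g_counts.values()) or 1
        e_p = sum(c for g, c in g_counts.items()
                  if g in r_counts or entailed(g)) / total_g
        # Recall E_r(R): entailed n-grams of R recovered in G (clipped counts).
        r_ent = {g: c for g, c in r_counts.items() if entailed(g)}
        total_r = sum(r_ent.values()) or 1
        e_r_ref = sum(min(c, g_counts[g]) for g, c in r_ent.items()) / total_r
        # Coverage E_r(T), here reduced to token overlap with the table.
        e_r_tab = len(table & set(generation)) / (len(table) or 1)
        # Eq. (2): geometric combination of recall and coverage.
        e_r = (e_r_ref ** lam) * (e_r_tab ** (1 - lam))
        precisions.append(e_p)
        recalls.append(e_r)
    p, r = sum(precisions) / max_n, sum(recalls) / max_n
    return 2 * p * r / (p + r) if p + r else 0.0  # harmonic F-score
```

With lam = 1 (the training-time setting discussed in Section 4.4), the coverage factor of Eq. (2) drops out and E_r reduces to the reference recall term.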
{ "text": "Numerous works (Wu et al., 2016; Rennie et al., 2016) have outlined that training via teacher forcing (maximizing the log-likelihood of reference texts) does not always produce the best results on evaluation metrics. This is in part due to exposure bias (Ranzato et al., 2016), where models are trained using the true gold sequence during training and are never exposed to their possible mistakes. We therefore propose to alter the standard rigid training protocol and further train models via reinforcement learning as a counter-measure to these issues, so that models can learn a more flexible policy based on a metric more representative of human judgment, satisfying objective O2. Following objective O3, we shape our reward around the PARENT metric, which has been shown to strongly correlate with human judgment in terms of precision and recall of a generated text against a source table and a reference. Models are somewhat overfitted to our training set due to pretraining, and are hence at risk of earning high rewards on easy examples (i.e., with faithful reference targets) and poor rewards on hard examples (i.e., with divergent reference targets). To deal with this issue and ensure that the reward reflects the actual improvement made over the pretraining, we propose to follow a growing body of work in text summarization (Paulus et al., 2018; Scialom et al., 2019) and apply the self-critical policy gradient training protocol (Rennie et al., 2016), using the REINFORCE (Williams and Peng, 1991) algorithm. More particularly, models are now sampled using their Markov property, that is, one token at a time, computing the next token distribution given the previously chosen tokens. A first candidate sequence Y^c is randomly sampled following the output distribution. A second baseline sequence Y^b is generated, this time via greedy decoding (mimicking beam search generation during inference, with a beam of size 1). This baseline sequence acts as a difficulty proxy for the current training instance. The reward given to the candidate sequence is the improvement in PARENT score it brings over the baseline sequence:", "cite_spans": [ { "start": 14, "end": 31, "text": "(Wu et al., 2016;", "ref_id": null }, { "start": 32, "end": 52, "text": "Rennie et al., 2016)", "ref_id": "BIBREF35" }, { "start": 252, "end": 274, "text": "(Ranzato et al., 2016)", "ref_id": "BIBREF29" }, { "start": 1336, "end": 1357, "text": "(Paulus et al., 2018;", "ref_id": null }, { "start": 1358, "end": 1379, "text": "Scialom et al., 2019)", "ref_id": "BIBREF37" }, { "start": 1485, "end": 1510, "text": "(Williams and Peng, 1991)", "ref_id": "BIBREF43" } ], "ref_spans": [], "eq_spans": [], "section": "PARENTing: Self-critical Gradient Policy Learning", "sec_num": "3.3" }, { "text": "r(Y^c) = PARENT(Y^c) \u2212 PARENT(Y^b) (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PARENTing: Self-critical Gradient Policy Learning", "sec_num": "3.3" }, { "text": "Finally, the loss to be minimized during this part of training is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PARENTing: Self-critical Gradient Policy Learning", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L_{rl} = \u2212 r(Y^c) \u2211_{t=1}^{L} log f(y^c_t | y^c_{t\u22121}, ..., y^c_1, T, \u03b8)", "eq_num": "(5)" } ], "section": "PARENTing: Self-critical Gradient Policy Learning", "sec_num": "3.3" }, 
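The whole update can be summarized in a few lines. The sketch below is a schematic rendering of Eqs. (3)-(5) under stated assumptions: `sample_with_log_probs`, `greedy_decode`, `mle_loss` and `parent_f_score` are hypothetical helpers around the pretrained model, not part of any released API, and the value gamma = 0.9 matches the setting retained later in Section 4.4.

```python
import torch

def parenting_step(model, table, reference, gamma=0.9):
    # Candidate Y^c: sampled one token at a time from the model's output
    # distribution, keeping per-token log-probabilities log f(y_t | y_<t, T).
    y_c, log_probs = sample_with_log_probs(model, table)
    with torch.no_grad():
        # Baseline Y^b: greedy decoding (a beam-size-1 inference proxy),
        # acting as a difficulty proxy for this training instance.
        y_b = greedy_decode(model, table)
        # Eq. (4): reward the candidate by its PARENT improvement.
        reward = (parent_f_score(y_c, reference, table)
                  - parent_f_score(y_b, reference, table))
    loss_rl = -reward * log_probs.sum()             # Eq. (5)
    loss_ml = mle_loss(model, table, reference)     # teacher-forced, Eq. (1)
    return gamma * loss_rl + (1 - gamma) * loss_ml  # mixed objective, Eq. (3)
```

A positive reward pushes the likelihood of the sampled candidate up, a negative one pushes it down, which is exactly the behavior described next.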
{ "text": "Minimizing Equation 5 increases the expected reward: indeed, we maximize the conditional likelihood of the candidate sequence Y^c when it obtains a higher reward than the baseline sequence Y^b, and, on the contrary, we decrease its likelihood in case of a lower reward.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PARENTing: Self-critical Gradient Policy Learning", "sec_num": "3.3" }, { "text": "WikiBIO (Lebret et al., 2016) This dataset contains 728,321 infoboxes, automatically paired with the first sentence of the corresponding article of the English Wikipedia. We follow the data partition introduced with the dataset, which yields 80% of all instances for the training set, 10% for the development set and 10% for the evaluation set. Reference texts have an average length of 26 words, while infoboxes have on average 12 non-empty fields. This dataset has been built automatically from sources that were not meant for a text-generation task and contains a significant amount of divergence between the source data and the target descriptions (62% of the references mention extra information not grounded in the infobox (Dhingra et al., 2019)).", "cite_spans": [ { "start": 8, "end": 29, "text": "(Lebret et al., 2016)", "ref_id": "BIBREF13" }, { "start": 726, "end": 748, "text": "(Dhingra et al., 2019)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental setup 4.1 Data-to-text benchmarks", "sec_num": "4" }, { "text": "WebNLG (Gardent and Perez-Beltrachini, 2017) This dataset contains 35,970 sets of RDF records mapped to natural language descriptions. Each set has up to 7 records, and one or more gold references of average length 22 words. We follow the partition introduced with the dataset, which yields 1612/1619 instances as a development/evaluation set. This dataset has been hand-crafted specifically for the task of surface realization, and systems are expected to summarize all records. Note that here we compare ourselves on the seen partition, where every attribute has been seen during training (however, entities and values can be new).", "cite_spans": [ { "start": 7, "end": 44, "text": "(Gardent and Perez-Beltrachini, 2017)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental setup 4.1 Data-to-text benchmarks", "sec_num": "4" }, { "text": "We evaluate our approach using both automated metrics and human judgment. We report BLEU scores (Papineni et al., 2002) as well as PARENT (precision, recall and F1) scores (Dhingra et al., 2019). For all scores, higher is better. While BLEU is the historical metric in all text generation tasks, PARENT scores have a significantly stronger correlation with human evaluators (Dhingra et al., 2019) (0.478 vs. 0.913 for BLEU and PARENT resp.). We perform qualitative evaluation following the best practices outlined by van der Lee et al. (2019). Our human annotators are males and females from several countries across Europe, between 20 and 55 years old and proficient in English. Annotators are shown a randomly selected table, together with the corresponding descriptions, both from the dataset and from the models that are being evaluated. Annotators are asked, for each sentence, to score its fluency (as Fluent, Mostly fluent, or Not fluent), factualness (likewise), and coverage (in terms of the number of realized rows). Sentences are shuffled to avoid any bias. Following Tian et al. (2019), we first tasked three expert annotators to annotate a pilot batch of 50 sentences. Once assured all Inter-Annotator Agreements were approx. 
78%, we asked several annotators to annotate an additional sample of sentences to reach 100 instances (where each instance consists of one table and three associated outputs).", "cite_spans": [ { "start": 96, "end": 119, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF24" }, { "start": 172, "end": 194, "text": "(Dhingra et al., 2019)", "ref_id": "BIBREF4" }, { "start": 374, "end": 396, "text": "(Dhingra et al., 2019)", "ref_id": "BIBREF4" }, { "start": 1074, "end": 1092, "text": "Tian et al. (2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation metrics", "sec_num": "4.2" }, { "text": "We measure the impact of our framework on two families of models:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scenarios and Baselines", "sec_num": "4.3" }, { "text": "\u2022 LSTMs. Our implementation of See et al. (2017). It is the backbone data-to-text model, based on a bi-LSTM with an attention mechanism and augmented with a conditional copy mechanism to deal with rare or unseen words.", "cite_spans": [ { "start": 31, "end": 49, "text": "(See et al., 2017)", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "Scenarios and Baselines", "sec_num": "4.3" }, { "text": "\u2022 Transformers. Our implementation of Vaswani et al. (2017), the Transformer encoder-decoder, augmented with a conditional copy mechanism.", "cite_spans": [ { "start": 38, "end": 60, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF41" } ], "ref_spans": [], "eq_spans": [], "section": "Scenarios and Baselines", "sec_num": "4.3" }, { "text": "These models are denoted LSTM or Transformer when trained via maximum likelihood, and LSTM+RL or Transformer+RL when further trained using the PARENTing framework.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scenarios and Baselines", "sec_num": "4.3" }, { "text": "We also report the SOTA models for each dataset (i.e., those achieving the strongest BLEU or PARENT score):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scenarios and Baselines", "sec_num": "4.3" }, { "text": "\u2022 For WikiBIO, we report the BLEU and PARENT scores of two baselines: 1) S2S+FA+RL (Liu et al., 2019b), which uses a standard encoder-decoder structure, with an attention mechanism constrained to cover all table attributes and an RL training procedure with a reward shaped by BLEU and TFIDF; 2) Confident PG (Tian et al., 2019), a neural module which assigns a confidence score to each output word and trims from the generated sequence any word below a specified threshold. They report higher precision but lower fluency.", "cite_spans": [ { "start": 84, "end": 103, "text": "(Liu et al., 2019b)", "ref_id": "BIBREF18" }, { "start": 306, "end": 325, "text": "(Tian et al., 2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Scenarios and Baselines", "sec_num": "4.3" }, { "text": "\u2022 For WebNLG, we report the BLEU score of the GCN model of Marcheggiani and Perez-Beltrachini (2018). They propose a graph convolutional network which explicitly models the structure of graph-like data. For additional context, we also report a baseline score introduced in the original paper (Gardent and Perez-Beltrachini, 2017). 
This model, denoted Gardent-LSTM, has the same architecture as our LSTM scenario.", "cite_spans": [ { "start": 42, "end": 87, "text": "GCN Marcheggiani and Perez-Beltrachini (2018)", "ref_id": null }, { "start": 280, "end": 316, "text": "Gardent and Perez-Beltrachini (2017)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Scenarios and Baselines", "sec_num": "4.3" }, { "text": "We describe here key implementation details (other details needed for reproducibility will be given alongside the code if accepted). We set the \u03bb of Equation 2 to 1 during training. This was done 1) because coverage conflicts with the content selection subtask in most data-to-text settings, and 2) to reduce the computing cost, as coverage is obtained by computing the Longest Common Subsequence for all n-grams contained in the table. We note, however, that we kept \u03bb = 0.5 for evaluation, following Dhingra et al. (2019) and Tian et al. (2019). Preliminary experiments on the \u03b3 of Equation 3 showed that the initial value of 0.9987 proposed by Paulus et al. (2018) was not satisfactory: fluency dropped drastically, and while models obtained significantly higher PARENT scores, the BLEU score fell to less than half of what previous models were able to achieve. We therefore used a more conservative value of \u03b3 = 0.9. Inputs were fed to the neural networks following Lebret et al. (2016): each word is represented as a 4-tuple (value, field, p+, p-), where p+ (resp. p-) is the position (resp. reverse position) of the value in its field. For example, the line (Name, Barack Obama) is represented as [(Barack, Name, 1, 2), (Obama, Name, 2, 1)]; a short illustrative sketch is given below. In WebNLG, where tables include several entities, a 5th element was introduced for the entity index, as well as tokens for the entities' names. Models are first trained via maximum likelihood. We select the best-performing checkpoint given a development set and start the mixed-objective training from there. We implemented our framework using OpenNMT (Klein et al., 2017). Data and code are available online: https://github.com/KaijuML/PARENTing-rl", "cite_spans": [ { "start": 512, "end": 518, "text": "(2019)", "ref_id": "BIBREF6" }, { "start": 631, "end": 637, "text": "(2018)", "ref_id": "BIBREF26" }, { "start": 929, "end": 949, "text": "Lebret et al. (2016)", "ref_id": "BIBREF13" }, { "start": 1554, "end": 1574, "text": "(Klein et al., 2017)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Implementation details", "sec_num": "4.4" }, 
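As announced above, here is a minimal sketch of the input linearization of Lebret et al. (2016) used in Section 4.4; the dict-based `infobox` format is an illustrative assumption, not the exact preprocessing code of the released implementation.

```python
def linearize(infobox):
    """Turn an infobox into the 4-tuples (value, field, p+, p-) of
    Lebret et al. (2016): p+ / p- are the position and reverse position
    of each value token within its field."""
    tuples = []
    for field, value in infobox.items():
        tokens = value.split()
        for i, tok in enumerate(tokens, start=1):
            tuples.append((tok, field, i, len(tokens) - i + 1))
    return tuples

# (Name, Barack Obama) -> [('Barack', 'Name', 1, 2), ('Obama', 'Name', 2, 1)]
print(linearize({"Name": "Barack Obama"}))
```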
{ "text": "Table 1 summarizes the BLEU and PARENT scores obtained by the baselines and our scenarios on the WikiBIO and WebNLG benchmarks. Please note that while no previous work reports PARENT scores on WebNLG, our scenario LSTM is a reimplementation of the baseline Gardent-LSTM. Since the obtained and reported BLEU scores are very close, we can consider that our PARENT scores also reflect those of Gardent-LSTM.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "From a general point of view, we can see that our PARENTed models LSTM+RL and Transformer+RL generally obtain higher BLEU and PARENT scores than all scenarios and baselines, except for the BLEU score on WikiBIO. More particularly, we can outline the following statements.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "\u2022 The comparison of our scenarios (without/with our PARENTing framework) shows score increases ranging from +1.6% to +3.4% on WikiBIO, and from +1.1% to +15% on WebNLG, with significant improvements in 12/16 comparison settings. This suggests that PARENTed models learn to describe source data with more precision (reduced hallucinations) and in greater detail (increased recall/coverage), as can be seen in Figure 1.", "cite_spans": [], "ref_spans": [ { "start": 417, "end": 425, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "\u2022 BLEU scores are on par with baselines on WikiBIO, and significantly better than the strongest model on WebNLG. Despite starting close to the baseline in terms of BLEU, PARENTing our models leads to a new state-of-the-art BLEU of 63.20, compared to a previous 55.9 for GCN, representing a 13% relative increase. This shows that models learn to lexicalize content more adequately than through maximum-likelihood training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "\u2022 More importantly, PARENTed models outperform all baselines on PARENT scores: on WikiBIO, our reinforced LSTM model achieves state-of-the-art performance on F-score, increasing the previous score by 7% (from 52.81 to 56.72). PARENTed models also achieve better precision than both S2S+FA+RL and Confident-PG: 80.01/80.37 (for LSTM+RL and Transformer+RL resp.) against respectively 76.1 and 79.52. In contrast to Confident-PG, which sacrifices fluency for faithfulness, our models refer more precisely to information from the table than S2S+FA+RL (40.6 against 44.02).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "\u2022 Altogether, the previous statements confirm the model-agnostic nature of our framework, as both model families (LSTM- and Transformer-based scenarios) showed improvements on both datasets when fine-tuned with our PARENTing framework.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "\u2022 We observe that our pre-trained scenarios (LSTM and Transformer) generally obtain higher results on WebNLG than on WikiBIO. This is due to the nature of the datasets: WebNLG is hand-crafted with the explicit goal of full transcription of tables, while WikiBIO is built automatically without rigorous alignment of data sources and reference texts. Despite some inevitable divergences, WebNLG is thus less noisy than WikiBIO.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "Qualitative Evaluations. We aim to provide insight into what our PARENTing framework brings to models. Specifically, one might assume that a model could trivially learn to shorten its output in order to increase precision or, on the contrary, to increase generation length to easily increase coverage by mechanically quoting tables more. We therefore 1) check the framework's impact on global generation length and 2) provide a more detailed analysis of length distribution vs. effectiveness. In this section, we focus exclusively on WikiBIO as it is the most challenging setting (larger vocabulary, noisier references, and content selection needed to generate biographies). 
To make the results more readable, we focus on the (LSTM, LSTM+RL) pair of models. We report in Table 2 comparative statistics of generation length and score variations. We first note that on WikiBIO, there are no significant changes in sentence lengths after RL training (19.17 vs. 19.78).", "cite_spans": [], "ref_spans": [ { "start": 746, "end": 753, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "We further find a correlation of 0.2 between the variation in length and that in PARENT F-score, which does not allow us to conclude that longer texts lead to better scores. This suggests that the RL-trained model seems to have better performance even when generating shorter texts. We investigate further the impact of PARENTing on the length distribution, and its influence on hallucinations/omissions (respectively measured by precision/recall). To do so, generated texts are split into two broad categories, short and long, using a KMeans algorithm calibrated on the length of human references. We exhibit two clusters, texts below/above 30 words, and compute PARENT scores conditioned on these clusters (see Table 3).", "cite_spans": [], "ref_spans": [ { "start": 708, "end": 715, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "Considering hallucinations, where being precise should naturally be increasingly harder with sentence length, we first observe that for the pretrained model, precision tends to decrease with longer generation (78.75 vs. 77.76), while the RL-trained model is more robust and has constant precision independently of generation length [1].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "Regarding omissions, Table 3 shows that while both scenarios have similar recall/coverage scores in long generations, the RL-trained model is more relevant, achieving a recall of 46.27 on short generations against 44.82 for the pretrained model. Finally, this leads to an overall higher PARENT score for the PARENTed model than for the pretrained model, independently of text length, showing that the RL-trained model chooses more accurately when to stop generating early and when to pursue longer generation, adding additional information from the table to improve coverage. Interestingly, we find that no matter the generation size, the PARENTed model shows more reliance on the source table, as it tends to directly copy words more often than its pretrained version.", "cite_spans": [], "ref_spans": [ { "start": 21, "end": 28, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "Human Evaluations. To better measure subtleties in model outputs which would not be captured by automatic metrics, we report human ratings in Table 4. Due to the cost of human evaluation, we focus on the WikiBIO dataset and three settings: the best model from the literature (S2S+FA+RL), our model (LSTM+RL), and gold sentences.", "cite_spans": [], "ref_spans": [ { "start": 141, "end": 148, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "Table 4: Results of the human evaluation on WikiBIO. The Fluency column reports the count of sentences labeled as \"fluent\" or \"mostly fluent\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "
It is worth noting that our results align with Dhingra et al. (2019), as we found that around two thirds of gold references contain divergences from their associated tables.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "\u2022 The fluency scores highlight the need for a mixed objective loss, which leverages the MLE objective's ability to produce fluent output, whereas RL alone (S2S+FA+RL) leads to less fluent output due to the discrete metric being used as a reward. Indeed, S2S+FA+RL obtains a score of only 84%, compared to the gold standard's nearly 92% and our model's score of 94% [2].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "\u2022 Factualness scores show that both approaches greatly improve factualness over the gold standard. However, S2S+FA+RL still lags behind our proposed approach, which is able to leverage the PARENT-based reward to constrain the system better than the FA module would.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "\u2022 In contrast to factualness, S2S+FA+RL obtains better coverage performance than our approach, with 4.45 vs. 4.25, showing that a component dedicated to coverage (either the FA module or the TFIDF part of the reward) leads to more comprehensive outputs. Despite this, our coverage is on par with the gold standard.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "In this work, we have proposed a model-agnostic reinforcement learning framework for data-to-text aimed at reducing hallucinations and improving recall/coverage of relevant information. We shaped the reward based on PARENT (Dhingra et al., 2019), a recently proposed metric with a high correlation with human judgment. This allows for more flexible training, where the model learns to depend less on the reference and more on the source data. The framework's effectiveness is assessed via thorough experiments on two model families (RNNs and Transformers) and two benchmarks (WikiBIO and WebNLG). Furthermore, quantitative and qualitative evaluations show that our PARENTing framework obtains better results than a dedicated attention module or a reward relying less on the source. However, this approach depends on the metric employed, and crafting an effective metric is still an open problem. In particular, PARENT is designed for single-entity datasets, like WikiBIO and WebNLG, and is not reliable for more complex datasets containing multiple entities (e.g., the RotoWire dataset (Wiseman et al., 2017)). In this setting, the sentence \"James Harden scored 20 points.\" could achieve a high PARENT score if any player had scored 20 points in the game. An interesting direction for future work would be the design of an evaluation metric more robust to dataset peculiarities.", "cite_spans": [ { "start": 223, "end": 245, "text": "(Dhingra et al., 2019)", "ref_id": "BIBREF4" }, { "start": 1085, "end": 1107, "text": "(Wiseman et al., 2017)", "ref_id": "BIBREF44" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion and Conclusion", "sec_num": "6" }, { "text": "[1] This behavior, stopping generation early to avoid hallucinated statements, is illustrated in Figure 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "[2] The gold standard not being at 100% is explained by preprocessing choices made at dataset creation (e.g., 
non-English languages are not always correctly transcribed).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank the H2020 project AI4EU (825619) and the ANR JCJC SESAMS project (ANR-18-CE23-0001) for supporting this work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The use of rating and Likert scales in natural language generation human evaluation tasks: A review and some recommendations", "authors": [ { "first": "Jacopo", "middle": [], "last": "Amidei", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Piwek", "suffix": "" }, { "first": "Alistair", "middle": [], "last": "Willis", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 12th International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacopo Amidei, Paul Piwek, and Alistair Willis. 2019. The use of rating and Likert scales in natural language generation human evaluation tasks: A review and some recommendations. In Proceedings of the 12th International Conference on Natural Language Generation.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. Accepted at ICLR 2015 as oral presentation.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "METEOR: An automatic metric for MT evaluation with improved correlation with human judgments", "authors": [ { "first": "Satanjeev", "middle": [], "last": "Banerjee", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Lavie", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Learning to sportscast: A test of grounded language acquisition", "authors": [ { "first": "David", "middle": [ "L" ], "last": "Chen", "suffix": "" }, { "first": "Raymond", "middle": [ "J" ], "last": "Mooney", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 25th International Conference on Machine Learning, ICML '08", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1145/1390156.1390173" ] }, "num": null, "urls": [], "raw_text": "David L. Chen and Raymond J. Mooney. 2008. Learning to sportscast: A test of grounded language acquisition. 
In Proceedings of the 25th International Conference on Machine Learning, ICML '08. ACM.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Handling divergent reference texts when evaluating table-to-text generation", "authors": [ { "first": "Bhuwan", "middle": [], "last": "Dhingra", "suffix": "" }, { "first": "Manaal", "middle": [], "last": "Faruqui", "suffix": "" }, { "first": "Ankur", "middle": [], "last": "Parikh", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "William", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/P19-1483" ] }, "num": null, "urls": [], "raw_text": "Bhuwan Dhingra, Manaal Faruqui, Ankur Parikh, Ming-Wei Chang, Dipanjan Das, and William Cohen. 2019. Handling divergent reference texts when evaluating table-to-text generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Semantic Noise Matters for Neural Natural Language Generation", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ond\u0159ej Du\u0161ek, David M. Howcroft, and Verena Rieser. 2019. Semantic Noise Matters for Neural Natural Language Generation. In Proceedings of the 12th International Conference on Natural Language Generation.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A statistical, grammar-based approach to microplanning", "authors": [ { "first": "Claire", "middle": [], "last": "Gardent", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Perez-Beltrachini", "suffix": "" } ], "year": 2017, "venue": "Computational Linguistics", "volume": "43", "issue": "1", "pages": "1--30", "other_ids": { "DOI": [ "10.1162/COLI_a_00273" ] }, "num": null, "urls": [], "raw_text": "Claire Gardent and Laura Perez-Beltrachini. 2017. A statistical, grammar-based approach to microplanning. Computational Linguistics, 43(1):1-30.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Survey of the state of the art in natural language generation: Core tasks, applications and evaluation", "authors": [ { "first": "Albert", "middle": [], "last": "Gatt", "suffix": "" }, { "first": "Emiel", "middle": [], "last": "Krahmer", "suffix": "" } ], "year": 2018, "venue": "J. Artif. Int. Res", "volume": "61", "issue": "1", "pages": "65--170", "other_ids": {}, "num": null, "urls": [], "raw_text": "Albert Gatt and Emiel Krahmer. 2018. Survey of the state of the art in natural language generation: Core tasks, applications and evaluation. J. Artif. Int. 
Res., 61(1):65-170.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Pointing the unknown words", "authors": [ { "first": "Caglar", "middle": [], "last": "Gulcehre", "suffix": "" }, { "first": "Sungjin", "middle": [], "last": "Ahn", "suffix": "" }, { "first": "Ramesh", "middle": [], "last": "Nallapati", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/P16-1014" ] }, "num": null, "urls": [], "raw_text": "Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguis- tics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Distilling the knowledge in a neural network", "authors": [ { "first": "Geoffrey", "middle": [ "E" ], "last": "Hinton", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. CoRR, abs/1503.02531.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "OpenNMT: Opensource toolkit for neural machine translation", "authors": [ { "first": "Guillaume", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Yuntian", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Senellart", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Rush", "suffix": "" } ], "year": 2017, "venue": "Proc. ACL", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/P17-4012" ] }, "num": null, "urls": [], "raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senel- lart, and Alexander M. Rush. 2017. OpenNMT: Open- source toolkit for neural machine translation. In Proc. ACL.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Evaluating the factual consistency of abstractive text summarization", "authors": [ { "first": "Wojciech", "middle": [], "last": "Kry\u015bci\u0144ski", "suffix": "" }, { "first": "Bryan", "middle": [], "last": "Mccann", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wojciech Kry\u015bci\u0144ski, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. 
Evaluating the factual consistency of abstractive text summarization.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Neural text generation from structured data with application to the biography domain", "authors": [ { "first": "R\u00e9mi", "middle": [], "last": "Lebret", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1203--1213", "other_ids": { "DOI": [ "10.18653/v1/D16-1128" ] }, "num": null, "urls": [], "raw_text": "R\u00e9mi Lebret, David Grangier, and Michael Auli. 2016. Neural text generation from structured data with application to the biography domain. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1203-1213, Austin, Texas. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Best practices for the human evaluation of automatically generated text", "authors": [ { "first": "Chris", "middle": [], "last": "Van Der Lee", "suffix": "" }, { "first": "Albert", "middle": [], "last": "Gatt", "suffix": "" }, { "first": "Emiel", "middle": [], "last": "Van Miltenburg", "suffix": "" }, { "first": "Sander", "middle": [], "last": "Wubben", "suffix": "" }, { "first": "Emiel", "middle": [], "last": "Krahmer", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 12th International Conference on Natural Language Generation, INLG 2019", "volume": "", "issue": "", "pages": "355--368", "other_ids": { "DOI": [ "10.18653/v1/W19-8643" ] }, "num": null, "urls": [], "raw_text": "Chris van der Lee, Albert Gatt, Emiel van Miltenburg, Sander Wubben, and Emiel Krahmer. 2019. Best practices for the human evaluation of automatically generated text. In Proceedings of the 12th International Conference on Natural Language Generation, INLG 2019, Tokyo, Japan, October 29 - November 1, 2019, pages 355-368.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "ROUGE: A package for automatic evaluation of summaries", "authors": [ { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2004, "venue": "Text Summarization Branches Out", "volume": "", "issue": "", "pages": "74--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "How NOT to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation", "authors": [ { "first": "Chia-Wei", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Lowe", "suffix": "" }, { "first": "Iulian", "middle": [], "last": "Serban", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Noseworthy", "suffix": "" }, { "first": "Laurent", "middle": [], "last": "Charlin", "suffix": "" }, { "first": "Joelle", "middle": [], "last": "Pineau", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2122--2132", "other_ids": { "DOI": [ "10.18653/v1/D16-1230" ] }, "num": null, "urls": [], "raw_text": "Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How NOT to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2122-2132, Austin, Texas. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Hierarchical encoder with auxiliary supervision for neural table-to-text generation: Learning better representation for tables", "authors": [ { "first": "Tianyu", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Fuli", "middle": [], "last": "Luo", "suffix": "" }, { "first": "Qiaolin", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Shuming", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Baobao", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Zhifang", "middle": [], "last": "Sui", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "33", "issue": "", "pages": "6786--6793", "other_ids": { "DOI": [ "10.1609/aaai.v33i01.33016786" ] }, "num": null, "urls": [], "raw_text": "Tianyu Liu, Fuli Luo, Qiaolin Xia, Shuming Ma, Baobao Chang, and Zhifang Sui. 2019a. Hierarchical encoder with auxiliary supervision for neural table-to-text generation: Learning better representation for tables. Proceedings of the AAAI Conference on Artificial Intelligence, 33:6786-6793.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Towards comprehensive description generation from factual attribute-value tables", "authors": [ { "first": "Tianyu", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Fuli", "middle": [], "last": "Luo", "suffix": "" }, { "first": "Pengcheng", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Baobao", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Zhifang", "middle": [], "last": "Sui", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019", "volume": "", "issue": "", "pages": "5985--5996", "other_ids": { "DOI": [ "10.18653/v1/p19-1600" ] }, "num": null, "urls": [], "raw_text": "Tianyu Liu, Fuli Luo, Pengcheng Yang, Wei Wu, Baobao Chang, and Zhifang Sui. 2019b. Towards comprehensive description generation from factual attribute-value tables.
In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, pages 5985-5996.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Table-to-text Generation by Structure-aware Seq2seq Learning", "authors": [ { "first": "Tianyu", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Kexiang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Sha", "suffix": "" }, { "first": "Baobao", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Zhifang", "middle": [], "last": "Sui", "suffix": "" } ], "year": 2018, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tianyu Liu, Kexiang Wang, Lei Sha, Baobao Chang, and Zhifang Sui. 2018. Table-to-text Generation by Structure-aware Seq2seq Learning. In AAAI.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Deep graph convolutional encoders for structured data to text generation", "authors": [ { "first": "Diego", "middle": [], "last": "Marcheggiani", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Perez-Beltrachini", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 11th International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "1--9", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diego Marcheggiani and Laura Perez-Beltrachini. 2018. Deep graph convolutional encoders for structured data to text generation. In Proceedings of the 11th International Conference on Natural Language Generation, pages 1-9. Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Operation-guided neural networks for high fidelity data-to-text generation", "authors": [ { "first": "Feng", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Jinpeng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jin-Ge", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Rong", "middle": [], "last": "Pan", "suffix": "" }, { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "3879--3889", "other_ids": {}, "num": null, "urls": [], "raw_text": "Feng Nie, Jinpeng Wang, Jin-Ge Yao, Rong Pan, and Chin-Yew Lin. 2018. Operation-guided neural networks for high fidelity data-to-text generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 3879-3889.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "A simple recipe towards reducing hallucination in neural surface realisation", "authors": [ { "first": "Feng", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Jin-Ge", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Jinpeng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Rong", "middle": [], "last": "Pan", "suffix": "" }, { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2673--2679", "other_ids": { "DOI": [ "10.18653/v1/P19-1256" ] }, "num": null, "urls": [], "raw_text": "Feng Nie, Jin-Ge Yao, Jinpeng Wang, Rong Pan, and Chin-Yew Lin. 2019. A simple recipe towards reducing hallucination in neural surface realisation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2673-2679. Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Why we need new evaluation metrics for NLG", "authors": [ { "first": "Jekaterina", "middle": [], "last": "Novikova", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Du\u0161ek", "suffix": "" }, { "first": "Amanda", "middle": [ "Cercas" ], "last": "Curry", "suffix": "" }, { "first": "Verena", "middle": [], "last": "Rieser", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2241--2252", "other_ids": { "DOI": [ "10.18653/v1/D17-1238" ] }, "num": null, "urls": [], "raw_text": "Jekaterina Novikova, Ond\u0159ej Du\u0161ek, Amanda Cercas Curry, and Verena Rieser. 2017. Why we need new evaluation metrics for NLG. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2241-2252. Association for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "BLEU: A method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02", "volume": "", "issue": "", "pages": "311--318", "other_ids": { "DOI": [ "10.3115/1073083.1073135" ] }, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02, pages 311-318, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "A deep reinforced model for abstractive summarization", "authors": [], "year": 2018, "venue": "6th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A deep reinforced model for abstractive summarization. In 6th International Conference on Learning Representations, ICLR 2018.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Bootstrapping generators from noisy data", "authors": [ { "first": "Laura", "middle": [], "last": "Perez-Beltrachini", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018", "volume": "", "issue": "", "pages": "1516--1527", "other_ids": { "DOI": [ "10.18653/v1/n18-1137" ] }, "num": null, "urls": [], "raw_text": "Laura Perez-Beltrachini and Mirella Lapata. 2018. Bootstrapping generators from noisy data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, pages 1516-1527.
Association for Computational Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Data-to-text generation with entity modeling", "authors": [ { "first": "Ratish", "middle": [], "last": "Puduppully", "suffix": "" }, { "first": "Li", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019", "volume": "", "issue": "", "pages": "2023--2035", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ratish Puduppully, Li Dong, and Mirella Lapata. 2019. Data-to-text generation with entity modeling. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, pages 2023-2035.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Sequence level training with recurrent neural networks", "authors": [ { "first": "Marc'Aurelio", "middle": [], "last": "Ranzato", "suffix": "" }, { "first": "Sumit", "middle": [], "last": "Chopra", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" }, { "first": "Wojciech", "middle": [], "last": "Zaremba", "suffix": "" } ], "year": 2016, "venue": "4th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "A hierarchical model for data-to-text generation", "authors": [ { "first": "Cl\u00e9ment", "middle": [], "last": "Rebuffel", "suffix": "" }, { "first": "Laure", "middle": [], "last": "Soulier", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Scoutheeten", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Gallinari", "suffix": "" } ], "year": 2020, "venue": "Advances in Information Retrieval -42nd European Conference on IR Research", "volume": "2020", "issue": "", "pages": "65--80", "other_ids": { "DOI": [ "10.1007/978-3-030-45439-5_5" ] }, "num": null, "urls": [], "raw_text": "Cl\u00e9ment Rebuffel, Laure Soulier, Geoffrey Scoutheeten, and Patrick Gallinari. 2020. A hierarchical model for data-to-text generation. In Advances in Information Retrieval - 42nd European Conference on IR Research, ECIR 2020, Lisbon, Portugal, April 14-17, 2020, Proceedings, Part I, pages 65-80.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "A structured review of the validity of BLEU", "authors": [ { "first": "Ehud", "middle": [], "last": "Reiter", "suffix": "" } ], "year": 2018, "venue": "Computational Linguistics", "volume": "44", "issue": "3", "pages": "393--401", "other_ids": { "DOI": [ "10.1162/coli_a_00322" ] }, "num": null, "urls": [], "raw_text": "Ehud Reiter. 2018. A structured review of the validity of BLEU. Computational Linguistics, 44(3):393-401.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "An investigation into the validity of some metrics for automatically evaluating natural language generation systems", "authors": [ { "first": "Ehud", "middle": [], "last": "Reiter", "suffix": "" }, { "first": "Anja", "middle": [], "last": "Belz", "suffix": "" } ], "year": 2009, "venue": "Computational Linguistics", "volume": "35", "issue": "4", "pages": "529--558", "other_ids": { "DOI": [ "10.1162/coli.2009.35.4.35405" ] }, "num": null, "urls": [], "raw_text": "Ehud Reiter and Anja Belz. 2009. An investigation into the validity of some metrics for automatically evaluating natural language generation systems. Computational Linguistics, 35(4):529-558.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Building Natural Language Generation Systems", "authors": [ { "first": "Ehud", "middle": [], "last": "Reiter", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Dale", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ehud Reiter and Robert Dale. 2000. Building Natural Language Generation Systems.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Choosing words in computergenerated weather forecasts", "authors": [ { "first": "Ehud", "middle": [], "last": "Reiter", "suffix": "" }, { "first": "Somayajulu", "middle": [], "last": "Sripada", "suffix": "" }, { "first": "Jim", "middle": [], "last": "Hunter", "suffix": "" }, { "first": "Jin", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Davy", "suffix": "" } ], "year": 2005, "venue": "Artif. Intell", "volume": "167", "issue": "1-2", "pages": "137--169", "other_ids": { "DOI": [ "10.1016/j.artint.2005.06.006" ] }, "num": null, "urls": [], "raw_text": "Ehud Reiter, Somayajulu Sripada, Jim Hunter, Jin Yu, and Ian Davy. 2005. Choosing words in computer-generated weather forecasts. Artif. Intell., 167(1-2):137-169.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Self-critical sequence training for image captioning", "authors": [ { "first": "Steven", "middle": [ "J" ], "last": "Rennie", "suffix": "" }, { "first": "Etienne", "middle": [], "last": "Marcheret", "suffix": "" }, { "first": "Youssef", "middle": [], "last": "Mroueh", "suffix": "" }, { "first": "Jerret", "middle": [], "last": "Ross", "suffix": "" }, { "first": "Vaibhava", "middle": [], "last": "Goel", "suffix": "" } ], "year": 2016, "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "volume": "", "issue": "", "pages": "1179--1195", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steven J. Rennie, Etienne Marcheret, Youssef Mroueh, Jerret Ross, and Vaibhava Goel. 2016. Self-critical sequence training for image captioning.
2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1179-1195.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Object hallucination in image captioning", "authors": [ { "first": "Anna", "middle": [], "last": "Rohrbach", "suffix": "" }, { "first": "Lisa", "middle": [ "Anne" ], "last": "Hendricks", "suffix": "" }, { "first": "Kaylee", "middle": [], "last": "Burns", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Darrell", "suffix": "" }, { "first": "Kate", "middle": [], "last": "Saenko", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "4035--4045", "other_ids": { "DOI": [ "10.18653/v1/D18-1437" ] }, "num": null, "urls": [], "raw_text": "Anna Rohrbach, Lisa Anne Hendricks, Kaylee Burns, Trevor Darrell, and Kate Saenko. 2018. Object hallucination in image captioning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4035-4045. Association for Computational Linguistics.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Answers unite! unsupervised metrics for reinforced summarization models", "authors": [ { "first": "Thomas", "middle": [], "last": "Scialom", "suffix": "" }, { "first": "Sylvain", "middle": [], "last": "Lamprier", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Piwowarski", "suffix": "" }, { "first": "Jacopo", "middle": [], "last": "Staiano", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "3244--3254", "other_ids": { "DOI": [ "10.18653/v1/D19-1320" ] }, "num": null, "urls": [], "raw_text": "Thomas Scialom, Sylvain Lamprier, Benjamin Piwowarski, and Jacopo Staiano. 2019. Answers unite! unsupervised metrics for reinforced summarization models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3244-3254. Association for Computational Linguistics.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Get to the point: Summarization with pointergenerator networks", "authors": [ { "first": "Abigail", "middle": [], "last": "See", "suffix": "" }, { "first": "Peter", "middle": [ "J" ], "last": "Liu", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1073--1083", "other_ids": { "DOI": [ "10.18653/v1/P17-1099" ] }, "num": null, "urls": [], "raw_text": "Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1073-1083. Association for Computational Linguistics.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Sticking to the facts: Confident decoding for faithful data-to-text generation", "authors": [ { "first": "Ankur", "middle": [ "P" ], "last": "Parikh", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ankur P. Parikh. 2019. Sticking to the facts: Confident decoding for faithful data-to-text generation. CoRR, abs/1910.08684.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Lukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "A learning algorithm for continually running fully recurrent neural networks", "authors": [ { "first": "R", "middle": [ "J" ], "last": "Williams", "suffix": "" }, { "first": "D", "middle": [], "last": "Zipser", "suffix": "" } ], "year": 1989, "venue": "Neural Computation", "volume": "1", "issue": "2", "pages": "270--280", "other_ids": { "DOI": [ "10.1162/neco.1989.1.2.270" ] }, "num": null, "urls": [], "raw_text": "R. J. Williams and D. Zipser. 1989. A learning algorithm for continually running fully recurrent neural networks. Neural Computation, 1(2):270-280.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Function optimization using connectionist reinforcement learning algorithms", "authors": [ { "first": "Ronald", "middle": [ "J" ], "last": "Williams", "suffix": "" }, { "first": "Jing", "middle": [], "last": "Peng", "suffix": "" } ], "year": 1991, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronald J. Williams and Jing Peng. 1991. Function optimization using connectionist reinforcement learning algorithms.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Challenges in data-to-document generation", "authors": [ { "first": "Sam", "middle": [], "last": "Wiseman", "suffix": "" }, { "first": "Stuart", "middle": [], "last": "Shieber", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Rush", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2253--2263", "other_ids": { "DOI": [ "10.18653/v1/D17-1239" ] }, "num": null, "urls": [], "raw_text": "Sam Wiseman, Stuart Shieber, and Alexander Rush. 2017.
Challenges in data-to-document generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2253-2263. Association for Computational Linguistics.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Google's neural machine translation system: Bridging the gap between human and machine translation", "authors": [ { "first": "Mohammad", "middle": [], "last": "Le", "suffix": "" }, { "first": "Wolfgang", "middle": [], "last": "Norouzi", "suffix": "" }, { "first": "Maxim", "middle": [], "last": "Macherey", "suffix": "" }, { "first": "Yuan", "middle": [], "last": "Krikun", "suffix": "" }, { "first": "Qin", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Klaus", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Macherey", "suffix": "" }, { "first": "Apurva", "middle": [], "last": "Klingner", "suffix": "" }, { "first": "Melvin", "middle": [], "last": "Shah", "suffix": "" }, { "first": "Xiaobing", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Lukasz", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Yoshikiyo", "middle": [], "last": "Gouws", "suffix": "" }, { "first": "Taku", "middle": [], "last": "Kato", "suffix": "" }, { "first": "Hideto", "middle": [], "last": "Kudo", "suffix": "" }, { "first": "Keith", "middle": [], "last": "Kazawa", "suffix": "" }, { "first": "George", "middle": [], "last": "Stevens", "suffix": "" }, { "first": "Nishant", "middle": [], "last": "Kurian", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Patil", "suffix": "" }, { "first": "", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2016, "venue": "Oriol Vinyals", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "type_str": "figure", "text": "Example from WikiBIO where PARENTing (LSTM+RL) reduced hallucinated content. In yellow: divergent phrases in the reference; in red: hallucinated phrases in the LSTM generation (without RL training)." }, "TABREF0": { "content": "
Model | WikiBIO BLEU | WikiBIO PARENT (Prec. / Rec. / F1) | WebNLG BLEU | WebNLG PARENT (Prec. / Rec. / F1)
Baselines:
S2S+FA+RL | 45.49 | 76.1 / 45.9 / 54.8 | -- | --
Confident PG | 38.10 | 79.52 / 40.60 / 51.38 | -- | --
GCN | -- | -- | 55.9 | --
Gardent-LSTM | -- | -- | 54.03 | --
Our models (LSTM, LSTM+RL, Transformer, Transformer+RL): 42.40 42.80 44.17* 80.01* 46.60* 78.70 45.16 41.09 80.02 44.31 80.37 45.83* 55.10 56.72* 63.20* 74.51 70.68* 54.9 73.15 69.91 54.74 53.45 75.17 64.15 56.15* 57.07* 75.06 66.70* 69.67 71.27* 67.38 69.01*
", "text": "Evaluation on WikiBIO and WebNLG. *: p-value< 0.001 of Student T-test comparing with/without RL.", "type_str": "table", "html": null, "num": null }, "TABREF2": { "content": "", "text": "Comparative statistics between LSTM and LSTM+RL on WikiBIO. \u2206 for variation scores.", "type_str": "table", "html": null, "num": null }, "TABREF4": { "content": "
", "text": "Effectiveness analysis depending on generation size. Nb-copy: average number of words copied from the source table. *: p-value < 0.001 for Student T-test comparing short/long generations.", "type_str": "table", "html": null, "num": null } } } }