{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:28:54.523408Z" }, "title": "Data-to-Text Generation with Iterative Text Editing", "authors": [ { "first": "Zden\u011bk", "middle": [], "last": "Kasner", "suffix": "", "affiliation": { "laboratory": "", "institution": "Charles University", "location": { "country": "Czech Republic" } }, "email": "kasner@ufal.mff.cuni.cz" }, { "first": "Ond\u0159ej", "middle": [], "last": "Du\u0161ek", "suffix": "", "affiliation": { "laboratory": "", "institution": "Charles University", "location": { "country": "Czech Republic" } }, "email": "odusek@ufal.mff.cuni.cz" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present a novel approach to data-to-text generation based on iterative text editing. Our approach maximizes the completeness and semantic accuracy of the output text while leveraging the abilities of recent pre-trained models for text editing (LASERTAGGER) and language modeling (GPT-2) to improve the text fluency. To this end, we first transform data items to text using trivial templates, and then we iteratively improve the resulting text by a neural model trained for the sentence fusion task. The output of the model is filtered by a simple heuristic and reranked with an offthe-shelf pre-trained language model. We evaluate our approach on two major data-to-text datasets (WebNLG, Cleaned E2E) and analyze its caveats and benefits. Furthermore, we show that our formulation of data-to-text generation opens up the possibility for zero-shot domain adaptation using a general-domain dataset for sentence fusion.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "We present a novel approach to data-to-text generation based on iterative text editing. Our approach maximizes the completeness and semantic accuracy of the output text while leveraging the abilities of recent pre-trained models for text editing (LASERTAGGER) and language modeling (GPT-2) to improve the text fluency. To this end, we first transform data items to text using trivial templates, and then we iteratively improve the resulting text by a neural model trained for the sentence fusion task. The output of the model is filtered by a simple heuristic and reranked with an offthe-shelf pre-trained language model. We evaluate our approach on two major data-to-text datasets (WebNLG, Cleaned E2E) and analyze its caveats and benefits. Furthermore, we show that our formulation of data-to-text generation opens up the possibility for zero-shot domain adaptation using a general-domain dataset for sentence fusion.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Data-to-text (D2T) generation is the task of transforming structured data into a natural language text which represents it (Reiter and Dale, 2000; Gatt and Krahmer, 2018) . The output text can be generated in several steps following a pipeline, or in an end-to-end (E2E) fashion. Neural-based E2E architectures recently gained attention due to their potential to reduce the human input needed for building D2T systems. 
A disadvantage of E2E architectures is the lack of intermediate steps, which makes it hard to control the semantic fidelity of the output (Moryossef et al., 2019b; Castro Ferreira et al., 2019).", "cite_spans": [ { "start": 123, "end": 146, "text": "(Reiter and Dale, 2000;", "ref_id": "BIBREF34" }, { "start": 147, "end": 170, "text": "Gatt and Krahmer, 2018)", "ref_id": "BIBREF14" }, { "start": 557, "end": 582, "text": "(Moryossef et al., 2019b;", "ref_id": "BIBREF28" }, { "start": 583, "end": 612, "text": "Castro Ferreira et al., 2019)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We focus on a D2T setup where the input data is a set of RDF triples in the form of (subject, predicate, object) and the output text represents all and only the facts in the data. This setup can be used by all D2T applications where the data describe relationships between entities (e.g. Gardent et al., 2017; Budzianowski et al., 2018). 1 In order to combine the benefits of pipeline and E2E architectures, we propose to use neural models with a limited scope. We take advantage of three facts: (1) each triple can be lexicalized using a trivial template, (2) stacking the lexicalizations one after another tends to produce an unnatural-sounding but semantically accurate output, and (3) a neural model can be used to combine the lexicalizations and improve the output fluency.", "cite_spans": [ { "start": 284, "end": 305, "text": "Gardent et al., 2017;", "ref_id": "BIBREF13" }, { "start": 306, "end": 332, "text": "Budzianowski et al., 2018)", "ref_id": "BIBREF2" }, { "start": 335, "end": 336, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In traditional pipeline-based NLG systems (Reiter and Dale, 2000), combining the lexicalizations is a non-trivial multi-stage process. Text structuring and sentence aggregation are first used to determine the order of facts and their assignment to sentences, followed by referring expression generation and linguistic realization. We argue that with a neural model, combining the lexicalizations can be simplified to several iterations of sentence fusion, a task of combining sentences into a coherent text (Barzilay and McKeown, 2005).", "cite_spans": [ { "start": 54, "end": 65, "text": "Dale, 2000)", "ref_id": "BIBREF34" }, { "start": 507, "end": 535, "text": "(Barzilay and McKeown, 2005)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our contributions are the following: 1) We show how to reframe D2T generation as iterative text editing, which makes it independent of the dataset-specific input data format and makes it possible to control the output over a series of intermediate steps.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2) We perform initial experiments using our approach on two major D2T datasets (WebNLG and Cleaned E2E) and include a quantitative and qualitative analysis of the results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "3) We perform zero-shot domain adaptation experiments and show that our approach exhibits domain-independent behavior.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Figure 1: An example of a single iteration of our algorithm for D2T generation, with X_{i-1} = \"Dublin is the capital of Ireland.\", t_i = (Ireland, language, English), and candidate lexicalizations of t_i such as \"English is spoken in Ireland.\", \"One of the languages of Ireland is English.\", and \"English is the official language of Ireland.\". In Step 1, the template for the triple is selected and filled. In Step 2, the sentence is fused with the template. In Step 3, the result for the next iteration is selected from the beam by filtering and language model scoring.", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 8, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }
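, { "text": "To make the iteration in Figure 1 concrete, the following minimal Python sketch shows one step of the loop. It is an illustration under our assumptions, not the released implementation: fuse_beam is a hypothetical stand-in for the trained sentence fusion model (Section 3.2), the entity check is simplified to the current triple, and lm_score implements the geometric-mean LM score defined in Section 3.3 with the HuggingFace Transformers GPT-2 model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "# Illustrative sketch of one iteration of the algorithm in Figure 1.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
model.eval()

def lm_score(text):
    # Geometric mean of the token conditional probabilities (Section 3.3);
    # the BOS token gives the first word a conditioning context.
    ids = tokenizer(tokenizer.bos_token + text, return_tensors='pt').input_ids
    with torch.no_grad():
        logits = model(ids).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    token_log_probs = log_probs.gather(1, ids[0, 1:].unsqueeze(1))
    return token_log_probs.mean().exp().item()

def step(x_prev, triple, templates, fuse_beam):
    subj, pred, obj = triple
    # Step 1: fill all templates for the predicate, keep the best-scoring one.
    candidates = [t.replace('<subject>', subj).replace('<object>', obj)
                  for t in templates[pred]]
    lex = max(candidates, key=lm_score)
    # Step 2: fuse the previous text with the new lexicalization.
    beam = fuse_beam(x_prev + ' ' + lex)
    # Step 3: drop hypotheses missing an entity (the full system checks all
    # entities from the input data), rerank the rest by LM score, and fall
    # back to plain concatenation if the whole beam was filtered out.
    kept = [h for h in beam if subj in h and obj in h]
    return max(kept, key=lm_score) if kept else x_prev + ' ' + lex", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }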
, { "text": "Improving the accuracy of neural D2T approaches has attracted a lot of research interest lately. Similarly to us, other systems use a generate-then-rerank approach (Du\u0161ek and Jur\u010d\u00ed\u010dek, 2016; Juraska et al., 2018) or a classifier to filter incorrect output (Harkous et al., 2020). Moryossef et al. (2019a,b) split the D2T process into a symbolic text-planning stage and a neural generation stage.", "cite_spans": [ { "start": 163, "end": 189, "text": "(Du\u0161ek and Jur\u010d\u00ed\u010dek, 2016;", "ref_id": "BIBREF11" }, { "start": 190, "end": 211, "text": "Juraska et al., 2018)", "ref_id": "BIBREF18" }, { "start": 255, "end": 277, "text": "(Harkous et al., 2020)", "ref_id": "BIBREF17" }, { "start": 280, "end": 306, "text": "Moryossef et al. (2019a,b)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "Other works improve the robustness of the neural model (Tian et al., 2019; Kedzie and McKeown, 2019) or employ a natural language understanding model (Nie et al., 2019) to improve the faithfulness of the output. Recently, Chen et al. (2020) finetuned GPT-2 (Radford et al., 2019) for few-shot domain adaptation. Several models were recently applied to generic text editing tasks. LASERTAGGER (Malmi et al., 2019), which we use in our approach, is a sequence tagging model based on the Transformer (Vaswani et al., 2017) architecture with the BERT (Devlin et al., 2019) pre-trained language model as the encoder. Other recent text-editing models without a pre-trained backbone include EditNTS (Dong et al., 2019) and the Levenshtein Transformer (Gu et al., 2019).", "cite_spans": [ { "start": 55, "end": 74, "text": "(Tian et al., 2019;", "ref_id": "BIBREF35" }, { "start": 75, "end": 100, "text": "Kedzie and McKeown, 2019)", "ref_id": "BIBREF21" }, { "start": 150, "end": 168, "text": "(Nie et al., 2019)", "ref_id": "BIBREF29" }, { "start": 257, "end": 279, "text": "(Radford et al., 2019)", "ref_id": "BIBREF32" }, { "start": 480, "end": 502, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF36" }, { "start": 530, "end": 551, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF6" }, { "start": 675, "end": 694, "text": "(Dong et al., 2019)", "ref_id": "BIBREF8" }, { "start": 723, "end": 740, "text": "(Gu et al., 2019)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "Concurrently with our work, Kale and Rastogi (2020) explored using templates for dialogue response generation.
They use the sequence-to-sequence T5 model (Raffel et al., 2019) to generate the output text from scratch instead of iteratively editing the intermediate outputs, which offers less control over the generation process.", "cite_spans": [ { "start": 153, "end": 174, "text": "(Raffel et al., 2019)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "We start from single-triple templates and iteratively fuse them into the resulting text while filtering and reranking the results. We first detail the main components of our system and then give an overall description of the decoding algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our Approach", "sec_num": "3" }, { "text": "We collect a set of templates for each predicate. The templates can be either handcrafted or automatically extracted from the lexicalizations of the single-triple examples in the training data. For unseen predicates, we add a single fallback template:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Template Extraction", "sec_num": "3.1" }, { "text": "The <predicate> of <subject> is <object>.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Template Extraction", "sec_num": "3.1" }, { "text": "We train an in-domain sentence fusion model. We select pairs (X, X') of examples from the training data consisting of (n, n+1) triples and having n triples in common. This leaves us with an extra triple t present only in X'. To construct the training data, we use the concatenation of X and lex(t) as a source and the sequence X' as a target, where lex(t) denotes lexicalizing the triple t using an appropriate template. As a result, the model learns to integrate X and t into a single coherent expression.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence Fusion", "sec_num": "3.2" }, { "text": "We base our sentence fusion model on LASERTAGGER (Malmi et al., 2019). LASERTAGGER is a sequence generation model which generates outputs by tagging inputs with edit operations: KEEP a token, DELETE a token, and ADD a phrase before the token. In tasks where the output highly overlaps with the input, such as sentence fusion, LASERTAGGER is able to achieve performance comparable to state-of-the-art models with faster inference times and less training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence Fusion", "sec_num": "3.2" }, { "text": "An important feature of LASERTAGGER is the limited size of its vocabulary, which consists of the l most frequent (possibly multi-token) phrases used to transform inputs into outputs in the training data. After the vocabulary is precomputed, all infeasible examples in the training data are filtered out. At the cost of limiting the number of training examples, this filtering makes the training data cleaner by removing outliers. The limited vocabulary also makes the model less prone to common neural model errors such as hallucination, which allows us to control the semantic accuracy to a great extent using only simple heuristics and language model rescoring.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence Fusion", "sec_num": "3.2" }, { "text": "Table 1: Examples of templates used in our experiments (slot placeholders reconstructed). WebNLG, foundedBy: <object> was the founder of <subject>. / <subject> was founded by <object>. E2E (extracted), area+food: <name> offers <food> cuisine in the <area>. / <name> in <area> serves <food> food. E2E (custom), near: <name> is located near <near>. / <name> is close to <near>.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence Fusion", "sec_num": null }
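, { "text": "As an illustration of the training data construction described at the beginning of this section, the following sketch assembles fusion pairs from (triples, text) examples. The examples structure, the lexicalize helper, and the quadratic scan are our own simplifications for clarity, not the actual preprocessing code.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence Fusion", "sec_num": null }, { "text": "# Sketch: build sentence fusion training pairs (source = X + lex(t),
# target = X') from a list of (triples, text) examples.
def build_fusion_pairs(examples, lexicalize):
    pairs = []
    for triples_a, x in examples:
        for triples_b, x_prime in examples:
            extra = set(triples_b) - set(triples_a)
            # X' must cover the same n triples as X plus exactly one extra t.
            if len(triples_b) == len(triples_a) + 1 and len(extra) == 1:
                t = extra.pop()
                pairs.append((x + ' ' + lexicalize(t), x_prime))
    return pairs", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence Fusion", "sec_num": null }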
, { "text": "We use an additional component to calculate an indirect measure of text fluency. We refer to this component as LMSCORER. In our case, LMSCORER is a pre-trained GPT-2 language model (Radford et al., 2019) from the Transformers repository 2 (Wolf et al., 2019) wrapped in the lm-scorer 3 package. We use LMSCORER to compute the score of the input text X composed of tokens x_1 ... x_n as the geometric mean of the token conditional probabilities:", "cite_spans": [ { "start": 192, "end": 214, "text": "(Radford et al., 2019)", "ref_id": "BIBREF32" }, { "start": 250, "end": 269, "text": "(Wolf et al., 2019)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "LM Scoring", "sec_num": "3.3" }, { "text": "score(X) = \\\\left( \\\\prod_{i=1}^{n} P(x_i \\\\mid x_1 \\\\ldots x_{i-1}) \\\\right)^{1/n}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LM Scoring", "sec_num": "3.3" }, { "text": "The input of the algorithm (Figure 1) is a set of n ordered triples. First, we lexicalize the triple t_0 to get the base text X_0. We choose the lexicalization for the triple as the filled template with the best score from LMSCORER. This promotes templates which sound more natural for the particular values. In the following steps i = 1 ... n, we lexicalize the triple t_i and append it after X_{i-1}. We feed the joined text into the sentence fusion model and produce a beam of fusion hypotheses. We use a simple heuristic (string matching) to filter out hypotheses in the beam missing any entity from the input data. Finally, we rescore the remaining hypotheses in the beam with LMSCORER and let the hypothesis with the best score be the base text X_i. In case there are no sentences left in the beam after the filtering step, we let X_i be the text in which the lexicalized t_i is appended after X_{i-1} without fusion (preferring accuracy to fluency). The output of the algorithm is the base text X_n from the final step.", "cite_spans": [], "ref_spans": [ { "start": 27, "end": 36, "text": "(Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Decoding Algorithm", "sec_num": "3.4" }, { "text": "The WebNLG dataset (Gardent et al., 2017) consists of sets of DBPedia RDF triples and their lexicalizations. Following previous work, we use version 1.4 from Castro Ferreira et al. (2018). The E2E dataset (Novikova et al., 2017) contains restaurant descriptions based on sets of attributes (slots). In this work, we use the cleaned version of the E2E dataset (Du\u0161ek et al., 2019). For the domain adaptation experiments, we use DISCOFUSE (Geva et al., 2019), which is a large-scale dataset for sentence fusion.", "cite_spans": [ { "start": 19, "end": 41, "text": "(Gardent et al., 2017)", "ref_id": "BIBREF13" }, { "start": 203, "end": 226, "text": "(Novikova et al., 2017)", "ref_id": "BIBREF30" }, { "start": 362, "end": 382, "text": "(Du\u0161ek et al., 2019)", "ref_id": "BIBREF10" }, { "start": 441, "end": 460, "text": "(Geva et al., 2019)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments 4.1 Datasets", "sec_num": "4" }, { "text": "For WebNLG, we extract the initial templates from single-triple examples in the training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Preprocessing", "sec_num": "4.2" }
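, { "text": "A minimal sketch of this extraction step is shown below; it assumes access to (triple, reference) pairs, and the placeholder convention and helper name are ours.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Preprocessing", "sec_num": null }, { "text": "# Sketch: turn a single-triple example into a template by replacing the
# entity surface forms with placeholders (delexicalization).
def extract_template(triple, reference):
    subj, pred, obj = triple
    # WebNLG entity names often use underscores instead of spaces.
    text = reference.replace(subj.replace('_', ' '), '<subject>')
    text = text.replace(obj.replace('_', ' '), '<object>')
    # Keep the template only if both entities were found verbatim.
    if '<subject>' in text and '<object>' in text:
        return pred, text
    return None

print(extract_template(('Ireland', 'language', 'English'),
                       'English is spoken in Ireland.'))
# -> ('language', '<object> is spoken in <subject>.')", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Preprocessing", "sec_num": null }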
, { "text": "In the E2E dataset, there are no such examples; therefore, our solution is twofold. First, we extract templates for pairs of predicates, manually filtering out the templates with semantic noise, and use them as a starting point for the algorithm in order to leverage the lexical variability in the data. Second, we manually create a small set of templates for each single predicate and use these in the subsequent steps of the algorithm; this is possible due to the low variability of the predicates in the dataset. 4 See Table 1 for examples of templates we used in our experiments.", "cite_spans": [], "ref_spans": [ { "start": 632, "end": 639, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Data Preprocessing", "sec_num": "4.2" }, { "text": "As a baseline, we generate the best templates according to LMSCORER without applying the sentence fusion (i.e. always using the fallback). For the sentence fusion experiments, we use LASERTAGGER with the autoregressive decoder and a beam of size 10. For comparison with the state of the art, we include the results of Harkous et al. (2020) and of the finetuned T5 model (T5; Kale, 2020).", "cite_spans": [ { "start": 227, "end": 248, "text": "Harkous et al., 2020)", "ref_id": "BIBREF17" }, { "start": 281, "end": 292, "text": "Kale, 2020)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Setup", "sec_num": "4.3" }, { "text": "We use all reference lexicalizations and the vocabulary size V = 100, following our preliminary experiments, which showed that filtering the references only by limiting the vocabulary size brings the best results (see Supplementary for details). We finetune the model for 10,000 updates with batch size 32 and learning rate 2e-5. For the beam filtering heuristic, we check for the presence of entities by simple string matching in WebNLG; for the E2E dataset, we use a set of regular expressions from TGen 5 (Du\u0161ek et al., 2019). We do not use any pre-ordering steps for the triples and process them in the default order. Additionally, we conduct a zero-shot domain adaptation experiment. We train the sentence fusion model with the same setup, but instead of the in-domain datasets, we use a subset of the balanced-Wikipedia portion of the DISCOFUSE dataset. In particular, we use the discourse types which frequently occur in our datasets, filtering out the discourse types which are not relevant for our use case. See Supplementary for the full listing of the selected types.", "cite_spans": [ { "start": 534, "end": 554, "text": "(Du\u0161ek et al., 2019)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Setup", "sec_num": "4.3" }, { "text": "We compute the metrics used in the evaluation of the E2E Challenge (Du\u0161ek et al., 2020): BLEU (Papineni et al., 2002), NIST (Doddington, 2002), METEOR (Banerjee and Lavie, 2005), ROUGE-L (Lin, 2004) and CIDEr (Vedantam et al., 2015). The results are shown in Table 2. The scores from the automatic metrics lag behind the state-of-the-art, although both the fusion and the zero-shot approaches show improvements over the baseline.
We examine the details in the following paragraphs, discussing the behavior of our approach, and we outline plans for improving the results in Section 6.", "cite_spans": [ { "start": 74, "end": 97, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF31" }, { "start": 105, "end": 123, "text": "(Doddington, 2002)", "ref_id": "BIBREF7" }, { "start": 133, "end": 159, "text": "(Banerjee and Lavie, 2005)", "ref_id": "BIBREF0" }, { "start": 170, "end": 181, "text": "(Lin, 2004)", "ref_id": "BIBREF24" }, { "start": 192, "end": 215, "text": "(Vedantam et al., 2015)", "ref_id": "BIBREF37" } ], "ref_spans": [ { "start": 243, "end": 250, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Analysis of Results", "sec_num": "5" }, { "text": "Accuracy vs. Variability Our approach ensures zero entity errors, since the entities are filled verbatim into the templates and, in case an entity is missing from the whole beam, a fallback is used instead. Semantic inconsistencies still occur, e.g. if a verb or function words are missing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis of Results", "sec_num": "5" }, { "text": "The fused sentences in the E2E dataset, where all the objects are related to a single subject, often lean towards compact forms, e.g.: Aromi is a family friendly chinese coffee shop with a low customer rating in riverside. In contrast, the sentence structure in WebNLG mostly follows the structure of the templates, and the model performs minimal changes to fuse the sentences together. See Table 3 and Supplementary for examples of the system outputs. Out of all steps, 28% are fallbacks (no fusion is performed) in WebNLG and 54% in the E2E dataset. The higher number of fallbacks in the E2E dataset can be explained by the higher lexical variability of the references, together with the higher number of data items per example in the E2E dataset, making it harder for the model to maintain text coherence over multiple steps.", "cite_spans": [], "ref_spans": [ { "start": 396, "end": 403, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Analysis of Results", "sec_num": "5" }, { "text": "Templates On average, there are 12.4 templates per predicate in WebNLG and 8.3 in the E2E dataset. In cases where the set of templates is more diverse, e.g. if the template for the predicate country has to be selected from {<subject> is situated within <object>, <subject> is a dish found in <object>}, LMSCORER helps to select the semantically accurate template for the specific entities. The literal copying of entities can be too rigid in some cases, e.g. Atat\u00fcrk Monument (\u0130zmir) is made of \"Bronze\", but these disfluencies can be improved in the fusion step.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis of Results", "sec_num": "5" }, { "text": "Reordering LASERTAGGER does not allow arbitrary reordering of words in the sentence, which can limit the expressiveness of the sentence fusion model. Consider the example in Figure 1: in order to create the sentence English is spoken in Dublin, the capital of Ireland, the model has to delete and re-insert at least one of the entities, e.g. English, which has to be present in the vocabulary.", "cite_spans": [], "ref_spans": [ { "start": 174, "end": 182, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Analysis of Results", "sec_num": "5" }
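, { "text": "For illustration, the edit sequence below realizes this reordering in the spirit of LASERTAGGER's KEEP/DELETE/ADD operations. The tag encoding is a simplification of our own: each source token receives a base tag plus an optional phrase to insert before it, and both inserted phrases, including the entity English, would have to appear in the precomputed phrase vocabulary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis of Results", "sec_num": null }, { "text": "# Hypothetical, simplified LASERTAGGER-style edit for the Figure 1 example.
source = 'Dublin is the capital of Ireland . English is spoken in Ireland .'.split()
tags = [('KEEP', 'English is spoken in'), ('DELETE', ','), ('KEEP', ''),
        ('KEEP', ''), ('KEEP', ''), ('KEEP', ''), ('KEEP', ''),
        ('DELETE', ''), ('DELETE', ''), ('DELETE', ''), ('DELETE', ''),
        ('DELETE', ''), ('DELETE', '')]

def realize(tokens, tags):
    out = []
    for token, (op, added) in zip(tokens, tags):
        if added:                 # phrase inserted before the current token
            out.append(added)
        if op == 'KEEP':
            out.append(token)
    return ' '.join(out)

print(realize(source, tags))
# -> 'English is spoken in Dublin , the capital of Ireland .'", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis of Results", "sec_num": null }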
, { "text": "Table 3: An example of the iterative generation process on WebNLG. Triples: (Albert Jennings Fountain, deathPlace, New Mexico Territory); (Albert Jennings Fountain, birthPlace, New York City); (Albert Jennings Fountain, birthPlace, Staten Island). Step #0: Albert Jennings Fountain died in New Mexico Territory. Step #1: Albert Jennings Fountain, who died in New Mexico Territory, was born in New York City. Step #2: Albert Jennings Fountain, who died in New Mexico Territory, was born in New York City, Staten Island. Reference: Albert Jennings Fountain was born in Staten Island, New York City and died in the New Mexico Territory.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis of Results", "sec_num": null }, { "text": "The zero-shot model trained on DISCOFUSE is able to correctly pronominalize or delete repeated entities and join the sentences with conjunctions, e.g. William Anders was born in British Hong Kong, and was a member of the crew of Apollo 8. While the model makes only limited use of sentence fusion, it produces more fluent output while keeping strong guarantees of the output accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Domain Independence", "sec_num": null }, { "text": "Although the current version of our approach is not yet able to consistently produce sentences with a high degree of fluency, we believe that the approach provides a valuable starting point for controllable and domain-independent D2T generation. In this section, we outline possible directions for tackling the main drawbacks and improving the results of the model with further research. Building a high-quality sentence fusion model, which lies at the core of our approach, remains a challenge (Lebanoff et al., 2020). Our simple extractive approach relying on existing D2T datasets may not produce a sufficient amount of clean data. On the other hand, the phenomena covered in the DISCOFUSE dataset are too narrow for fully general sentence fusion. We believe that training the sentence fusion model on a larger and more diverse sentence fusion dataset, built, e.g.,
in an unsupervised fashion (Lebanoff et al., 2019), is a way to improve the robustness of our approach.", "cite_spans": [ { "start": 495, "end": 518, "text": "(Lebanoff et al., 2020)", "ref_id": "BIBREF22" }, { "start": 897, "end": 920, "text": "(Lebanoff et al., 2019)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Future Work", "sec_num": "6" }, { "text": "Fluency of the output sentences may also be improved by allowing more flexibility in the order of entities, either by including an ordering step in the pipeline (Moryossef et al., 2019b), or by using a text-editing model that is capable of explicit reordering of words in the sentence (Mallinson et al., 2020). Splitting the data into smaller batches (i.e. setting an upper bound on the number of sentences fused together) could also help to improve the consistency of results with a higher number of data items.", "cite_spans": [ { "start": 162, "end": 187, "text": "(Moryossef et al., 2019b)", "ref_id": "BIBREF28" }, { "start": 287, "end": 311, "text": "(Mallinson et al., 2020)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Future Work", "sec_num": "6" }, { "text": "Our string matching heuristic is quite crude and may lead to a high number of fallbacks. Introducing a more precise heuristic, such as a semantic fidelity classifier (Harkous et al., 2020) or a model trained for natural language inference (Du\u0161ek and Kasner, 2020), could help to promote the lexical variability of the text.", "cite_spans": [ { "start": 166, "end": 188, "text": "(Harkous et al., 2020)", "ref_id": "BIBREF17" }, { "start": 241, "end": 265, "text": "(Du\u0161ek and Kasner, 2020)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Future Work", "sec_num": "6" }, { "text": "Finally, we note that the text-editing paradigm makes it possible to visualize the changes made by the model, to accept or reject the changes at each step, and even to build a set of custom rules on top of the individual edit operations based on the affected tokens. This flexibility could be useful for tweaking the model manually for a production system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Work", "sec_num": "6" }, { "text": "We proposed a simple and intuitive approach for D2T generation, splitting the process into two steps: lexicalizing the data and improving the text fluency. Trivial lexicalization helps to promote fidelity and domain independence, while delegating the subtler work with language to neural models lets us benefit from the power of general-domain pre-training. While a straightforward application of this approach to the WebNLG and E2E datasets does not produce state-of-the-art results in terms of automatic metrics, the results still show considerable improvements over the baseline.
We provided insights into the behavior of the model, highlighted its potential benefits, and proposed directions for further improvements.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "The setup can be preceded by a content selection step for selecting the relevant subset of the data (cf. Wiseman et al., 2017).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/huggingface/transformers 3 https://github.com/simonepri/lm-scorer", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In the E2E dataset, the data is in the form of key-value slots. We transform the data to RDF triples by using the name of the restaurant as the subject and the rest of the slots as predicates and objects. This creates n-1 triples for n slots.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/UFAL-DSG/tgen", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank the anonymous reviewers for their relevant comments. The work was supported by the Charles University grant No. 140320, the SVV project No. 260575, and the Charles University project PRIMUS/19/SCI/10.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "METEOR: An automatic metric for MT evaluation with improved correlation with human judgments", "authors": [ { "first": "Satanjeev", "middle": [], "last": "Banerjee", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Lavie", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization", "volume": "", "issue": "", "pages": "65--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65-72, Ann Arbor, Michigan. Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Sentence fusion for multidocument news summarization", "authors": [ { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Kathleen", "middle": [ "R" ], "last": "Mckeown", "suffix": "" } ], "year": 2005, "venue": "Computational Linguistics", "volume": "31", "issue": "3", "pages": "297--328", "other_ids": { "DOI": [ "10.1162/089120105774321091" ] }, "num": null, "urls": [], "raw_text": "Regina Barzilay and Kathleen R. McKeown. 2005. Sentence fusion for multidocument news summarization.
Computational Linguistics, 31(3):297-328.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "MultiWOZ - a large-scale multi-domain Wizard-of-Oz dataset for task-oriented dialogue modelling", "authors": [ { "first": "Pawe\u0142", "middle": [], "last": "Budzianowski", "suffix": "" }, { "first": "Tsung-Hsien", "middle": [], "last": "Wen", "suffix": "" }, { "first": "Bo-Hsiang", "middle": [], "last": "Tseng", "suffix": "" }, { "first": "I\u00f1igo", "middle": [], "last": "Casanueva", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Ultes", "suffix": "" }, { "first": "Osman", "middle": [], "last": "Ramadan", "suffix": "" }, { "first": "Milica", "middle": [], "last": "Ga\u0161i\u0107", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "5016--5026", "other_ids": { "DOI": [ "10.18653/v1/D18-1547" ] }, "num": null, "urls": [], "raw_text": "Pawe\u0142 Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, I\u00f1igo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Ga\u0161i\u0107. 2018. MultiWOZ - a large-scale multi-domain Wizard-of-Oz dataset for task-oriented dialogue modelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5016-5026, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Neural data-to-text generation: A comparison between pipeline and end-to-end architectures", "authors": [ { "first": "Thiago", "middle": [], "last": "Castro Ferreira", "suffix": "" }, { "first": "Chris", "middle": [], "last": "van der Lee", "suffix": "" }, { "first": "Emiel", "middle": [], "last": "van Miltenburg", "suffix": "" }, { "first": "Emiel", "middle": [], "last": "Krahmer", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "552--562", "other_ids": { "DOI": [ "10.18653/v1/D19-1052" ] }, "num": null, "urls": [], "raw_text": "Thiago Castro Ferreira, Chris van der Lee, Emiel van Miltenburg, and Emiel Krahmer. 2019. Neural data-to-text generation: A comparison between pipeline and end-to-end architectures. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 552-562. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Enriching the WebNLG corpus", "authors": [ { "first": "Thiago", "middle": [], "last": "Castro Ferreira", "suffix": "" }, { "first": "Diego", "middle": [], "last": "Moussallem", "suffix": "" }, { "first": "Emiel", "middle": [], "last": "Krahmer", "suffix": "" }, { "first": "Sander", "middle": [], "last": "Wubben", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 11th International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "171--176", "other_ids": { "DOI": [ "10.18653/v1/W18-6521" ] }, "num": null, "urls": [], "raw_text": "Thiago Castro Ferreira, Diego Moussallem, Emiel Krahmer, and Sander Wubben. 2018. Enriching the WebNLG corpus. In Proceedings of the 11th International Conference on Natural Language Generation, pages 171-176.
Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Few-shot NLG with pre-trained language model", "authors": [ { "first": "Zhiyu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Harini", "middle": [], "last": "Eavani", "suffix": "" }, { "first": "Wenhu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yinyin", "middle": [], "last": "Liu", "suffix": "" }, { "first": "William", "middle": [ "Yang" ], "last": "Wang", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "183--190", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.18" ] }, "num": null, "urls": [], "raw_text": "Zhiyu Chen, Harini Eavani, Wenhu Chen, Yinyin Liu, and William Yang Wang. 2020. Few-shot NLG with pre-trained language model. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 183-190, Online. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Automatic evaluation of machine translation quality using n-gram cooccurrence statistics", "authors": [ { "first": "George", "middle": [], "last": "Doddington", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the Second International Conference on Human Language Technology Research", "volume": "", "issue": "", "pages": "138--145", "other_ids": {}, "num": null, "urls": [], "raw_text": "George Doddington. 2002. Automatic evaluation of machine translation quality using n-gram co- occurrence statistics. 
In Proceedings of the Second International Conference on Human Language Technology Research, pages 138-145.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "EditNTS: An neural programmer-interpreter model for sentence simplification through explicit editing", "authors": [ { "first": "Yue", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Zichao", "middle": [], "last": "Li", "suffix": "" }, { "first": "Mehdi", "middle": [], "last": "Rezagholizadeh", "suffix": "" }, { "first": "Jackie Chi Kit", "middle": [], "last": "Cheung", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3393--3402", "other_ids": { "DOI": [ "10.18653/v1/P19-1331" ] }, "num": null, "urls": [], "raw_text": "Yue Dong, Zichao Li, Mehdi Rezagholizadeh, and Jackie Chi Kit Cheung. 2019. EditNTS: An neural programmer-interpreter model for sentence simplification through explicit editing. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3393-3402, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Evaluating the state-of-the-art of end-to-end natural language generation: The E2E NLG Challenge", "authors": [ { "first": "Ond\u0159ej", "middle": [], "last": "Du\u0161ek", "suffix": "" }, { "first": "Jekaterina", "middle": [], "last": "Novikova", "suffix": "" }, { "first": "Verena", "middle": [], "last": "Rieser", "suffix": "" } ], "year": 2020, "venue": "Computer Speech & Language", "volume": "59", "issue": "", "pages": "123--156", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ond\u0159ej Du\u0161ek, Jekaterina Novikova, and Verena Rieser. 2020. Evaluating the state-of-the-art of end-to-end natural language generation: The E2E NLG Challenge. Computer Speech & Language, 59:123-156.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Semantic noise matters for neural natural language generation", "authors": [ { "first": "Ond\u0159ej", "middle": [], "last": "Du\u0161ek", "suffix": "" }, { "first": "David", "middle": [ "M" ], "last": "Howcroft", "suffix": "" }, { "first": "Verena", "middle": [], "last": "Rieser", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 12th International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "421--426", "other_ids": { "DOI": [ "10.18653/v1/W19-8652" ] }, "num": null, "urls": [], "raw_text": "Ond\u0159ej Du\u0161ek, David M. Howcroft, and Verena Rieser. 2019. Semantic noise matters for neural natural language generation. In Proceedings of the 12th International Conference on Natural Language Generation, pages 421-426, Tokyo, Japan. Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Sequence-to-sequence generation for spoken dialogue via deep syntax trees and strings", "authors": [ { "first": "Ond\u0159ej", "middle": [], "last": "Du\u0161ek", "suffix": "" }, { "first": "Filip", "middle": [], "last": "Jur\u010d\u00ed\u010dek", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "45--51", "other_ids": { "DOI": [ "10.18653/v1/P16-2008" ] }, "num": null, "urls": [], "raw_text": "Ond\u0159ej Du\u0161ek and Filip Jur\u010d\u00ed\u010dek. 2016.
Sequence-to-sequence generation for spoken dialogue via deep syntax trees and strings. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 45-51, Berlin. Association for Computational Linguistics. ArXiv:1606.05491.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Evaluating semantic accuracy of data-to-text generation with natural language inference", "authors": [ { "first": "Ond\u0159ej", "middle": [], "last": "Du\u0161ek", "suffix": "" }, { "first": "Zden\u011bk", "middle": [], "last": "Kasner", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 13th International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ond\u0159ej Du\u0161ek and Zden\u011bk Kasner. 2020. Evaluating semantic accuracy of data-to-text generation with natural language inference. In Proceedings of the 13th International Conference on Natural Language Generation. Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "The WebNLG challenge: Generating text from RDF data", "authors": [ { "first": "Claire", "middle": [], "last": "Gardent", "suffix": "" }, { "first": "Anastasia", "middle": [], "last": "Shimorina", "suffix": "" }, { "first": "Shashi", "middle": [], "last": "Narayan", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Perez-Beltrachini", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 10th International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "124--133", "other_ids": { "DOI": [ "10.18653/v1/W17-3518" ] }, "num": null, "urls": [], "raw_text": "Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. The WebNLG challenge: Generating text from RDF data. In Proceedings of the 10th International Conference on Natural Language Generation, pages 124-133. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Survey of the state of the art in natural language generation: Core tasks, applications and evaluation", "authors": [ { "first": "Albert", "middle": [], "last": "Gatt", "suffix": "" }, { "first": "Emiel", "middle": [], "last": "Krahmer", "suffix": "" } ], "year": 2018, "venue": "Journal of Artificial Intelligence Research", "volume": "61", "issue": "", "pages": "65--170", "other_ids": {}, "num": null, "urls": [], "raw_text": "Albert Gatt and Emiel Krahmer. 2018. Survey of the state of the art in natural language generation: Core tasks, applications and evaluation. Journal of Artificial Intelligence Research, 61:65-170.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "DiscoFuse: A large-scale dataset for discourse-based sentence fusion", "authors": [ { "first": "Mor", "middle": [], "last": "Geva", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Malmi", "suffix": "" }, { "first": "Idan", "middle": [], "last": "Szpektor", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Berant", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "3443--3455", "other_ids": { "DOI": [ "10.18653/v1/N19-1348" ] }, "num": null, "urls": [], "raw_text": "Mor Geva, Eric Malmi, Idan Szpektor, and Jonathan Berant. 2019.
DiscoFuse: A large-scale dataset for discourse-based sentence fusion. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3443-3455, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Levenshtein transformer", "authors": [ { "first": "Jiatao", "middle": [], "last": "Gu", "suffix": "" }, { "first": "Changhan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Junbo", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "11181--11191", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiatao Gu, Changhan Wang, and Junbo Zhao. 2019. Levenshtein transformer. In Advances in Neural Information Processing Systems, pages 11181-11191.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Have your text and use it too! End-to-end neural data-to-text generation with semantic fidelity", "authors": [ { "first": "Hamza", "middle": [], "last": "Harkous", "suffix": "" }, { "first": "Isabel", "middle": [], "last": "Groves", "suffix": "" }, { "first": "Amir", "middle": [], "last": "Saffari", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.06577" ] }, "num": null, "urls": [], "raw_text": "Hamza Harkous, Isabel Groves, and Amir Saffari. 2020. Have your text and use it too! End-to-end neural data-to-text generation with semantic fidelity. arXiv preprint arXiv:2004.06577.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A deep ensemble model with slot alignment for sequence-to-sequence natural language generation", "authors": [ { "first": "Juraj", "middle": [], "last": "Juraska", "suffix": "" }, { "first": "Panagiotis", "middle": [], "last": "Karagiannis", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Bowden", "suffix": "" }, { "first": "Marilyn", "middle": [], "last": "Walker", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "152--162", "other_ids": { "DOI": [ "10.18653/v1/N18-1014" ] }, "num": null, "urls": [], "raw_text": "Juraj Juraska, Panagiotis Karagiannis, Kevin Bowden, and Marilyn Walker. 2018. A deep ensemble model with slot alignment for sequence-to-sequence natural language generation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 152-162, New Orleans, LA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Text-to-text pre-training for data-to-text tasks", "authors": [ { "first": "Mihir", "middle": [], "last": "Kale", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2005.10433" ] }, "num": null, "urls": [], "raw_text": "Mihir Kale. 2020. Text-to-text pre-training for data-to-text tasks.
arXiv preprint arXiv:2005.10433.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Few-shot natural language generation by rewriting templates", "authors": [ { "first": "Mihir", "middle": [], "last": "Kale", "suffix": "" }, { "first": "Abhinav", "middle": [], "last": "Rastogi", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.15006" ] }, "num": null, "urls": [], "raw_text": "Mihir Kale and Abhinav Rastogi. 2020. Few-shot natural language generation by rewriting templates. arXiv preprint arXiv:2004.15006.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "A good sample is hard to find: Noise injection sampling and self-training for neural language generation models", "authors": [ { "first": "Chris", "middle": [], "last": "Kedzie", "suffix": "" }, { "first": "Kathleen", "middle": [], "last": "Mckeown", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 12th International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "584--593", "other_ids": { "DOI": [ "10.18653/v1/W19-8672" ] }, "num": null, "urls": [], "raw_text": "Chris Kedzie and Kathleen McKeown. 2019. A good sample is hard to find: Noise injection sampling and self-training for neural language generation models. In Proceedings of the 12th International Conference on Natural Language Generation, pages 584-593, Tokyo, Japan. Association for Computational Lin- guistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Learning to fuse sentences with transformers for summarization", "authors": [ { "first": "Logan", "middle": [], "last": "Lebanoff", "suffix": "" }, { "first": "Franck", "middle": [], "last": "Dernoncourt", "suffix": "" }, { "first": "Soon", "middle": [], "last": "Doo", "suffix": "" }, { "first": "Lidan", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Walter", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2010.03726" ] }, "num": null, "urls": [], "raw_text": "Logan Lebanoff, Franck Dernoncourt, Doo Soon Kim, Lidan Wang, Walter Chang, and Fei Liu. 2020. Learning to fuse sentences with transformers for summarization. arXiv preprint arXiv:2010.03726.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Scoring sentence singletons and pairs for abstractive summarization", "authors": [ { "first": "Logan", "middle": [], "last": "Lebanoff", "suffix": "" }, { "first": "Kaiqiang", "middle": [], "last": "Song", "suffix": "" }, { "first": "Franck", "middle": [], "last": "Dernoncourt", "suffix": "" }, { "first": "Soon", "middle": [], "last": "Doo", "suffix": "" }, { "first": "Seokhwan", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Walter", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2175--2189", "other_ids": { "DOI": [ "10.18653/v1/P19-1209" ] }, "num": null, "urls": [], "raw_text": "Logan Lebanoff, Kaiqiang Song, Franck Dernoncourt, Doo Soon Kim, Seokhwan Kim, Walter Chang, and Fei Liu. 2019. Scoring sentence singletons and pairs for abstractive summarization. 
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2175-2189, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "ROUGE: A package for automatic evaluation of summaries", "authors": [ { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2004, "venue": "Text Summarization Branches Out", "volume": "", "issue": "", "pages": "74--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Felix: Flexible text editing through tagging and insertion", "authors": [ { "first": "Jonathan", "middle": [], "last": "Mallinson", "suffix": "" }, { "first": "Aliaksei", "middle": [], "last": "Severyn", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Malmi", "suffix": "" }, { "first": "Guillermo", "middle": [], "last": "Garrido", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2003.10687" ] }, "num": null, "urls": [], "raw_text": "Jonathan Mallinson, Aliaksei Severyn, Eric Malmi, and Guillermo Garrido. 2020. Felix: Flexible text editing through tagging and insertion. arXiv preprint arXiv:2003.10687.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Encode, tag, realize: High-precision text editing", "authors": [ { "first": "Eric", "middle": [], "last": "Malmi", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Krause", "suffix": "" }, { "first": "Sascha", "middle": [], "last": "Rothe", "suffix": "" }, { "first": "Daniil", "middle": [], "last": "Mirylenka", "suffix": "" }, { "first": "Aliaksei", "middle": [], "last": "Severyn", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "5054--5065", "other_ids": { "DOI": [ "10.18653/v1/D19-1510" ] }, "num": null, "urls": [], "raw_text": "Eric Malmi, Sebastian Krause, Sascha Rothe, Daniil Mirylenka, and Aliaksei Severyn. 2019. Encode, tag, realize: High-precision text editing. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5054-5065. Association for Computational Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Improving quality and efficiency in plan-based neural data-to-text generation", "authors": [ { "first": "Amit", "middle": [], "last": "Moryossef", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 12th International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "377--382", "other_ids": { "DOI": [ "10.18653/v1/W19-8645" ] }, "num": null, "urls": [], "raw_text": "Amit Moryossef, Yoav Goldberg, and Ido Dagan. 2019a. Improving quality and efficiency in plan-based neural data-to-text generation. In Proceedings of the 12th International Conference on Natural Language Generation, pages 377-382.
Association for Computational Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Step-by-step: Separating planning from realization in neural data-to-text generation", "authors": [ { "first": "Amit", "middle": [], "last": "Moryossef", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "2267--2277", "other_ids": { "DOI": [ "10.18653/v1/N19-1236" ] }, "num": null, "urls": [], "raw_text": "Amit Moryossef, Yoav Goldberg, and Ido Dagan. 2019b. Step-by-step: Separating planning from real- ization in neural data-to-text generation. In Proceed- ings of the 2019 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2267-2277. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "A simple recipe towards reducing hallucination in neural surface realisation", "authors": [ { "first": "Feng", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Jin-Ge", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Jinpeng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Rong", "middle": [], "last": "Pan", "suffix": "" }, { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2673--2679", "other_ids": { "DOI": [ "10.18653/v1/P19-1256" ] }, "num": null, "urls": [], "raw_text": "Feng Nie, Jin-Ge Yao, Jinpeng Wang, Rong Pan, and Chin-Yew Lin. 2019. A simple recipe towards re- ducing hallucination in neural surface realisation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2673-2679, Florence, Italy. Association for Compu- tational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "The E2E dataset: New challenges for endto-end generation", "authors": [ { "first": "Jekaterina", "middle": [], "last": "Novikova", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Du\u0161ek", "suffix": "" }, { "first": "Verena", "middle": [], "last": "Rieser", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue", "volume": "", "issue": "", "pages": "201--206", "other_ids": { "DOI": [ "10.18653/v1/W17-5525" ] }, "num": null, "urls": [], "raw_text": "Jekaterina Novikova, Ond\u0159ej Du\u0161ek, and Verena Rieser. 2017. The E2E dataset: New challenges for end- to-end generation. In Proceedings of the 18th An- nual SIGdial Meeting on Discourse and Dialogue, pages 201-206, Saarbr\u00fccken, Germany. 
Association for Computational Linguistics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": { "DOI": [ "10.3115/1073083.1073135" ] }, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Language Models are Unsupervised Multitask Learners", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "David", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language Models are Unsupervised Multitask Learners. Technical report, OpenAI.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "authors": [ { "first": "Colin", "middle": [], "last": "Raffel", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Roberts", "suffix": "" }, { "first": "Katherine", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Sharan", "middle": [], "last": "Narang", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Matena", "suffix": "" }, { "first": "Yanqi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Peter J", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.10683" ] }, "num": null, "urls": [], "raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Building natural language generation systems", "authors": [ { "first": "Ehud", "middle": [], "last": "Reiter", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Dale", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ehud Reiter and Robert Dale. 2000. Building natural language generation systems.
Cambridge University Press.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Sticking to the facts: Confident decoding for faithful data-to-text generation", "authors": [ { "first": "Ran", "middle": [], "last": "Tian", "suffix": "" }, { "first": "Shashi", "middle": [], "last": "Narayan", "suffix": "" }, { "first": "Thibault", "middle": [], "last": "Sellam", "suffix": "" }, { "first": "Ankur P", "middle": [], "last": "Parikh", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.08684" ] }, "num": null, "urls": [], "raw_text": "Ran Tian, Shashi Narayan, Thibault Sellam, and Ankur P Parikh. 2019. Sticking to the facts: Confident decoding for faithful data-to-text generation. arXiv preprint arXiv:1910.08684.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "CIDEr: Consensus-based image description evaluation", "authors": [ { "first": "Ramakrishna", "middle": [], "last": "Vedantam", "suffix": "" }, { "first": "Lawrence", "middle": [], "last": "Zitnick", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Parikh", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "4566--4575", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. 2015. CIDEr: Consensus-based image description evaluation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4566-4575.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Challenges in data-to-document generation", "authors": [ { "first": "Sam", "middle": [], "last": "Wiseman", "suffix": "" }, { "first": "Stuart", "middle": [], "last": "Shieber", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Rush", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2253--2263", "other_ids": { "DOI": [ "10.18653/v1/D17-1239" ] }, "num": null, "urls": [], "raw_text": "Sam Wiseman, Stuart Shieber, and Alexander Rush. 2017. Challenges in data-to-document generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2253-2263, Copenhagen, Denmark.
Association for Computational Linguistics.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Huggingface's transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "R\u00e9mi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Jamie", "middle": [], "last": "Brew", "suffix": "" } ], "year": 2019, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface's transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771.", "links": null } }, "ref_entries": { "TABREF0": { "content": "
[Pipeline diagram residue; recoverable information: steps (1) Template Selection, (2) Sentence Fusion, (3) Beam Filtering + LMScorer. Candidate fusions are assigned language-model scores, and the best-scoring candidate becomes the next text X_i = Dublin is the capital of Ireland, where English is spoken.]
", "type_str": "table", "html": null, "text": "Dublin is the capital of Ireland. English is spoken in Ireland.Dublin is t he capital of Ireland., where English is spoken in Ireland.Dublin is the capital of Ireland., where English is spoken in Ireland. Dublin is the capital of Ireland. English is the language spoken in Ireland. ...", "num": null }, "TABREF1": { "content": "", "type_str": "table", "html": null, "text": "Examples of templates we used in our experiments. The templates for the single predicates in the WebNLG dataset and the pairs of predicates in the E2E dataset are extracted automatically from the training data; the templates for the single predicates in E2E are created manually.", "num": null }, "TABREF3": { "content": "
", "type_str": "table", "html": null, "text": "Results of automatic metrics on the WebNLG and Cleaned E2E test sets. The comparison is made with the results from the papers on the Semantic Fidelity Classifier (SFC;", "num": null }, "TABREF4": { "content": "
", "type_str": "table", "html": null, "text": "An example of correct behavior of the algorithm on the WebNLG dataset. Newly added entities are underlined, the output from Step #2 is the output text.", "num": null } } } }