{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:28:24.675893Z" }, "title": "Neural NLG for Methodius: From RST Meaning Representations to Texts *", "authors": [ { "first": "Jory", "middle": [], "last": "Symon", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Hamburg", "location": {} }, "email": "" }, { "first": "", "middle": [], "last": "Stevens-Guille", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Hamburg", "location": {} }, "email": "stevensguille.1@buckeyemail.osu.edu" }, { "first": "Aleksandre", "middle": [], "last": "Maskharashvili", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Hamburg", "location": {} }, "email": "" }, { "first": "Amy", "middle": [], "last": "Isard", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Hamburg", "location": {} }, "email": "" }, { "first": "Xintong", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Hamburg", "location": {} }, "email": "" }, { "first": "Michael", "middle": [], "last": "White", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Hamburg", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "While classic NLG systems typically made use of hierarchically structured content plans that included discourse relations as central components, more recent neural approaches have mostly mapped simple, flat inputs to texts without representing discourse relations explicitly. In this paper, we investigate whether it is beneficial to include discourse relations in the input to neural data-to-text generators for texts where discourse relations play an important role. To do so, we reimplement the sentence planning and realization components of a classic NLG system, Methodius, using LSTM sequence-to-sequence (seq2seq) models. We find that although seq2seq models can learn to generate fluent and grammatical texts remarkably well with sufficiently representative Methodius training data, they cannot learn to correctly express Methodius's SIMILARITY and CONTRAST comparisons unless the corresponding RST relations are included in the inputs. Additionally, we experiment with using self-training and reverse model reranking to better handle train/test data mismatches, and find that while these methods help reduce content errors, it remains essential to include discourse relations in the input to obtain optimal performance. * The first two authors are listed in random order (equal contribution), then the other authors are listed in alphabetical order by last name.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "While classic NLG systems typically made use of hierarchically structured content plans that included discourse relations as central components, more recent neural approaches have mostly mapped simple, flat inputs to texts without representing discourse relations explicitly. In this paper, we investigate whether it is beneficial to include discourse relations in the input to neural data-to-text generators for texts where discourse relations play an important role. To do so, we reimplement the sentence planning and realization components of a classic NLG system, Methodius, using LSTM sequence-to-sequence (seq2seq) models. 
We find that although seq2seq models can learn to generate fluent and grammatical texts remarkably well with sufficiently representative Methodius training data, they cannot learn to correctly express Methodius's SIMILARITY and CONTRAST comparisons unless the corresponding RST relations are included in the inputs. Additionally, we experiment with using self-training and reverse model reranking to better handle train/test data mismatches, and find that while these methods help reduce content errors, it remains essential to include discourse relations in the input to obtain optimal performance. * The first two authors are listed in random order (equal contribution), then the other authors are listed in alphabetical order by last name.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Traditional approaches to the task of natural language generation (NLG) have employed a pipeline of modules, moving from an initial abstract meaning representation (MR) to human-readable natural language (Reiter and Dale, 2000) . In the last decade, the success of neural methods in other domains of natural language processing (NLP) has led to the development of neural 'end-to-end ' (e2e) architectures in NLG (Du\u0161ek et al., 2020) , where a direct mapping from MRs to text is learned. Since target texts for training neural models are typically crowd-sourced, the neural approach promises to make it easier to scale up the development of NLG systems in comparison to classic approaches, which generally require domain-or applicationspecific rules to be developed, even if the modules themselves are reusable.", "cite_spans": [ { "start": 204, "end": 227, "text": "(Reiter and Dale, 2000)", "ref_id": "BIBREF19" }, { "start": 383, "end": 390, "text": "' (e2e)", "ref_id": null }, { "start": 412, "end": 432, "text": "(Du\u0161ek et al., 2020)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Accompanying the increase in crowd-sourced corpora has been a comparative simplification of both MRs and tasks. In particular, classic NLG systems typically made use of hierarchically structured content plans that included discourse relations as central components, where the discourse relations -often based on Rhetorical Structure Theory (RST) (Mann and Thompson, 1988; Taboada and Mann, 2006) -group together and connect elementary propositions or messages (Hovy, 1993; Stede and Umbach, 1998; Isard, 2016) . 
By contrast, more recent neural approaches -in particular, those developed for the E2E and WebNLG shared task challenges -have mostly mapped simple, flat inputs to texts without representing discourse relations explicitly.", "cite_spans": [ { "start": 346, "end": 371, "text": "(Mann and Thompson, 1988;", "ref_id": "BIBREF15" }, { "start": 372, "end": 395, "text": "Taboada and Mann, 2006)", "ref_id": "BIBREF21" }, { "start": 460, "end": 472, "text": "(Hovy, 1993;", "ref_id": "BIBREF5" }, { "start": 473, "end": 496, "text": "Stede and Umbach, 1998;", "ref_id": "BIBREF20" }, { "start": 497, "end": 509, "text": "Isard, 2016)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The absence of discourse relations in work on neural NLG to date is somewhat understandable given that neural systems have primarily tackled texts that merely describe entities, rather than comparing them, situating them in time, discussing causal or other contingency relations among them, or constructing persuasive arguments about them, where discourse relations are crucial for coherence (Prasad et al., 2008) . Recently, Balakrishnan et al. (2019a) have argued that discourse relations should be reintroduced into neural generation in order to enable the correct expression of these relations to be more reliably controlled. However, they do note that only 6% of the crowd-sourced E2E Challenge texts contain discourse connectives ex-pressing CONTRAST, and though they introduce a conversational weather dataset that uses both CONTRAST and JUSTIFY relations with greater frequency, it is fair to say that the use of hierarchical MRs that incorporate discourse relations remains far from common practice.", "cite_spans": [ { "start": 392, "end": 413, "text": "(Prasad et al., 2008)", "ref_id": "BIBREF17" }, { "start": 426, "end": 453, "text": "Balakrishnan et al. (2019a)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we investigate whether it is beneficial to include discourse relations in the input to neural data-to-text generators for texts where discourse relations play an important role. To do so, we reimplement the sentence planning and realization components of a classic NLG system, Methodius (Isard, 2016) , using LSTM sequenceto-sequence (seq2seq) models, since Methodius makes similarity or contrast comparisons in most of its outputs. Specifically, rather than crowd-source output texts for Methodius's content plans, we run the existing system to obtain target texts for training seq2seq models, and experiment with input MRs (derived from the content plans) that contain discourse relations as well as ones that leave them out. 1 In our experiments, we observe that the seq2seq models learn to generate fluent and grammatical texts remarkably well. As such, we focus our evaluation on the correct and coherent expression of discourse relations. Since the Methodius texts are somewhat formulaic following delexicalization and entity anonymization, it is possible to write accurate automatic correctness checks for these relations. Using these automatic checks, we find that even with sufficiently representative Methodius training data, LSTM seq2seq models cannot learn to correctly express Methodius's similarity and contrast comparisons unless the corresponding RST relations are included in the inputs. 
This is an at least somewhat surprising result, since these relations are easily inferred from the input facts being compared.", "cite_spans": [ { "start": 302, "end": 315, "text": "(Isard, 2016)", "ref_id": "BIBREF7" }, { "start": 743, "end": 744, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The major conclusion of our experiments is that explicitly encoding discourse information using RST relations boosts coherence by enabling rhetorical structure to be reliably lexicalized. Several techniques for improving the models are also considered, especially for situations where the training data exhibits mismatches with the test data (as can happen in practice). One technique involves outputting a beam of possible text outputs and reranking them by checking the correspondence between the input meaning representation and the meaning representation produced by using a reversed model to map texts to meaning representations. The other technique is self-training (Li and White, 2020) , i.e., using an initial model to generate additional training data. This method drastically increases the amount of training data available for what is otherwise quite a small corpus. The upshot of these techniques is moderate improvement in the performance of both models with respect to the evaluation metrics just mentioned. But the conclusion remains that the model trained on explicit RST information continues to outperform the model without explicit RST structure in the input.", "cite_spans": [ { "start": 672, "end": 692, "text": "(Li and White, 2020)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The Methodius system (Isard, 2016) was developed for multilingual text generation, based on the M-PIRO project (Isard et al., 2003; Isard, 2007) which focused on museum exhibit descriptions. Methodius consists of several components. The content module selects content from a database and creates a content plan, which is a tree where the nodes are labeled with rhetorical relations or facts, following the structures proposed in RST. Fig. 1 shows a content plan. The content plan is rewritten into a sequence of logical forms, one per sentence, by the sentence planner. The logical forms are then realized as a text by means of a Combinatory Categorial Grammar (CCG) using OpenCCG (White, 2006 ).", "cite_spans": [ { "start": 21, "end": 34, "text": "(Isard, 2016)", "ref_id": "BIBREF7" }, { "start": 111, "end": 131, "text": "(Isard et al., 2003;", "ref_id": "BIBREF8" }, { "start": 132, "end": 144, "text": "Isard, 2007)", "ref_id": "BIBREF6" }, { "start": 681, "end": 693, "text": "(White, 2006", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 434, "end": 440, "text": "Fig. 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Methodius", "sec_num": "2" }, { "text": "The Methodius system is designed to respond to the behaviour of the its intended users. Sequences of exhibits, dubbed 'chains', are constructed while the user moves through the museum. The chains control dependencies between exhibit descriptions, limit redundancy, and provide discourse continuity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodius", "sec_num": "2" }, { "text": "While RST defines a number of rhetorical relations, Methodius incorporates only four of them: ELABORATION, JOINT, SIMILARITY and CON-TRAST. 
ELABORATION connects the main fact about a focal entity with other, peripheral facts about that entity. JOINT connects two facts of equal status. SIMILARITY and CONTRAST each connect two facts of equal status, but they do opposite jobs: SIMILARITY is used to express the similarity of two entities in terms of a commonly shared feature, while CONTRAST is used to show that the values of a shared feature of the given entities differ. For instance, unlike the previous coins you saw, which are located in the Athens Numismatic Museum, this tetradrachm is located in the National Museum of Athens -here unlike signals CONTRAST. In the following example, like signals SIMILARITY: like the previous coins you saw, this tetradrachm is located in the National Museum of Athens.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodius", "sec_num": "2" }, { "text": "In the experiments discussed below we focus on SIMILARITY and CONTRAST because the Methodius corpus lexicalizes them. Due to the dynamic generation of the exhibit descriptions, SIMILAR-ITY and CONTRAST link information in the current exhibit to previously mentioned exhibits and their properties-as such, correctly generating such expressions is vital to maintaining the coherence of the exhibit chain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodius", "sec_num": "2" }, { "text": "3 Data Preprocessing", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodius", "sec_num": "2" }, { "text": "The textual output of Methodius is pseudo-English with some expressions replaced by canned text, the morpho-syntactic descriptions of which are not present in either the content plan or in the logical form. Instead the canned text is retrieved from the Methodius system's database by looking up the reference given in the content plan. Such canned texts might occur infrequently in a relatively small corpus. To avoid data sparsity, we substitute canned texts by their labels, cf. (1b), (1a). Note that the textual output of Methodius doesn't contain nonterminal symbols the sort used in Balakrishnan et al.'s approach. We use only special terminal symbols, which appear both in content plans (decorating terminal nodes in the tree) and in texts (representing the corresponding chunks of canned texts).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Delexicalization", "sec_num": "3.1" }, { "text": "We anonymize exhibits by replacing them with entity0, entity1, etc in both the content plans and corresponding text. In each text, there is a single focal exhibit. The focal exhibit is compared to one or many exhibits and this is expressed in text using singular and plural forms respectively (e.g. the other vessel, which originates from region1 VS the other coins, which were created in city0). We use two substitution forms: entity1 (for singular) and entityplural.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Anonymization and Augmentation", "sec_num": "3.2" }, { "text": "Content plans are augmented with relevant information concerning the types of exhibits that occur in a content plan. The type predicate relates an exhibit to the NP it corresponds to in the text. This information is encoded within the Methodius logical form and thus is available for the Methodius system when it comes to generating text. However, since we anonymize exhibits and we ignore the logical forms, we need to explicitly provide the type information of each exhibit. 
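The anonymization and delexicalization steps described above amount to a deterministic rewrite applied to both sides of each training pair. The following Python sketch illustrates the idea; the function name and the exact indexing policy (focal exhibit as entity0, a plural comparison group collapsed to entityplural) are our assumptions for illustration rather than the actual Methodius preprocessing code.

```python
import re

def anonymize_pair(content_plan: str, text: str, exhibits: list, focal: str):
    """Replace exhibit names with anonymous labels in both the linearized
    content plan and the target text.  The focal exhibit becomes entity0;
    a group of comparison exhibits is collapsed to the token entityplural,
    and a single comparison exhibit becomes entity1."""
    others = [e for e in exhibits if e != focal]
    mapping = {focal: "entity0"}
    if len(others) > 1:
        mapping.update({e: "entityplural" for e in others})
    elif others:
        mapping[others[0]] = "entity1"
    for name, label in mapping.items():
        pattern = re.compile(re.escape(name), flags=re.IGNORECASE)
        content_plan = pattern.sub(label, content_plan)
        text = pattern.sub(label, text)
    return content_plan, text
```

Applying the same rewrite to both sides keeps the special terminals alignable between input and output, which is what the evaluation in Section 5.1 relies on.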
Methodius sometimes produces content plans in which the first FACT TYPE is missing arg2. This missing position corresponds to the focal exhibit in the text. The modified corpus regiments the input by ensuring every FACT TYPE includes arg2. For every exhibit in the the Methodius content plan not explicitly typed we add a new OPTIONAL TYPE branch to the tree which includes the type of the exhibit.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Anonymization and Augmentation", "sec_num": "3.2" }, { "text": "(1) a. This is a marriage cauldron and it was created during the classical period in between 420 and 410 B.C. Table 1 , where the first and second numbers correspond to the number of content plans including CONTRAST and SIMI-LARITY, respectively, while the third corresponds to the number of content plans which include neither of these RST types. The average lengths for input (number of tokens) and output (number of words) are shown in Table 2 . The output of Methodius is limited with respect to both the homogeneity and lengths of the texts-Methodius only infrequently produces very short or long texts, e.g. one or six sentences respectively. One of the test sets, which is described below, is explicitly constructed to determine whether the model's knowledge of discourse structure is limited by the length of the texts it sees.", "cite_spans": [], "ref_spans": [ { "start": 110, "end": 117, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 439, "end": 446, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Anonymization and Augmentation", "sec_num": "3.2" }, { "text": "In the training set, there are around 4300 examples harvested by using the Methodius system. The higher number of inputs with SIMILARITY (2911) is due to the Methodius system. This proportion of SIMILARITY persists into every split except the challenge test set, where the number of inputs of distinct RST types is more homogeneous.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Set", "sec_num": "4.1" }, { "text": "We have two splits of data for our experiments. One we dub the 'challenge split', the other the 'standard split'. The major difference between them is their average lengths. The average length of the challenge split items are roughly half the length of the training set items, while the average length of the standard split is roughly seventy five percent of the training set items.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Test sets", "sec_num": "4.2" }, { "text": "Standard In the standard split, the average length of items in the training and validation sets is roughly the same; the distribution of lengths is similar in the training, valid, and test sets but the training set still includes slightly longer sequences on average. The proportion of items with distinct RST types is roughly the same between the train, valid, and standard test sets. This test set doesn't identify possible effects of item length on correct discourse structure production.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Test sets", "sec_num": "4.2" }, { "text": "Challenge The challenge test set consists of items on average half the length of the the average lengths of items in the train and valid sets. Due to the lower frequency of short items produced by Methodius, the number of items in the challenge set is reduced. 
The distribution of items with CON-TRAST and SIMILARITY is homogeneous.With respect to distinguishing RST types, the challenge test set is no more difficult than the standard test set; the item length is shorter but no less structured. Moreover, the set of lexemes-including delexicalized expressions-which occur in the test set are present in the training set. However, there are patterns in the test set which are uncommon or unseen in the training set, e.g. one content plan in the challenge set begins with CONTRAST but no such items are found in training. This distinguishes possible effects of length, e.g. 'RST type X occurs in the third sentence', from effect of RST tree structure in the input for correct discourse structure production, i.e. 'RST type X must correspond to lexeme/structure Y'. These challenge test-specific content plans help to determine how well a model learns to associate certain strings with either CONTRAST or SIMILAR-ITY. If the model stumbles on shorter texts then its knowledge of RST structure might be (erroneously) conditioned on item length.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Test sets", "sec_num": "4.2" }, { "text": "5 Evaluation Methods", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Test sets", "sec_num": "4.2" }, { "text": "Since the data we generate after preprocessing contains certain expressions which we dub 'special terminals', these expressions can be tracked between the target and the hypothesis. By obtaining metrics based on the correspondence between these special terminals, we get a picture how close the hypothesis is to the target. This measure enjoys some useful properties. Firstly, it's cheap-it is defined solely in terms of expressions which occur both in the input (content plan) and in the output (text). Second, the special terminals stand for important parts of the text-those ones that are explicitly provided as values to features in the content plan (since they are terminals). Hence, having information about their presence gives us a good hint of the quality of a text. In addition to standard evaluation metrics scores such as BLEU4, we report the following metrics for each test item: 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metrics on Special Terminals", "sec_num": "5.1" }, { "text": "\u2022 Repetitions: A special terminal is present in the hypothesis n times but in the target text it occurs m times, where m < n. We calculate n\u2212m for every such special terminal and sum up.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metrics on Special Terminals", "sec_num": "5.1" }, { "text": "\u2022 Omissions: How many times special terminals occurring in the target text are not generated at all in the hypothesis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metrics on Special Terminals", "sec_num": "5.1" }, { "text": "\u2022 Hallucinations: Number of occurrences of those special terminals in the hypothesis that have no occurrence in the target.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metrics on Special Terminals", "sec_num": "5.1" }, { "text": "We also provide a count for the number of items in which (within tables in 6 Self-training", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse adverbials for Contrast and Similarity", "sec_num": "5.2" }, { "text": "With most NLG applications, large amounts of parallel data are not readily available. 
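For concreteness, the repetition, omission and hallucination counts defined in Section 5.1 reduce to simple token counting over the special terminals. The sketch below shows one reading of those definitions; whitespace tokenization and the treatment of hypothesis-only terminals as hallucinations rather than repetitions are our assumptions.

```python
from collections import Counter

def special_terminal_errors(target: str, hypothesis: str, specials: set):
    """Count repetitions, omissions and hallucinations of special terminals
    for one test item (one possible reading of the Section 5.1 definitions)."""
    t = Counter(tok for tok in target.split() if tok in specials)
    h = Counter(tok for tok in hypothesis.split() if tok in specials)
    # terminal occurs in both, but more often in the hypothesis
    repetitions = sum(h[s] - t[s] for s in h if 0 < t[s] < h[s])
    # terminal occurs in the target but never in the hypothesis
    omissions = sum(t[s] for s in t if h[s] == 0)
    # terminal occurs in the hypothesis but never in the target
    hallucinations = sum(h[s] for s in h if t[s] == 0)
    return repetitions, omissions, hallucinations
```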
This is true even in the case of Methodius, because there are a finite number of exhibits and facts and thus the number of meaningful combinations which can be constructed from them is limited. In order to reduce annotated data needs, Kedzie (2020) explore self-training for the more challenging case of generating from compositional input representations. Self-training involves the construction of unlabeled data. The process of self-training is the following. First, the model is trained on the initial parallel data, i.e. the data used in the models without self-training. Subsequently, an additional set of unlabeled inputs is provided: such data might exist but be unlabeled but if no such data exists it can be generated (e.g., handcrafted using some heuristics). The unlabelled inputs are, in the present context, content plans without corresponding output text. Next the existing model is used to generate the labels for the unlabeled data. This procedure results in a new set of parallel data. Because its labels don't come from the data-since they're outputs of the model-this cannot be considered parallel data in the full sense. We dub the resulting data 'pseudo-labelled. ' We train a new model on this data. Then we reuse the genuine parallel data for fine tuning this model. This process can be repeated to generate various models. (1) describes the process in brief: Train a model on the pseudo-parallel data;", "cite_spans": [ { "start": 321, "end": 327, "text": "Kedzie", "ref_id": null }, { "start": 1272, "end": 1273, "text": "'", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Discourse adverbials for Contrast and Similarity", "sec_num": "5.2" }, { "text": "5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse adverbials for Contrast and Similarity", "sec_num": "5.2" }, { "text": "Fine-tune the model on L;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse adverbials for Contrast and Similarity", "sec_num": "5.2" }, { "text": "6 until convergence or maximum iteration;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse adverbials for Contrast and Similarity", "sec_num": "5.2" }, { "text": "7 Reranking with reverse models", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse adverbials for Contrast and Similarity", "sec_num": "5.2" }, { "text": "In the syntax-semantics interface, the parsing task is usually to build a correct semantic (or syntactic) representation of a sentence. One can consider this task with respect to neural networks-which operate on sequences-straightforwardly by reversing the order of the parallel data: the source sequence (meaning) becomes the target, and the target sequence (text) becomes the source. Following the terminology of Li and White (2020), we call such models reverse models, while models that generate text from meaning representations are forward models. 3 We can rerank the output of a forward model with the help of its corresponding reverse model. Given several outputs of a beam search of the forward model, we select the one that makes the best meaning representation if it is given to a reverse model as an input. Here, best means the one that has lowest perplexity with respect to forced decoding. One can combine self-training and reranking: Train forward and reverse models on the parallel data and then train forward and reverse models on the pseudo-parallel data. Afterwards finetune them again on the initial parallel data. 
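The reranking step itself is straightforward once the reverse model can score a forced decoding of the input meaning representation given a candidate text. The sketch below is a minimal illustration; reverse_logprobs is a caller-supplied wrapper around whatever toolkit is used (not a specific Fairseq call) and is assumed to return the per-token log-probabilities of the meaning representation given the candidate text.

```python
import math

def perplexity(token_logprobs):
    """Perplexity from per-token (natural-log) probabilities of a forced decoding."""
    return math.exp(-sum(token_logprobs) / max(len(token_logprobs), 1))

def rerank(candidates, meaning_rep, reverse_logprobs):
    """Select, from the forward model's beam `candidates`, the text under
    which the reverse (text-to-MR) model assigns the original meaning
    representation the lowest forced-decoding perplexity."""
    return min(candidates,
               key=lambda text: perplexity(reverse_logprobs(text, meaning_rep)))
```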
Subsequently, use the reverse models to rerank the output of the forward models. Train forward and reverse models on the pseudo-parallel data;", "cite_spans": [ { "start": 553, "end": 554, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Discourse adverbials for Contrast and Similarity", "sec_num": "5.2" }, { "text": "5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse adverbials for Contrast and Similarity", "sec_num": "5.2" }, { "text": "Fine-tune both models on L;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse adverbials for Contrast and Similarity", "sec_num": "5.2" }, { "text": "6 until convergence or maximum iteration;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse adverbials for Contrast and Similarity", "sec_num": "5.2" }, { "text": "We ran self-training experiments with two sets of unlabeled data. One of them consists of the content plans generated by Methodius. The other one, dubbed 'heuristic,' is developed from the existing labeled data. The heuristic data is produced by the following method: for every content plan produced by Methodius, extract the set of subtrees of the content plan which respect some soft constraints on structure. We avoid extracting trees that start with an optional type. The subtrees are randomly selected but their distribution is required to closely follow the distribution of distinct RST types in the training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "8" }, { "text": "Since the size of the Methodius data set is limited, the heuristic data set provides useful cheap supplementary content for training (compared to the cost of eliciting text corresponding to content plans through e.g. Turkers). We are thus interested whether having genuine Methodius content plans, which are not straightforward to generate in large amounts, could be completed by a heuristic data set generated from the labeled training data set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "8" }, { "text": "The FACT models were trained on the FACT versions of the data set, which is obtained by simply deleting the RST structure from the RST data set. 4 We refer to the models (for sake of clarity) by the names in Table 3 . There are only 947 content plans for selftraining, while the training set size is 4304. The limited number of content plans for self-trainining is due to the homogeneity of the Methodius output, the intention to sync the length of training and test sets, and the finite number of exhibits in the Methodius data base. These content plans, which are harvested from Methodius, are on average just half the length of the content plans in the training set. Their shortness ensures the system is exposed to items of multiple lengths. Because of their reduced length and their production by the Methodius system, variation in the content of the short sequences is limited. 
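For concreteness, the vanilla self-training loop of Section 6, which consumes these unlabeled content plans, can be sketched as follows; the three callables are placeholders for the underlying toolkit (training from scratch, fine-tuning, and generation) rather than a specific API.

```python
def self_train(train, finetune, generate, labeled, unlabeled_plans, rounds=3):
    """Vanilla self-training: pseudo-label the unlabeled content plans with
    the current model, retrain on the resulting pseudo-parallel data, then
    fine-tune on the genuine parallel data; repeat for a fixed number of
    rounds (or until convergence)."""
    model = train(labeled)                                # initial model
    for _ in range(rounds):
        pseudo = [(plan, generate(model, plan)) for plan in unlabeled_plans]
        model = train(pseudo)                             # pseudo-parallel data
        model = finetune(model, labeled)                  # genuine parallel data
    return model
```

The ST-RMR variant described below differs only in that the pseudo-labels are first reranked with the reverse model before retraining.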
The unique unlabelled data size differs between RST and FACT data sets, because the data for FACT is produced by pruning the RST data, the deletion of structure reduces the heterogeneity of data, resulting in fewer unique sequences for the FACT-LG input.", "cite_spans": [ { "start": 145, "end": 146, "text": "4", "ref_id": null } ], "ref_spans": [ { "start": 208, "end": 215, "text": "Table 3", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Experiments and Results", "sec_num": "8" }, { "text": "We trained the following models:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "8" }, { "text": "\u2022 LBL: A standard LSTM seq2seq model with attention on the labeled data, which is also the base model for the other methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "8" }, { "text": "\u2022 ST-VAN: A model trained with vanilla selftraining.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "8" }, { "text": "\u2022 ST-RMR: A model self-trained with reverse model reranking for pseudo-labeling.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "8" }, { "text": "Models were trained over several iterations, though for exposition the results reported below concern just the best model iterations. 5 BLEU4 is calculated on both the standard and challenge test sets. BLEU4, though limited in the conclusions it supports, seems informative enough to allow one to distinguish between RST and FACT models; we report it in Appendix D. BLEU4 is on average 5 or more points higher for RST models than FACT models across the test sets.", "cite_spans": [ { "start": 134, "end": 135, "text": "5", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "8" }, { "text": "We count the sum of repetitions, hallucinations and omissions per test set and report the average per item, simply dividing the sum by the number of test set samples. Fig. 2 and Fig. 3 show the results, chiefly the uniform improvement of the self-training and reranking models over the baseline LSTM models.", "cite_spans": [], "ref_spans": [ { "start": 167, "end": 184, "text": "Fig. 2 and Fig. 3", "ref_id": null } ], "eq_spans": [], "section": "Repetitions, Hallucinations and Omissions", "sec_num": "8.1" }, { "text": "RST-SM with self-training is the best model. RST-SM with both self-training and reverse model reranking produced some of the best results too.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Repetitions, Hallucinations and Omissions", "sec_num": "8.1" }, { "text": "RST-SM and RST-LG show similar performance when it comes to repetitions, hallucinations, and omissions on the standard test set. RST-SM outperforms RST-LG on the challenge set. RST models uniformly outperform FACT models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Repetitions, Hallucinations and Omissions", "sec_num": "8.1" }, { "text": "We observed the models sometimes produced stuttering, i.e. multiple repetition. Even one of the best models with respect to the standard test set-RST-SM-ST-VAN (see Fig. 2 )-produced two examples of stuttering (out of 799) with 57 and 59 repetitions respectively. Just these two outputs nearly doubled the average error rate of RST-SM-ST-VAN. The other models reported here did not produce such extreme stuttering. 
But despite stuttering, RST-SM-ST-VAN is still the best model with respect to the metrics considered here. In Appendix C, model performance is reported by simply counting the total number of test examples in which a model generates neither repetitions, nor omissions, nor hallucinations.", "cite_spans": [], "ref_spans": [ { "start": 165, "end": 171, "text": "Fig. 2", "ref_id": null } ], "eq_spans": [], "section": "Repetitions, Hallucinations and Omissions", "sec_num": "8.1" }, { "text": "The following error from FACT-LG-ST-RMR shows multiple hallucination of the exhibit item's creation time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Repetitions, Hallucinations and Omissions", "sec_num": "8.1" }, { "text": "T this is an imperial portrait and it portrays roman-emperor0 . like the coin you recently saw , this imperial portrait was created during historical-period0 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Repetitions, Hallucinations and Omissions", "sec_num": "8.1" }, { "text": "H this is an imperial portrait and it portrays roman-emperor0 . like the coin , this imperial portrait was created during historical-period0 . it was created in entity0-creation-time and it was created in entity0-creation-time .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Repetitions, Hallucinations and Omissions", "sec_num": "8.1" }, { "text": "Further errors are shown in Appendix E.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Repetitions, Hallucinations and Omissions", "sec_num": "8.1" }, { "text": "When the FACT-LBL model makes mistakes, such mistakes frequently correspond to the substitution of one lexeme marking a rhetorical relation for another marking a distinct (sometimes opposite) relation. 
The following hypothesis replaces the CON-TRAST in the target with a SIMILARITY, misidentifying the origin of some previous exhibit in the chain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rhetorical Relation Generation", "sec_num": "8.2" }, { "text": "T unlike the other exhibits you recently saw , which originate from region0 , this coin was originally from city0 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rhetorical Relation Generation", "sec_num": "8.2" }, { "text": "H like the other exhibits you recently saw , this coin originates from city0 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rhetorical Relation Generation", "sec_num": "8.2" }, { "text": "In the following hypothesis the erroneous substitution of SIMILARITY by CONTRAST leads to an outright contradiction:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rhetorical Relation Generation", "sec_num": "8.2" }, { "text": "T like the other exhibits you recently saw , this marriage cauldron is currently in museum0 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rhetorical Relation Generation", "sec_num": "8.2" }, { "text": "H unlike the other exhibits you recently saw , which are located in museum0 , this marriage cauldron is located in museum0 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rhetorical Relation Generation", "sec_num": "8.2" }, { "text": "Less frequently the insertion of SIMILARITY or CONTRAST compares the topic of an exhibit to itself:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rhetorical Relation Generation", "sec_num": "8.2" }, { "text": "T this is a statue and it was created during historical-period0 in entity0-creation-time .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rhetorical Relation Generation", "sec_num": "8.2" }, { "text": "H this is a statue and it was created during historical-period0 in entity0-creation-time . like the statue , this statue was created during historical-period0 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rhetorical Relation Generation", "sec_num": "8.2" }, { "text": "The details of the number of errors and successes in generating discourse connectives are reported in Appendix A. Fig. 4 and Fig. 5 show Fisher's Exact Test statistics for best performing RST (SM and LG) and FACT (SM and LG) models.", "cite_spans": [], "ref_spans": [ { "start": 114, "end": 120, "text": "Fig. 4", "ref_id": "FIGREF4" }, { "start": 125, "end": 131, "text": "Fig. 5", "ref_id": null } ], "eq_spans": [], "section": "Rhetorical Relation Generation", "sec_num": "8.2" }, { "text": "The best performances are shown by RST-SM and RST-LG. Even RST-LBL produces only 12 mistakes out of 799 test items. Production of rhetorical connectives corresponding to CONTRAST and SIM-ILARITY is uniformly correct. After fine tuning and reranking, the errors reduced to 0 and 2 respectively. With respect to the FACT models, LBL makes mistakes, but improves upon self-training and reranking. Nonetheless RST models outperform the FACT models. While the best FACT model performs well with respect to producing the correct discourse connective/structure, this model produces serious content errors that render some outputs (discussed in Section 8.1) incoherent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Standard Test", "sec_num": "8.2.1" }, { "text": "On the challenge test, no model achieved perfect accuracy. 
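The pairwise model comparisons in Figs. 4 and 5 rest on Fisher's Exact Test over 2×2 tables of items with versus without a cue-word error; a minimal sketch with illustrative counts (not the exact figures behind the plots) is given below.

```python
from scipy.stats import fisher_exact

def significantly_different(errors_a, errors_b, n_items, alpha=0.05):
    """Two-sided Fisher's Exact Test on a 2x2 contingency table of
    (items with a cue-word error, items without) for two models
    evaluated on the same test set."""
    table = [[errors_a, n_items - errors_a],
             [errors_b, n_items - errors_b]]
    _, p_value = fisher_exact(table, alternative="two-sided")
    return p_value < alpha

# Illustrative only: 12 vs. 2 errors out of 799 standard-test items.
print(significantly_different(12, 2, 799))
```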
The best performances are by RST-SM and RST-LG. Their performance is similar. It is worth noting that in the case of FACT-SM, reranking with self-training gave results comparable to RST-SM (there is no significance difference in terms of Fisher's test with significance at 5%). This is not the case for FACT-LG and RST-LG models. RST-LG-ST-RMR outperforms the best FACT-LG model (see Fig. 5 ).", "cite_spans": [], "ref_spans": [ { "start": 443, "end": 449, "text": "Fig. 5", "ref_id": null } ], "eq_spans": [], "section": "Challenge Test", "sec_num": "8.2.2" }, { "text": "From these experiments, we see that on the standard test set, RST-Large and RST-Small models performed best in terms of producing the correct discourse connective for SIMILARITY (respectively CONTRAST). While errors occurred-sometimes matching the results of the corresponding FACT models-RST models correctly distinguish between producing the lexeme for SIMILARITY versus CONTRAST, while FACT models sometimes confuse SIMILARITY with CONTRAST.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Challenge Test", "sec_num": "8.2.2" }, { "text": "On the challenge data every model made errors. The RST models outperformed the corresponding FACT models, significantly in the case of RST-LG over RST-LG, as seen in Fig. 5 .", "cite_spans": [], "ref_spans": [ { "start": 166, "end": 172, "text": "Fig. 5", "ref_id": null } ], "eq_spans": [], "section": "Challenge Test", "sec_num": "8.2.2" }, { "text": "Though the RST models yielded less dramatic improvements on comparisons in the challenge set, it is worth emphasizing that the RST models produce significantly fewer repetitions, omissions and hallucination compared to the FACT models (Figs. 6 and 7, Appendix C), further supporting the conclusion that the RST input produces better output. This result is interesting, since the content plans in the FACT models are shorter than those in RST models, yet still prompt the former models to produce more words than RST models do.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Challenge Test", "sec_num": "8.2.2" }, { "text": "While traditional natural language generation systems, e.g. Methodius, often employ knowledge graphs, the use of such structure in neural NLG is underdeveloped. An exception in this respect is WebNLG (Gardent et al., 2017) , which is a multilingual corpus for natural language generation. An", "cite_spans": [ { "start": 200, "end": 222, "text": "(Gardent et al., 2017)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Related and Future Work", "sec_num": "9" }, { "text": "F A C T -L B L F A C T -S M -S T -R M R F A C T -L G -S T -R M R R S T -L B L R S T -S M -S T -V A N R S T -L G -S T -R M R 0 0.5 1 1.5 Mean Errors per Item Repetitions Omissions Hallucinations Figure 2: Standard Set F A C T -L B L F A C T -S M -S T -R M R F A C T -L G -S T -R M R R S T -L B L R S T -S M -S T -V A N R S T -L G -S T -R M R 0 1 2 3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related and Future Work", "sec_num": "9" }, { "text": "Mean Errors per Item entry of WebNLG is a set of RDF triples (representing subj, predicate, object) paired with the corresponding text, which is the sequences of sentences which serve as verbalization of those triples. But it is noteworthy that the main focus in WebNLG is micro-planning (sentence-level generation). Consequently, WebNLG only makes use of a single, implicit rhetorical relation, namely ELABORATION. 
ELABORATION is frequent in the Methodius corpus. But Methodius uses more interesting rhetorical relations, too, including CONTRAST and SIMILARITY, thus the content (both in terms of meaning representations and texts) is significantly different from WebNLG. For future work, there are number of direction we intend to explore, including the following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related and Future Work", "sec_num": "9" }, { "text": "\u2022 Study whether large-scale pretrained models likewise fail to generalize well without dis- , where towards errors counts if either there is an incorrectly generated discourse cue word, or there has been a cue word generated while the target has none, or no cue word is generated but the reference contains one. The dotted line links two models if there is a significant difference between their performance in terms of Fisher's Exact Test statistics (we take the significance threshold 5%).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related and Future Work", "sec_num": "9" }, { "text": "F A C T -L B L F A C T -S M -S T -R M R F A C T -L G -S T -R M R R S T -L B L R S T -S M -S T -V A N R S T -L G -S T -V", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related and Future Work", "sec_num": "9" }, { "text": "course relations in the input.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related and Future Work", "sec_num": "9" }, { "text": "\u2022 Experiment with more diverse outputs for Methodius, e.g. crowd-sourcing further outputs to express the content plans.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related and Future Work", "sec_num": "9" }, { "text": "\u2022 Study whether constrained decoding could be used to reduce discourse structure errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related and Future Work", "sec_num": "9" }, { "text": "The overall conclusion is that including RST relations in the input content plans is necessary to achieve optimum performance in correctly and coherently expressing discourse relations in the neural reimplementation of Methodius. This is somewhat surprising since the FACT-only inputs actually have all the information necessary to infer that a SIMILARITY or CONTRAST relation should be expressed, but the models nevertheless struggle to learn the desired same/different generalization. Moreover, the errors are often jarring-they produce genuine incoherence in the text. We see the best performance from the RST model with small but clean self-training data (RST-SM), as it comes from Methodius and thus follows the same general patterns as the ones in the test set. The large RST model (RST-LG) had similar Figure 5: Challenge Set: Errors in generating discourse cue words for SIMILARITY and/or CONTRAST (unlike and/or like), where an item produces an error if either there is an incorrectly generated discourse cue word, or there has been a cue word generated while the target has none, or no cue word is generated but the reference contains one. The dotted line links two models if there is a significant difference between their performance in terms of Fisher's Exact Test statistics (with significance threshold of 5%).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "10" }, { "text": "performance to the small one. FACT models, both small and large, show significant self-training improvements when reranking with reverse models. 
Because the RST baseline already performs relatively well, such an improvement is not observable with them. RST-SM with vanilla self-training already showed high performance. In the case of the FACT models, we saw that reranking with reverse models lowers repetitions, omissions and hallucinations in total. It was also beneficial for the RST-LG model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "10" }, { "text": "Despite the highly regular nature of the rulebased texts, even our best models do not get close to zero content errors, highlighting the importance of continued work on eliminating these errors, e.g. using pretrained models (Kale, 2020; Kale and Rastogi, Forthcoming) or constrained decoding (Balakrishnan et al., 2019b) .", "cite_spans": [ { "start": 224, "end": 236, "text": "(Kale, 2020;", "ref_id": "BIBREF9" }, { "start": 237, "end": 267, "text": "Kale and Rastogi, Forthcoming)", "ref_id": null }, { "start": 292, "end": 320, "text": "(Balakrishnan et al., 2019b)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "10" }, { "text": "The data and code for this paper can be accessed by the following link: https://github.com/ Methodius-Project/Neural-Methodius.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "DataSince one of our objectives is to compare the performance of neural networks on data with and without", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The term 'hypothesis' is used for the output of the model, following the terminology used in Fairseq.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "While this is an arbitrary choice of terminology, in the context of NLG it seems to be appropriate to call the forward model the one that generates text out of meaning representation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Deleting RST structure results in the deletion of the tree structure too.5 In addition to LSTM models, we trained a baseline transformer on the labeled data but the results were unsatisfactory.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research was supported by a collaborative open science research agreement between Facebook and The Ohio State University.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Proceedings of the 1st workshop on discourse structure in neural nlg. In Proceedings of the 1st Workshop on Discourse Structure in Neural NLG", "authors": [ { "first": "Anusha", "middle": [], "last": "Balakrishnan", "suffix": "" }, { "first": "Vera", "middle": [], "last": "Demberg", "suffix": "" }, { "first": "Chandra", "middle": [], "last": "Khatri", "suffix": "" }, { "first": "Abhinav", "middle": [], "last": "Rastogi", "suffix": "" }, { "first": "Donia", "middle": [], "last": "Scott", "suffix": "" }, { "first": "Marilyn", "middle": [], "last": "Walker", "suffix": "" }, { "first": "Michael", "middle": [], "last": "White", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anusha Balakrishnan, Vera Demberg, Chandra Khatri, Abhinav Rastogi, Donia Scott, Marilyn Walker, and Michael White. 2019a. 
Proceedings of the 1st work- shop on discourse structure in neural nlg. In Pro- ceedings of the 1st Workshop on Discourse Structure in Neural NLG.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Constrained decoding for neural NLG from compositional representations in task-oriented dialogue", "authors": [ { "first": "Anusha", "middle": [], "last": "Balakrishnan", "suffix": "" }, { "first": "Jinfeng", "middle": [], "last": "Rao", "suffix": "" }, { "first": "Kartikeya", "middle": [], "last": "Upasani", "suffix": "" }, { "first": "Michael", "middle": [], "last": "White", "suffix": "" }, { "first": "Rajen", "middle": [], "last": "Subba", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "831--844", "other_ids": { "DOI": [ "10.18653/v1/P19-1080" ] }, "num": null, "urls": [], "raw_text": "Anusha Balakrishnan, Jinfeng Rao, Kartikeya Upasani, Michael White, and Rajen Subba. 2019b. Con- strained decoding for neural NLG from composi- tional representations in task-oriented dialogue. In Proceedings of the 57th Annual Meeting of the As- sociation for Computational Linguistics, pages 831- 844, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Evaluating the state-of-the-art of end-to-end natural language generation: The e2e nlg challenge", "authors": [ { "first": "Ond\u0159ej", "middle": [], "last": "Du\u0161ek", "suffix": "" }, { "first": "Jekaterina", "middle": [], "last": "Novikova", "suffix": "" }, { "first": "Verena", "middle": [], "last": "Rieser", "suffix": "" } ], "year": 2020, "venue": "Computer Speech & Language", "volume": "59", "issue": "", "pages": "123--156", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ond\u0159ej Du\u0161ek, Jekaterina Novikova, and Verena Rieser. 2020. Evaluating the state-of-the-art of end-to-end natural language generation: The e2e nlg challenge. Computer Speech & Language, 59:123-156.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Creating training corpora for NLG micro-planners", "authors": [ { "first": "Claire", "middle": [], "last": "Gardent", "suffix": "" }, { "first": "Anastasia", "middle": [], "last": "Shimorina", "suffix": "" }, { "first": "Shashi", "middle": [], "last": "Narayan", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Perez-Beltrachini", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "179--188", "other_ids": { "DOI": [ "10.18653/v1/P17-1017" ] }, "num": null, "urls": [], "raw_text": "Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. Creating train- ing corpora for NLG micro-planners. In Proceed- ings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 179-188, Vancouver, Canada. 
Associa- tion for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Revisiting self-training for neural sequence generation", "authors": [ { "first": "Junxian", "middle": [], "last": "He", "suffix": "" }, { "first": "Jiatao", "middle": [], "last": "Gu", "suffix": "" }, { "first": "Jiajun", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Marc'aurelio", "middle": [], "last": "Ranzato", "suffix": "" } ], "year": 2020, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Junxian He, Jiatao Gu, Jiajun Shen, and Marc'Aurelio Ranzato. 2020. Revisiting self-training for neural sequence generation. In International Conference on Learning Representations.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Automated discourse generation using discourse structure relations", "authors": [ { "first": "Eduard", "middle": [ "H" ], "last": "Hovy", "suffix": "" } ], "year": 1993, "venue": "Artificial Intelligence", "volume": "63", "issue": "1", "pages": "341--385", "other_ids": { "DOI": [ "10.1016/0004-3702(93)90021-3" ] }, "num": null, "urls": [], "raw_text": "Eduard H. Hovy. 1993. Automated discourse gener- ation using discourse structure relations. Artificial Intelligence, 63(1):341 -385.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Choosing the best comparison under the circumstances", "authors": [ { "first": "Amy", "middle": [], "last": "Isard", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the International Workshop on Personalization Enhanced Access to Cultural Heritage (PATCH07)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amy Isard. 2007. Choosing the best comparison un- der the circumstances. In Proceedings of the In- ternational Workshop on Personalization Enhanced Access to Cultural Heritage (PATCH07), Corfu, Greece.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The methodius corpus of rhetorical discourse structures and generated texts", "authors": [ { "first": "Amy", "middle": [], "last": "Isard", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)", "volume": "", "issue": "", "pages": "1732--1736", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amy Isard. 2016. The methodius corpus of rhetori- cal discourse structures and generated texts. In Pro- ceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), pages 1732-1736, Portoro\u017e, Slovenia. European Language Resources Association (ELRA).", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Speaking the users' languages. Intelligent Systems", "authors": [ { "first": "Amy", "middle": [], "last": "Isard", "suffix": "" }, { "first": "Jon", "middle": [], "last": "Oberlander", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Matheson", "suffix": "" }, { "first": "Ion", "middle": [], "last": "Androutsopoulos", "suffix": "" } ], "year": 2003, "venue": "IEEE", "volume": "18", "issue": "1", "pages": "40--45", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amy Isard, Jon Oberlander, Colin Matheson, and Ion Androutsopoulos. 2003. Speaking the users' lan- guages. 
Intelligent Systems, IEEE, 18(1):40-45.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Text-to-text pre-training for data-totext tasks", "authors": [ { "first": "Mihir", "middle": [], "last": "Kale", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mihir Kale. 2020. Text-to-text pre-training for data-to- text tasks.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Forthcoming. Textto-text pre-training for data-to-text tasks", "authors": [ { "first": "Mihir", "middle": [], "last": "Kale", "suffix": "" }, { "first": "Abhinav", "middle": [], "last": "Rastogi", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mihir Kale and Abhinav Rastogi. Forthcoming. Text- to-text pre-training for data-to-text tasks.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A good sample is hard to find: Noise injection sampling and self-training for neural language generation models", "authors": [ { "first": "Chris", "middle": [], "last": "Kedzie", "suffix": "" }, { "first": "Kathleen", "middle": [], "last": "Mckeown", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/W19-8672" ] }, "num": null, "urls": [], "raw_text": "Chris Kedzie and Kathleen McKeown. 2019. A good sample is hard to find: Noise injection sampling and self-training for neural language generation models.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Proceedings of the 12th International Conference on Natural Language Generation", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "584--593", "other_ids": {}, "num": null, "urls": [], "raw_text": "In Proceedings of the 12th International Conference on Natural Language Generation, pages 584-593, Tokyo, Japan. Association for Computational Lin- guistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.6980" ] }, "num": null, "urls": [], "raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Self-training for compositional neural NLG. The third annual West Coast NLP Summit (WeCNLP), poster session", "authors": [ { "first": "Xintong", "middle": [], "last": "Li", "suffix": "" }, { "first": "Michael", "middle": [], "last": "White", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xintong Li and Michael White. 2020. Self-training for compositional neural NLG. 
The third annual West Coast NLP Summit (WeCNLP), poster session.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Rhetorical structure theory: Toward a functional theory of text organization", "authors": [ { "first": "William", "middle": [ "C" ], "last": "Mann", "suffix": "" }, { "first": "Sandra", "middle": [ "A" ], "last": "Thompson", "suffix": "" } ], "year": 1988, "venue": "Text & Talk", "volume": "8", "issue": "3", "pages": "243--281", "other_ids": { "DOI": [ "10.1515/text.1.1988.8.3.243" ] }, "num": null, "urls": [], "raw_text": "William C. Mann and Sandra A. Thompson. 1988. Rhetorical structure theory: Toward a functional theory of text organization. Text & Talk, 8(3):243-281.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "fairseq: A fast, extensible toolkit for sequence modeling", "authors": [ { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Edunov", "suffix": "" }, { "first": "Alexei", "middle": [], "last": "Baevski", "suffix": "" }, { "first": "Angela", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Gross", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Ng", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" } ], "year": 2019, "venue": "Proceedings of NAACL-HLT 2019: Demonstrations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "The Penn Discourse TreeBank 2.0", "authors": [ { "first": "Rashmi", "middle": [], "last": "Prasad", "suffix": "" }, { "first": "Nikhil", "middle": [], "last": "Dinesh", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Eleni", "middle": [], "last": "Miltsakaki", "suffix": "" }, { "first": "Livio", "middle": [], "last": "Robaldo", "suffix": "" }, { "first": "Aravind", "middle": [ "K" ], "last": "Joshi", "suffix": "" }, { "first": "Bonnie", "middle": [ "L" ], "last": "Webber", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, Livio Robaldo, Aravind K Joshi, and Bonnie L Webber. 2008. The Penn Discourse TreeBank 2.0. In LREC. Citeseer.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Semi-supervised neural text generation by joint learning of natural language generation and natural language understanding models", "authors": [ { "first": "Raheel", "middle": [], "last": "Qader", "suffix": "" }, { "first": "Fran\u00e7ois", "middle": [], "last": "Portet", "suffix": "" }, { "first": "Cyril", "middle": [], "last": "Labb\u00e9", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 12th International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "552--562", "other_ids": { "DOI": [ "10.18653/v1/W19-8669" ] }, "num": null, "urls": [], "raw_text": "Raheel Qader, Fran\u00e7ois Portet, and Cyril Labb\u00e9. 2019.
Semi-supervised neural text generation by joint learning of natural language generation and natural language understanding models. In Proceedings of the 12th International Conference on Natural Language Generation, pages 552-562, Tokyo, Japan. Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Building natural language generation systems", "authors": [ { "first": "Ehud", "middle": [], "last": "Reiter", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Dale", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ehud Reiter and Robert Dale. 2000. Building natural language generation systems. Cambridge University Press.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "DiMLex: A lexicon of discourse markers for text generation and understanding", "authors": [ { "first": "Manfred", "middle": [], "last": "Stede", "suffix": "" }, { "first": "Carla", "middle": [], "last": "Umbach", "suffix": "" } ], "year": 1998, "venue": "36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics", "volume": "2", "issue": "", "pages": "1238--1242", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manfred Stede and Carla Umbach. 1998. DiMLex: A lexicon of discourse markers for text generation and understanding. In 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 2, pages 1238-1242.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Rhetorical structure theory: looking back and moving ahead", "authors": [ { "first": "Maite", "middle": [], "last": "Taboada", "suffix": "" }, { "first": "William", "middle": [ "C" ], "last": "Mann", "suffix": "" } ], "year": 2006, "venue": "Discourse Studies", "volume": "8", "issue": "3", "pages": "423--459", "other_ids": { "DOI": [ "10.1177/1461445606061881" ] }, "num": null, "urls": [], "raw_text": "Maite Taboada and William C. Mann. 2006. Rhetorical structure theory: looking back and moving ahead. Discourse Studies, 8(3):423-459.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Efficient realization of coordinate structures in combinatory categorial grammar", "authors": [ { "first": "Michael", "middle": [], "last": "White", "suffix": "" } ], "year": 2006, "venue": "Research on Language and Computation", "volume": "4", "issue": "1", "pages": "39--75", "other_ids": { "DOI": [ "10.1007/s11168-006-9010-2" ] }, "num": null, "urls": [], "raw_text": "Michael White. 2006. Efficient realization of coordinate structures in combinatory categorial grammar. Research on Language and Computation, 4(1):39-75.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "num": null, "text": "Content plan corresponding to the text (1a)" }, "FIGREF1": { "type_str": "figure", "uris": null, "num": null, "text": "Appendix A): (a) Target and Hypothesis both contain unlike; (b) Target and Hypothesis both contain like; (c) Target contains unlike but Hypothesis generates like; (d) Target contains like but Hypothesis generates unlike (Like vs. Unlike); (e) Target contains neither like nor unlike and the same holds of Hypothesis (No rel in both); (f) the rest of the cases." }, "FIGREF2": { "type_str": "figure", "uris": null, "num": null, "text": "and McKeown (2019), Qader et al. (2019) and He et al.
(2020) propose self-training methods for NLG. Li and White" }, "FIGREF3": { "type_str": "figure", "uris": null, "num": null, "text": "Figure 3: Challenge Set" }, "FIGREF4": { "type_str": "figure", "uris": null, "num": null, "text": "Standard Set: Errors in generating discourse cue words for SIMILARITY and/or CONTRAST (unlike and/or like)" }, "TABREF1": { "text": "Distribution of RST types in content plans in train and test data", "html": null, "content": "
Data | Average Words | Average Tokens
train | 52 | 95
valid | 52 | 96
standard test | 40 | 73
challenge test | 29 | 52
", "num": null, "type_str": "table" }, "TABREF2": { "text": "Average numbers of tokens in content plans and average numbers of words in corresponding texts in train and test data", "html": null, "content": "", "num": null, "type_str": "table" }, "TABREF6": { "text": "Models trained on training set of size 4304", "html": null, "content": "
", "num": null, "type_str": "table" } } } }