{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:28:06.586822Z"
},
"title": "Self-Training for Compositional Neural NLG in Task-Oriented Dialogue",
"authors": [
{
"first": "Xintong",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Ohio State University",
"location": {}
},
"email": ""
},
{
"first": "Jory",
"middle": [],
"last": "Stevens-Guille",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Ohio State University",
"location": {}
},
"email": "stevensguille.1@osu.edu"
},
{
"first": "Aleksandre",
"middle": [],
"last": "Maskharashvili",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Ohio State University",
"location": {}
},
"email": "maskharashvili.1@osu.edu"
},
{
"first": "Michael",
"middle": [],
"last": "White",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Ohio State University",
"location": {}
},
"email": "mwhite@ling.osu.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Neural approaches to natural language generation in task-oriented dialogue have typically required large amounts of annotated training data to achieve satisfactory performance, especially when generating from compositional inputs. To address this issue, we show that selftraining enhanced with constrained decoding yields large gains in data efficiency on a conversational weather dataset that employs compositional meaning representations. In particular, our experiments indicate that self-training with constrained decoding can enable sequence-tosequence models to achieve satisfactory quality using vanilla decoding with five to ten times less data than with ordinary supervised baseline; moreover, by leveraging pretrained models, data efficiency can be increased further to fifty times. We confirm the main automatic results with human evaluations and show that they extend to an enhanced, compositional version of the E2E dataset. The end result is an approach that makes it possible to achieve acceptable performance on compositional NLG tasks using hundreds rather than tens of thousands of training samples.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Neural approaches to natural language generation in task-oriented dialogue have typically required large amounts of annotated training data to achieve satisfactory performance, especially when generating from compositional inputs. To address this issue, we show that selftraining enhanced with constrained decoding yields large gains in data efficiency on a conversational weather dataset that employs compositional meaning representations. In particular, our experiments indicate that self-training with constrained decoding can enable sequence-tosequence models to achieve satisfactory quality using vanilla decoding with five to ten times less data than with ordinary supervised baseline; moreover, by leveraging pretrained models, data efficiency can be increased further to fifty times. We confirm the main automatic results with human evaluations and show that they extend to an enhanced, compositional version of the E2E dataset. The end result is an approach that makes it possible to achieve acceptable performance on compositional NLG tasks using hundreds rather than tens of thousands of training samples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Neural approaches to natural language generation (NLG) have received increasing attention due to their flexibility and end-to-end trainability (Wen et al., 2016; Mei et al., 2016; Du\u0161ek and Jurcicek, 2016; Du\u0161ek et al., 2019) . However, despite using simplistic input meaning representations (MR), most neural models require large quantities of clean annotated training data in order to obtain good performance. As such, the time and expense required to obtain sufficient training data is a significant obstacle to deploying neural NLG models at scale.",
"cite_spans": [
{
"start": 143,
"end": 161,
"text": "(Wen et al., 2016;",
"ref_id": "BIBREF38"
},
{
"start": 162,
"end": 179,
"text": "Mei et al., 2016;",
"ref_id": "BIBREF25"
},
{
"start": 180,
"end": 205,
"text": "Du\u0161ek and Jurcicek, 2016;",
"ref_id": "BIBREF12"
},
{
"start": 206,
"end": 225,
"text": "Du\u0161ek et al., 2019)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To enable richer task-oriented dialogue, Balakrishnan et al. (2019) argue for using compositional, tree-structured MRs that include discourse rela-tions, emphasizing the need for applications to exert control over these relations when generating text. Perhaps not surprisingly, their compositional input MRs further exacerbate annotated data needs. To address this issue, introduce a novel constrained decoding technique that nearly always yields correct output even in challenging cases. However, their constrained decoding method incurs a substantial runtime cost, making it too slow to deploy in task-oriented dialogue systems where low latency is a priority. Thus, finding ways to improve data efficiency for training models that perform satisfactorily with vanilla decoding remains an important challenge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In order to reduce annotated data needs, Kedzie and McKeown (2019) and Qader et al. (2019) propose self-training methods for NLG, though they do not explore self-training for the more challenging case of generating from compositional input representations. Arun et al. (2020) do explore self-training with compositional inputs, but they do not consider constrained decoding. In this paper, we investigate for the first time whether constrained decoding can be used during self-training to enhance data efficiency for compositional neural NLG, since the speed of constrained decoding is much less of a concern during self-training than it is at runtime in dialogue systems. In particular, we adapt and extend He et al.'s (2020) approach to self-training for MT to the setting of neural NLG from compositional MRs, comparing vanilla self-training to self-training enhanced with constrained decoding as well as with reverse model reranking (Shen et al., 2019; Yee et al., 2019) , a simpler technique where the n-best outputs of the forward model are reranked using scores from a reverse model. In both cases, the idea is to enhance the quality of the pseudo-annotated texts created during self-training, so that self-training can more successfully avoid entrenching the model's own Balakrishnan et al.'s (2019) conversational weather dataset. In the actual dataset, discourse relations have a DS prefix (e.g., DS CONTRAST), dialog acts have a DG prefix (e.g, DG INFORM) and arguments have an ARG prefix (e.g., ARG CITY); these are elided here for brevity. mistakes. We show that self-training benefits considerably from both methods, and that constrained decoding yields especially large gains in data efficiency. In particular, our experiments indicate that using constrained decoding during self-training, rather than at runtime, enables standard sequenceto-sequence (seq2seq) models to achieve satisfactory quality with much reduced latency.",
"cite_spans": [
{
"start": 41,
"end": 66,
"text": "Kedzie and McKeown (2019)",
"ref_id": "BIBREF20"
},
{
"start": 71,
"end": 90,
"text": "Qader et al. (2019)",
"ref_id": "BIBREF31"
},
{
"start": 257,
"end": 275,
"text": "Arun et al. (2020)",
"ref_id": "BIBREF0"
},
{
"start": 708,
"end": 726,
"text": "He et al.'s (2020)",
"ref_id": null
},
{
"start": 937,
"end": 956,
"text": "(Shen et al., 2019;",
"ref_id": "BIBREF35"
},
{
"start": 957,
"end": 974,
"text": "Yee et al., 2019)",
"ref_id": "BIBREF42"
},
{
"start": 1279,
"end": 1307,
"text": "Balakrishnan et al.'s (2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contributions are two-fold. On Balakrishnan et al.'s (2019) conversational weather dataset, we show that using constrained decoding during selftraining and their SEQ2SEQ-TREE model at runtime yields comparable performance with 20% of the annotated data as using the full training set in supervised fashion, and by leveraging pretrained models, annotated data needs can be further reduced to 2%. We then confirm the main automatic metric results with human evaluations and show that they hold for Balakrishnan et al.'s (2019) enhanced version of the E2E dataset (Du\u0161ek et al., 2019) .",
"cite_spans": [
{
"start": 500,
"end": 528,
"text": "Balakrishnan et al.'s (2019)",
"ref_id": null
},
{
"start": 565,
"end": 585,
"text": "(Du\u0161ek et al., 2019)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Neural NLG seq2seq models aim to generate a natural language text",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "2"
},
{
"text": "y = y 1 , \u2022 \u2022 \u2022 , y |y| from a mean- ing representation x = x 1 , \u2022 \u2022 \u2022 ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "2"
},
{
"text": "x |x| by modeling the conditional probability",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (y|x) = |y| i=1 P (y i |y <i , x) ,",
"eq_num": "(1)"
}
],
"section": "Method",
"sec_num": "2"
},
{
"text": "where y <i = y 1 , . . . , y i\u22121 denotes a prefix of y with length i \u2212 1. Usually, the model parameters are learned in supervised fashion from a set of annotated data",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "2"
},
{
"text": "L = {x k , y k } |L| k=1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "2"
},
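{
"text": "To make Equation (1) concrete, the following sketch (ours, not the authors' code, with made-up per-step distributions) accumulates log P(y_i | y_{<i}, x) under teacher forcing; supervised training maximizes this log-likelihood over the annotated pairs in L.\n\nimport math\n\n# Hypothetical per-step distributions P(y_i | y_{<i}, x) from a seq2seq decoder\n# under teacher forcing; the numbers are invented purely for illustration.\nstep_distributions = [\n    {'sunny': 0.7, 'cloudy': 0.2, '</s>': 0.1},\n    {'today': 0.6, 'tomorrow': 0.3, '</s>': 0.1},\n    {'</s>': 0.9, 'today': 0.1},\n]\ntarget = ['sunny', 'today', '</s>']\n\n# log P(y|x) = sum_i log P(y_i | y_{<i}, x); training minimizes its negation.\nlog_likelihood = sum(math.log(dist[tok]) for dist, tok in zip(step_distributions, target))\nprint(f'log P(y|x) = {log_likelihood:.3f}')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "2"
},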
{
"text": "Balakrishnan et al. 2019propose to generate annotated responses from compositional, treestructured MRs, as shown in Table 1 . They demonstrate that compositional MRs offer greater control over the expression of CONTRAST and JUSTIFI-CATION discourse relations and lead to improvements in semantic correctness in a human evaluation, which they argue is important for conversational systems where external knowledge like user models may inform decisions around contrast, grouping, or justifications (Carenini and Moore, 2006; Walker et al., 2007; White et al., 2010; Demberg et al., 2011) . By serializing the trees as shown, it is possible to use standard seq2seq models to effectively accomplish tree-to-tree generation. At runtime, the bracketing tokens can be straightforwardly removed to produce the final outputs.",
"cite_spans": [
{
"start": 496,
"end": 522,
"text": "(Carenini and Moore, 2006;",
"ref_id": "BIBREF3"
},
{
"start": 523,
"end": 543,
"text": "Walker et al., 2007;",
"ref_id": "BIBREF37"
},
{
"start": 544,
"end": 563,
"text": "White et al., 2010;",
"ref_id": "BIBREF40"
},
{
"start": 564,
"end": 585,
"text": "Demberg et al., 2011)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 116,
"end": 123,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Compositional Inputs",
"sec_num": "2.1"
},
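{
"text": "As a concrete illustration (ours, using a toy linearization that omits the DS/DG/ARG prefixes of the real data), the final surface text is obtained from the serialized tree output simply by dropping the bracketing tokens:\n\n# Toy linearized, annotated response (illustrative token names only).\nannotated = ('[INFORM [CONDITION it will be sunny ] ] '\n             '[INFORM [CLOUD_COVERAGE with mostly clear skies ] ]').split()\n\ndef strip_annotations(tokens):\n    # Keep only terminal words: drop opening non-terminal tokens and closing brackets.\n    return [t for t in tokens if not t.startswith('[') and t != ']']\n\nprint(' '.join(strip_annotations(annotated)))\n# -> it will be sunny with mostly clear skies",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositional Inputs",
"sec_num": "2.1"
},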
{
"text": "Hiring annotators to produce large amounts of clean, parallel data is costly, but it is often possible to automatically obtain lots of unlabeled data",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vanilla Self-Training",
"sec_num": "2.2"
},
{
"text": "U = {x l } |U |",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vanilla Self-Training",
"sec_num": "2.2"
},
{
"text": "l=1 . To take advantage of the large unlabeled data U, we adapt and extend He et al.'s (2020) semi-supervised self-training strategy, which has been successfully applied to MT. As shown in Algorithm 1, vanilla self-training starts from a base model trained with annotated parallel data L, then (i) iteratively applies the current model to pseudolabel the unlabeled data with its predictions, (ii) trains a new model on the pseudo-labeled data, and (iii) fine-tunes the model on L. Naturally, higherquality pseudo-labeling can be expected to lead to more effective self-training by helping the model to avoid entrenching its own mistakes; below, we consider two strategies for improving generation during the pseudo-labeling step. Train a model on the pseudo-parallel data;",
"cite_spans": [
{
"start": 75,
"end": 93,
"text": "He et al.'s (2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Vanilla Self-Training",
"sec_num": "2.2"
},
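{
"text": "A minimal sketch of Algorithm 1, assuming hypothetical train(), fine_tune() and predict() callables in place of the actual fairseq training and decoding pipeline:\n\ndef self_train(labeled, unlabeled, train, fine_tune, predict, max_iter=3):\n    # Vanilla self-training: start from a supervised base model, then iterate\n    # (i) pseudo-labeling U, (ii) training on the pseudo-parallel data, (iii) fine-tuning on L.\n    model = train(labeled)\n    for _ in range(max_iter):\n        pseudo = [(mr, predict(model, mr)) for mr in unlabeled]  # pseudo-label U\n        model = train(pseudo)  # train a new model on the pseudo-parallel data\n        model = fine_tune(model, labeled)  # fine-tune on L\n    return model\n\nSelf-training with constrained decoding or reverse model reranking (Sections 2.3 and 2.4) only changes how predict() produces the pseudo-labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vanilla Self-Training",
"sec_num": "2.2"
},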
{
"text": "Balakrishnan et al. 2019demonstrate that constrained decoding can enhance the correctness of text generated with seq2seq models. In our experiments, we make use of an enhanced version of their constrained decoding method, both in the pseudo-labeling step of self-training as well as during runtime prediction. Balakrishnan et al.'s (2019) constrained decoding method begins by scanning the input MR tree to build constraints on coverage and ellipsis. 1 During decoding, the non-terminals in the incrementally generated candidates are checked against the input tree for validity, where an output tree (ignoring terminals) is considered valid if it is isomorphic to the input tree up to sibling order and elided arguments. After each time step of the beam search, invalid candidates are filtered out to prevent hallucinations of tree structure, and closing brackets can only be generated when the non-terminals in the current subtree have all been covered. For example, in decoding a response for the MR in Table 1 , if the prediction has followed the annotated response up until and it'll be, then a closing bracket cannot be generated at this point because the second IN-FORM is not complete, and CLOUD COVERAGE is the only non-terminal that can be validly generated here.",
"cite_spans": [
{
"start": 310,
"end": 338,
"text": "Balakrishnan et al.'s (2019)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1005,
"end": 1012,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Constrained Decoding",
"sec_num": "2.3"
},
{
"text": "A problem with this post-filtering method of constrained decoding is that it can end up filtering out all candidates in the beam search, making it impossible for the decoding to proceed forward. To avoid this issue, we instead make use of a pre-filtering constraint. Specifically, rather than checking the non-terminals in y i after generating the next token in each time step i, our pre-filtering method instead determines all non-terminals that can appear as valid next tokens with y <i , then masks out all invalid non-terminals from the vocabulary before the next decoding step (the closing bracket is treated similarly). This ensures that all candidates in the beam are valid.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained Decoding",
"sec_num": "2.3"
},
{
"text": "Another problem with Balakrishnan et al. 2019's constrained decoding is that it only constrains the generation of non-terminals. The generated terminals may be inconsistent with their parent argument non-terminals, even when placeholder terminals are used for delexicalized arguments. For example, a placeholder for city name should only be valid to generate inside an [ARG CITY ] argument instead of [ARG DAY ]. This kind of error is not common when the training data is sufficient, but it can severely harm the generation quality in data sparse situations. Therefore, in our enhanced constrained decoding, we constrain the generation of arguments by only nominating correspondingly valid placeholder terminals given a particular parent argument non-terminal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained Decoding",
"sec_num": "2.3"
},
{
"text": "While constrained decoding ensures the correctness of the partial tree structure and helps avoid inappropriate argument realizations, it does not constrain most terminals (i.e., the words themselves). As such, when the model ends up in a poorly trained part of its distribution, it can still hallucinate terminals; in particular, it can end up stuttering words until the maximum output length is reached, yielding an invalid tree structure. In these failure cases, we replace the output with the result of vanilla decoding, whose text is usually much better.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained Decoding",
"sec_num": "2.3"
},
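{
"text": "The sketch below illustrates one decoding step of the pre-filtering constraint in simplified form (our illustration, not the released TreeNLG code; the vocab mapping, word_ids set and placeholder_of table are assumed bookkeeping, and opening non-terminal tokens are assumed to look like '[' plus the label): invalid non-terminals, premature closing brackets and mismatched placeholders are masked out before the next beam step.\n\nimport torch\n\ndef prefilter_logits(logits, vocab, open_label, required_children, covered_children,\n                     word_ids, placeholder_of):\n    # logits: scores over the vocabulary for the next token (1-D tensor, len == len(vocab))\n    allowed = set(word_ids)  # ordinary words (terminals) are left unconstrained\n    # Only still-uncovered child non-terminals of the open input subtree may be opened.\n    allowed |= {vocab['[' + c] for c in required_children - covered_children}\n    # The closing bracket becomes valid only once every required child is covered.\n    if required_children <= covered_children:\n        allowed.add(vocab[']'])\n    # A delexicalized placeholder is only valid under its matching ARG non-terminal.\n    if open_label in placeholder_of:\n        allowed.add(vocab[placeholder_of[open_label]])\n    masked = logits.clone()\n    for idx in range(len(vocab)):\n        if idx not in allowed:\n            masked[idx] = float('-inf')  # masked tokens cannot survive this beam step\n    return masked",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained Decoding",
"sec_num": "2.3"
},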
{
"text": "As an alternative to constrained decoding's hard constraints on non-terminals, we also investigate a soft approach to favoring generated texts that correctly express the input MRs (Shen et al., 2019; Yee et al., 2019) . To score the correctness of a generated text (with non-terminals removed), we train a reverse (i.e., parsing) model to generate a mean-ing representation x from a natural language text y. Then, following beam search, the n-best generated texts are reranked with the forced decoding perplexity of the reverse model. When using reverse model reranking in self-training, the reverse model is also self-trained as shown in Algorithm 2. Train forward and reverse models on the pseudo-parallel data;",
"cite_spans": [
{
"start": 180,
"end": 199,
"text": "(Shen et al., 2019;",
"ref_id": "BIBREF35"
},
{
"start": 200,
"end": 217,
"text": "Yee et al., 2019)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reverse Model Reranking",
"sec_num": "2.4"
},
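{
"text": "A minimal sketch of the reranking step, assuming a hypothetical reverse_nll(mr, text) scorer that returns the reverse (parsing) model's forced-decoding negative log-likelihood of the MR given the delexicalized text:\n\ndef rerank_nbest(nbest_texts, mr, reverse_nll):\n    # Prefer the forward model's candidate that the reverse model can most easily\n    # parse back into the input MR (lowest forced-decoding NLL, i.e. lowest perplexity).\n    return min(nbest_texts, key=lambda text: reverse_nll(mr, text))\n\nDuring self-training (Algorithm 2), the same selection is applied when pseudo-labeling, with both the forward and reverse models retrained at each iteration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reverse Model Reranking",
"sec_num": "2.4"
},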
{
"text": "3 Experiments",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reverse Model Reranking",
"sec_num": "2.4"
},
{
"text": "We conduct experiments on the publicly available conversational weather and enriched E2E datasets from Balakrishnan et al. 2019, focusing on the more challenging weather dataset. The weather task consists of 25k parallel items for training, and 3k for both validation and test. In the weather task, there are 1.6k unique tokens in the MRs, and 1.3k in the annotated responses. The enriched E2E dataset contains Balakrishnan et al.'s (2019) automatic enhancements to the E2E texts and MRs to include CONTRAST and JUSTIFI-CATION relations as well as slot-level annotations. The E2E task consists of 42k items for training, and 4.6k for both validation and test. In the E2E task, there are 60 unique tokens in the MRs, and 2.9k in the annotated responses. All the results are reported on the test set in the following experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup Datasets",
"sec_num": "3.1"
},
{
"text": "Unlabeled MR Creation For many NLG applications, unlabeled MRs can be generated in nearly unlimited quantities with a simulator, but unfortunately, we do not have access to the MR simulators for these two datasets. Our workaround is to create unlabeled MRs by modifying the MRs we have in the parallel data. Because there are contextual dependencies in the MRs, it would be hard to get realistic MRs just by sampling elements. Therefore, we instead delete all possible combinations of removable subtrees from the MRs in order to keep the pruned MRs meaningful. The removable subtrees are defined as an unprotected DG INFORM or ARG that has at least one unprotected sibling, where protected elements are those that are manually identified as establishing context (e.g., ARG LOCATION) or are children of CON-TRAST and JUSTIFICATION relations, which have coherence-related contextual dependencies. In this way, we created 137k unlabeled MRs for weather and 143k MRs for E2E. When training a new model on pseudo-labeled data, we split 3k from each of them as validation data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup Datasets",
"sec_num": "3.1"
},
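{
"text": "The sketch below (our toy illustration, with subtrees reduced to bare labels) enumerates the pruned variants described above: protected subtrees are always kept, and at least one unprotected sibling always survives.\n\nfrom itertools import combinations\n\ndef pruned_variants(subtrees, protected):\n    # subtrees: top-level subtrees of an MR (toy representation as labels)\n    # protected: labels that establish context and must never be removed\n    removable = [s for s in subtrees if s not in protected]\n    kept = [s for s in subtrees if s in protected]\n    # Delete every combination of removable subtrees, but always keep at least\n    # one unprotected sibling so the pruned MR stays meaningful.\n    for k in range(1, len(removable)):\n        for removed in combinations(removable, k):\n            yield kept + [s for s in removable if s not in removed]\n\n# Toy usage: ARG_LOCATION is protected; the two INFORMs are removable siblings.\nfor mr in pruned_variants(['ARG_LOCATION', 'INFORM_temp', 'INFORM_cloud'], {'ARG_LOCATION'}):\n    print(mr)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup Datasets",
"sec_num": "3.1"
},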
{
"text": "We report results for the following four kinds of models, where *-n means the method only uses n% of the parallel data from the full training set (three iterations of self-training were used, unless otherwise specified):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": null
},
{
"text": "\u2022 LBL-n: A seq2seq model (LSTM with attention or BART), which is also the base model for the other methods",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": null
},
{
"text": "\u2022 ST-VAN-n: A model trained with vanilla selftraining",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": null
},
{
"text": "\u2022 ST-CD-n: A model self-trained with constrained decoding for pseudo-labeling",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": null
},
{
"text": "\u2022 ST-RMR-n: A model self-trained with reverse model reranking for pseudo-labeling",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": null
},
{
"text": "Metrics We report the automatic metrics listed below on the raw model predictions, which have delexicalized fields (e.g., ARG CITY). Nonterminal annotations are stripped when calculating BLEU-4 and auto-tree accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": null
},
{
"text": "\u2022 BLEU-4 (Papineni et al., 2002) : The BLEU evaluation from e2e-metrics (Du\u0161ek et al., 2018) .",
"cite_spans": [
{
"start": 9,
"end": 32,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF29"
},
{
"start": 72,
"end": 92,
"text": "(Du\u0161ek et al., 2018)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": null
},
{
"text": "\u2022 Tree accuracy : The ratio of annotated responses that pass the validity constraints specified by the input MR. Note that if constrained decoding terminates successfully, it is guaranteed to pass the tree accuracy check, but vanilla decoding comes with no such guarantee.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": null
},
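{
"text": "A simplified sketch of the tree accuracy check (ours, not the released evaluation script): trees are compared as a label plus an order-insensitive collection of child subtrees, so sibling order is ignored; the handling of elided arguments is omitted.\n\ndef canonical(tree):\n    # tree = (label, [child_trees]); terminal words are assumed to have been stripped.\n    label, children = tree\n    return (label, tuple(sorted(canonical(c) for c in children)))\n\ndef tree_accurate(predicted, input_mr):\n    # Valid iff the predicted non-terminal tree is isomorphic to the input MR\n    # up to sibling order (elided-argument handling omitted in this sketch).\n    return canonical(predicted) == canonical(input_mr)\n\n# Same structure, different sibling order -> still counts as correct.\nmr = ('INFORM', [('TEMP', []), ('CLOUD_COVERAGE', [])])\npred = ('INFORM', [('CLOUD_COVERAGE', []), ('TEMP', [])])\nprint(tree_accurate(pred, mr))  # True",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": null
},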
{
"text": "\u2022 Auto-tree accuracy: Tree accuracy after using a reverse model (trained on all the paired data) to parse the text. Note that parse errors make auto-tree accuracy less accurate than tree accuracy, but this method can be used with plain text output. Implementation Our implementation 2 of selftraining, constrained decoding and reverse model reranking is based on the same one-layer LSTM with attention approach as in , with the same configuration of hyperparameters. The experiments with pretrained models implement all above mentioned methods with BART (Lewis et al., 2020) .",
"cite_spans": [
{
"start": 554,
"end": 574,
"text": "(Lewis et al., 2020)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": null
},
{
"text": "We use the open source fairseq implementation (Ott et al., 2019) . More specific configuration details of these two models are listed in Appendix A. Figures 1 and 2 show the comparisons among the four training strategies on tree accuracy and BLEU score as a function of the amount of parallel data available. We can clearly see that ST-CD always surpasses the other three self-training methods. Meanwhile, the ST-CD lines are much flatter, indicating better data-efficiency, especially for tree accuracy with less parallel data. In particular, ST-CD achieves a considerable tree accuracy of 90% and 97% with LSTM and BART respectively, using only 1% of the parallel data (253 items). Using 100% of the data, ST-CD sets a new state-of-the-art in tree accuracy and BLEU, exceeding Rao et al.'s (2019) more complex tree-to-sequence method. 3 Notably, with LSTM vanilla decoding, ST-CD needs only 20% of the parallel data to achieve com-2 Code is available at https://github.com/znculee/TreeNLG and https://github.com/znculee/TreeNLG-BART. See appendix for further details to enhance reproducibility.",
"cite_spans": [
{
"start": 46,
"end": 64,
"text": "(Ott et al., 2019)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 149,
"end": 164,
"text": "Figures 1 and 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Models",
"sec_num": null
},
{
"text": "3 Results of using constrained decoding at runtime are shown in Figure 5 and Figure 6 in the appendix. parable performance to LBL trained on all the parallel data. 4 More remarkably, BART vanilla decoding ST-CD needs only 2% of the parallel data to achieve essentially comparable performance to LBL trained on all the parallel data. 5 At this data efficiency level, tree accuracy exceeds 97% using just over 500 training samples, while Arun et al.'s (2020) results on the same dataset are under 90% despite using four times as much data. This is a key result since vanilla decoding is so much faster than constrained decoding, and latency is an important consideration for dialogue systems. For example, in our experiments using a single NVIDIA V100, the speed of LSTM vanilla decoding was 925.01 responses/s, or 37,973.22 tokens/s, while the speed of constrained decoding was 12.76 responses/s, or 532.61 tokens/s. This translates to an average of 80ms per response for constrained decoding, which is a barrier to production for systems with a strict latency budget. For BART, the speed of vanilla decoding was 25.17 responses/s, or 1565.75 tokens/s, while the speed of constrained decoding was 1.82 responses/s, or 113.92 tokens/s. As such, BART with vanilla decoding could be suitable in some production settings; alternatively, one could pursue knowledge distillation techniques as in Arun et al. (2020) .",
"cite_spans": [
{
"start": 1389,
"end": 1407,
"text": "Arun et al. (2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 64,
"end": 72,
"text": "Figure 5",
"ref_id": null
},
{
"start": 77,
"end": 85,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data Efficiency Study",
"sec_num": "3.2"
},
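{
"text": "The per-response latencies quoted above follow directly from the measured throughput; a quick arithmetic check:\n\n# Per-response latency is simply the reciprocal of the measured responses/s.\nfor name, responses_per_sec in [('LSTM vanilla', 925.01), ('LSTM constrained', 12.76),\n                                ('BART vanilla', 25.17), ('BART constrained', 1.82)]:\n    print(f'{name}: {1000.0 / responses_per_sec:.1f} ms/response')\n# LSTM constrained decoding works out to roughly 78 ms/response, i.e. the ~80ms quoted above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Efficiency Study",
"sec_num": "3.2"
},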
{
"text": "Although not as effective as ST-CD, ST-RMR also consistently surpasses ST-VAN and LBL. Moreover, it can also be used in more conventional settings where the response text in the training data has no semantic annotations, and thus decoding is into plain text. As shown in Figure 3 (appendix), using auto-tree accuracy, ST-RMR can improve data efficiency when constrained decoding cannot be used. Note, however, that decoding into plain text consistently trails in auto-tree accuracy compared to decoding into annotated text.",
"cite_spans": [],
"ref_spans": [
{
"start": 271,
"end": 279,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data Efficiency Study",
"sec_num": "3.2"
},
{
"text": "Theoretically, self-training should be more helpful when the base model can produce higher quality pseudo-labeled data. As shown in Figures 1 and 2 , tree accuracy on pseudo-labeled samples generated by ST-CD is much higher than other self-training methods, which illustrates why it yields much better tree accuracy and BLEU scores on the test set. Also note that the pseudo-labeled tree accuracy is much lower than the test tree accuracy for ST-VAN and ST-RMR. This may be because the unlabeled MRs are created by deletion and thus are somewhat atypical in comparison to the train and test sets.",
"cite_spans": [],
"ref_spans": [
{
"start": 132,
"end": 147,
"text": "Figures 1 and 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "How Does Self-Training Help?",
"sec_num": "3.3"
},
{
"text": "Although the gains in tree accuracy are large with vanilla decoding, to confirm that the gains in Figure 1 and 2 are significant, we have run McNemar's test (McNemar, 1947) comparing ST-CD against LBL as well as ST-VAN. Even when using LSTMs with 100% of the labeled data, the gain in tree accuracy from 94.2% with LBL to 96.6% with ST-CD is highly significant (p=4.30e-15), as is the gain from 95.7% with ST-VAN to 96.6% with ST-CD (p=0.0003). For BART, when using 100% of the labeled data, the gain in tree accuracy from 98.01%",
"cite_spans": [
{
"start": 142,
"end": 172,
"text": "McNemar's test (McNemar, 1947)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 98,
"end": 106,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Significance Tests",
"sec_num": "3.4"
},
{
"text": "with LBL to 99.26% with ST-CD is highly significant (p=1.52e-7), as is the gain from 98.53% with ST-VAN to 99.26% with ST-CD (p=2.94e-4). Naturally, the gains when using less labeled data are also highly significant. Most interestingly, using only 2% of the labeled data with BART ST-CD is not significantly different than using 100% of the labeled data with BART LBL (p=0.68285).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Significance Tests",
"sec_num": "3.4"
},
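{
"text": "For reference, the paired comparison can be run with statsmodels' implementation of McNemar's test; the counts in this sketch are placeholders, not the actual per-item outcomes.\n\nfrom statsmodels.stats.contingency_tables import mcnemar\n\n# 2x2 table of per-item tree-accuracy outcomes for two systems on the same test set:\n# rows = system A correct/incorrect, columns = system B correct/incorrect (counts invented).\ntable = [[2800, 20],\n         [80, 100]]\nresult = mcnemar(table, exact=False, correction=True)\nprint(f'McNemar chi2 = {result.statistic:.2f}, p = {result.pvalue:.3g}')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Significance Tests",
"sec_num": "3.4"
},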
{
"text": "In their experiments, found that tree accuracy differences reliably indicated differences in human evaluations of correctness, and in particular that tree accuracy failures nearly always indicated actual correctness errors. To verify these findings in our own targeted expert evaluation, we had two authors (both linguists) judge the correctness of the LSTM and BART models self-trained with constrained decoding using partial parallel data against the supervised baseline using the same partial parallel data and the best supervised model using all the parallel data, where the judges were blind to which model was which. Correctness was judged against the reference text for 50 randomly selected pairs in each condition where the items differed in tree accuracy. For each pair, the judges indicated whether the first item was better than, the same as or worse than the second item. 3-way agreement was 79% for correctness between the judges; moreover, when excluding any 'same' judgments, the judges agreed in all but one case. After the judgments were collected, we calculated how well they agreed with tree accuracy, excluding the indeterminate 'same' judgments. Agreement was quite high, reaching 90% for one judge and 88% for the other. (Further details are in Appendix B.) Given this high level of agreement with the automatic tree accuracy measure along with the highly significant differences in tree accuracy, we focused our human evaluation on investigating whether the observed differences in BLEU scores indicated important differences in grammaticality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expert Evaluation of Correctness",
"sec_num": "3.5"
},
{
"text": "While the BART ST-CD-02 and LSTM ST-CD-20 models achieved comparable or better levels of tree accuracy in comparison to their LBL-100 (fulldata) counterparts, they trailed somewhat in BLEU scores. Looking at the outputs of the self-trained models with the worst BLEU scores, we found that the responses were mostly good, only suffering from clear grammaticality issues infrequently. To confirm these observations, we conducted a human evaluation using the responses generated by the BART ST-CD-02 and LSTM ST-CD-20 models on 333 randomly selected test items, along with the responses for the same items for the best and worst supervised models by BLEU score, namely BART LBL-100 and LSTM LBL-01. Mixed in with the responses of each model were 75 check items, 25 of which were grammatical and 50 of which we intentionally made ungrammatical. Using these samples, we ran an experiment on Amazon Mechanical Turk involving 16 unique participants. The participants in the experiment were pre-filtered by selecting those with an approval rate of at least 95%. Each participant was shown our grammaticality guidelines, which were based on Arun et al.'s (2020) and available for review at all times during the experiment. They were subsequently asked to take a quiz. Those who scored 80% or more on the quiz were selected for further participation. To encourage careful engagement with the task, we offered bonus payments to those who performed well on the check items. The experiments were carried out with Institutional Review Board approval, and all participants were paid above minimum wage for our locale.",
"cite_spans": [
{
"start": 1132,
"end": 1152,
"text": "Arun et al.'s (2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Human Evaluation of Grammaticality",
"sec_num": "3.6"
},
{
"text": "Agreement with the check items was quite robust, with all participants well above chance, though there were some outliers with respect to check item agreement. This indicates that the judgments were somewhat noisy. Each item received 3 judgments, and the items were assigned the majority judgment for analysis purposes. Judgments of ungrammaticality were accompanied by brief reasons; discrepancies between judgments primarily reflected difficulty in applying the guidelines regarding punctuation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Evaluation of Grammaticality",
"sec_num": "3.6"
},
{
"text": "Our results indicate that 4.8% of the BART ST-CD-02 items were judged ungrammatical, not far from the error rates of 3.9% for LSTM ST-CD-20 and 3.0% for BART LBL-100. By contrast, 11.4% of the LSTM LBL-01 items were judged ungrammatical. Pairwise comparisons using Mc-Nemar's test only revealed statistically significant differences for the LSTM LBL-01 model: it was judged significantly worse than the 3 other models (p < 0.003 in all cases), while none of the other systems were significantly different (p > 0.3 in all cases).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Evaluation of Grammaticality",
"sec_num": "3.6"
},
{
"text": "The most frequent grammaticality issue, especially for LSTM ST-CD-20, was missing punctuation between independent clauses, as shown in (a) in Table 2. Other errors included occasional agreement errors or missing constituents, as in (b). Example correctness errors appear in Table 3 ; they usually involved missing information, but sometimes also repeated or extra information.",
"cite_spans": [],
"ref_spans": [
{
"start": 274,
"end": 281,
"text": "Table 3",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "3.7"
},
{
"text": "We also evaluate our strategies on the enhanced E2E dataset. As shown in Figure 4 in Appendix, we can draw the same general conclusions regarding data efficiency as with the conversational weather dataset. 6 Both constrained decoding and reverse model reranking improve upon vanilla self-training, with constrained decoding being more effective when using less parallel data. Notably, for LSTM models, with vanilla decoding at runtime, tree accuracy and BLEU of using self-training with constrained decoding and 20% of the parallel data (ST-CD-20) are essentially identical to the supervised model using all the available data (LBL-100). For BART models, the performance of ST-CD-02 is also very similar to the one of LBL-100: While the BLEU score of ST-CD-02 is slightly lower than that of LBL-100, it is still very high, and the tree accuracy of ST-CD-02 is slightly higher than the tree accuracy of LBL-100. Likewise, our general approach to self-training (He et al., 2020 ) is much simpler than in Chang et al.'s (2021) work, where they generate new text samples using GPT-2 (unconditioned on any input) then pair them with data samples. Earlier, Chisholm et al. (2017) train NLG and NLU models that share parameters to reduce the risk of hallucination. Our iterative method of training forward and reverse seq2seq models instead draws from Yee et al.'s (2019) reverse model reranking approach and is much simpler to implement. Additionally, Nie et al. (2019) apply self-training to a NLU model to reduce the noise in the original MR-text pairs in order to reduce the hallucination problem in NLG, but they do not investigate data efficiency issues. Also related is work on back-translation (Sennrich et al., 2016) in MT, which starts from the assumption that there is much target side data; by contrast, selftraining assumes there is much source side data, as is the case with our task (where new unlabeled MRs can be easily created).",
"cite_spans": [
{
"start": 206,
"end": 207,
"text": "6",
"ref_id": null
},
{
"start": 959,
"end": 975,
"text": "(He et al., 2020",
"ref_id": "BIBREF18"
},
{
"start": 1002,
"end": 1023,
"text": "Chang et al.'s (2021)",
"ref_id": null
},
{
"start": 1151,
"end": 1173,
"text": "Chisholm et al. (2017)",
"ref_id": "BIBREF8"
},
{
"start": 1345,
"end": 1364,
"text": "Yee et al.'s (2019)",
"ref_id": null
},
{
"start": 1695,
"end": 1718,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [
{
"start": 73,
"end": 81,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "E2E Confirmation",
"sec_num": "3.8"
},
{
"text": "More recent work takes advantage of pre-trained language models to develop few-shot NLG methods. Chen et al. (2019) show impressive results with just 200 training items using a specialized table encoder with GPT-2, while Peng et al. (2020) use cross-domain training (an orthogonal approach) together with GPT-2; neither investigates more challenging compositional inputs. Although Arun et al. (2020) do use BART on compositional inputs, their tree accuracy levels are much lower even when using considerably more data.",
"cite_spans": [
{
"start": 97,
"end": 115,
"text": "Chen et al. (2019)",
"ref_id": "BIBREF7"
},
{
"start": 221,
"end": 239,
"text": "Peng et al. (2020)",
"ref_id": "BIBREF30"
},
{
"start": 381,
"end": 399,
"text": "Arun et al. (2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "E2E Confirmation",
"sec_num": "3.8"
},
{
"text": "More generally, reverse (or reconstructor) models have taken on greater theoretical interest thanks to Rational Speech Act (RSA) theory (Frank et al., 2009) and have recently proved useful in NLG (Fried et al., 2018; Shen et al., 2019) . Our approach differs in using reverse models during selftraining rather than at runtime. Work on combining parsing and generation for ambiguity avoidance goes back much farther (Neumann and van Noord, 1992) , with managing the trade-off between fluency and ambiguity avoidance a more recent topic (Duan and White, 2014) that we also leave for future work. Constrained decoding (Balakrishnan et al., 2019) is inspired by coverage tracking in grammar-based approaches to realization (Kay, 1996; Carroll and Oepen, 2005; White, 2006) ; its use during self-training is novel to this work.",
"cite_spans": [
{
"start": 136,
"end": 156,
"text": "(Frank et al., 2009)",
"ref_id": "BIBREF16"
},
{
"start": 196,
"end": 216,
"text": "(Fried et al., 2018;",
"ref_id": "BIBREF17"
},
{
"start": 217,
"end": 235,
"text": "Shen et al., 2019)",
"ref_id": "BIBREF35"
},
{
"start": 415,
"end": 444,
"text": "(Neumann and van Noord, 1992)",
"ref_id": "BIBREF26"
},
{
"start": 535,
"end": 557,
"text": "(Duan and White, 2014)",
"ref_id": "BIBREF10"
},
{
"start": 719,
"end": 730,
"text": "(Kay, 1996;",
"ref_id": "BIBREF19"
},
{
"start": 731,
"end": 755,
"text": "Carroll and Oepen, 2005;",
"ref_id": "BIBREF4"
},
{
"start": 756,
"end": 768,
"text": "White, 2006)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "E2E Confirmation",
"sec_num": "3.8"
},
{
"text": "In this paper, we have shown that using selftraining with constrained decoding in compositional neural NLG can deliver large gains in data efficiency, enabling seq2seq models to achieve satisfactory quality using vanilla decoding with much less annotated data. The idea of using constrained decoding with self-training rather than for runtime inference is a very simple one, but ours is the first paper to investigate the idea, and we show via thorough experimentation and evaluation that it works remarkably well. In our experiments, we found that LSTM models trained from scratch can increase data efficiency by a factor of at least 5, while pretrained BART models yielded a 50 times increase, achieving essentially comparable levels of correctness and grammaticality using only 2% of the existing training data. As such, the approach promises to help pave the way towards developing systems with mere hundreds rather than tens of thousands of annotated samples, potentially eliminating the need for crowdsourcing in system development.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "In future work, it would be exploring ways of at least partially automatically adding semantic annotations to the target texts using methods that treat such annotations as latent Xu et al., 2021) to facilitate using our approach on a new task or dataset.",
"cite_spans": [
{
"start": 179,
"end": 195,
"text": "Xu et al., 2021)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "be released upon acceptance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "The dependencies are specified in requirements.txt. Code usage instructions are in README.md and self-training/README.md. Table 4 shows the detailed breakdown of agreement between the expert judges and tree accuracy. We can observe that agreement with tree accuracy is higher with LSTM models than with BART, and higher where there is a significant difference in tree accuracy than in the one case where there was no significant difference (BART ST-CD-02 vs. BART LBL-100). For this comparison, there were relatively few discrepancies in tree accuracy to sample from, and the items in question likely represent somewhat unusual cases. In examining the handful of cases where the judges agreed but did not agree with tree accuracy, about half were real errors where BART's words did not match the nonterminals (influenced by its pre-trained knowledge), while the other half had (presumably rare) errors in the input or reference. It is not surprising that tree accuracy would be somewhat less reliable with BART, as it relies on its pre-trained knowledge as well as the input in making generation choices. For example, in one case the BART ST-CD-02 model output, \"It's not expected to be warm tomorrow morning in your area. The temperature will drop to ARG TEMP tomorrow.\" Here, it seems that BART inferred that if it won't be warm tomorrow, that may well be because the temperature is dropping. However, \"will drop\" is not part of the input and may or may not be factual. Since these words appear outside of the non-terminal signaling the low temperature in the output, they are not checked by tree accuracy, and thus this error is missed.",
"cite_spans": [],
"ref_spans": [
{
"start": 122,
"end": 129,
"text": "Table 4",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "Arguments appearing multiple times in the input MR are only required to appear once in the output.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "With 20% of the parallel data, ST-CD exceeds LBL in tree accuracy while trailing it slightly in BLEU.5 Confirmed in significance tests on tree accuracy and human evaluation later in this section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that the BLEU scores here are calculated in the same generous way as inBalakrishnan et al.'s (2019) evaluation. In particular, since multiple test MRs in the enhanced data have the same original MR, we select the best generation of the same original MR using NLTK's(Bird et al., 2009) implementation of sentence BLEU on multi-references.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
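{
"text": "A sketch of the selection step described in this footnote, using NLTK's sentence-level BLEU; grouping the generations by original MR and the smoothing choice are our own assumptions.\n\nfrom nltk.translate.bleu_score import sentence_bleu, SmoothingFunction\n\ndef best_generation(candidates, references):\n    # Among the generations sharing an original MR, pick the one with the highest\n    # multi-reference sentence BLEU (whitespace tokenization, method1 smoothing).\n    smooth = SmoothingFunction().method1\n    refs = [r.split() for r in references]\n    return max(candidates, key=lambda c: sentence_bleu(refs, c.split(), smoothing_function=smooth))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},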
{
"text": "https://github.com/pytorch/fairseq/tree/master/examples/bart",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank that the Ohio Super Computer Center (Center, 1987) supports us sufficient computational devices for training many large models in our experiments. This research was supported by a collaborative open science research agreement between Facebook and The Ohio State University. The last author is a paid consultant for Facebook.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "For LSTM models, the word embedding and hidden size dimensions are 300 and 128 respectively, and the decoder output embedding size is 512. The dropout rate for both encoder and decoder is 0.2. There are no more than 128 sentences in a batch. Training uses early stopping when the validation loss has not improved for the last 20 epochs. The learning rate is 0.001, and the scheduler is ReduceL-ROnPlateau whose factor is 0.1 and patience is 3. The maximum output length is 2 times source length plus 50, and the beam size is 5. The loss function is optimized with Adam (Kingma and Ba, 2014), where \u03b2 1 = 0.9, \u03b2 2 = 0.999 and = 10 \u22128 .For BART models, we use the BART-Large model available in the fairseq, which 12 encoder and decoder layers. 7 The dropout rate for both encoder and decoder is 0.1. There are no more than 2048 tokens in a batch. Training uses early stopping when the validation loss has not improved for the last 20 epochs. The learning rate is 0.00003, and the scheduler is polynomial decay with 1000 warm updates. The maximum output length is 1024. The loss function is optimized with Adam (Kingma and Ba, 2014), where \u03b2 1 = 0.9, \u03b2 2 = 0.999 and = 10 \u22128 . For every experiment, the computing infrastructure we used is an NVIDIA V100 GPU and an Intel(R) Xeon(R) Platinum 8268 CPU @ 2.90GHz CPU. The numbers of trainable parameters of LSTM models for weather and E2E datasets are 2,212,928 and 3,079,256 respectively. Training a LSTM model on the full weather dataset takes around 0.5k seconds for 38 epochs. Training a LSTM model on the pseudo-labeled weather dataset takes around 3.4k seconds for 57 epochs. Training and validation loss at convergence is around 1.8. The speed of vanilla decoding was 37,973 tokens/s, and the speed of constrained decoding was 532.61 tokens/s. The numbers of trainable parameters of BART models for weather and E2E datasets are both 406,290,432. Training a BART model on the full weather dataset takes around 10k seconds for 21 epochs. Training a BART model on the pseudo-labeled weather dataset takes around 42k seconds for 20 epochs. Training and validation loss at convergence is around 2.1. Figure 5 : Tree accuracy and BLEU scores of LSTM and two self-training strategies by parallel training data size with constrained decoding at runtime on the conversational weather dataset and the enhanced E2E dataset. The self-training results of the enhanced E2E dataset are measured on the first iteration. Figure 6 : Tree accuracy and BLEU scores of BART and two self-training strategies by parallel training data size with constrained decoding at runtime on the conversational weather dataset and the enhanced E2E dataset. The self-training results of the enhanced E2E dataset are measured on the first iteration.",
"cite_spans": [
{
"start": 742,
"end": 743,
"text": "7",
"ref_id": null
}
],
"ref_spans": [
{
"start": 2146,
"end": 2154,
"text": "Figure 5",
"ref_id": null
},
{
"start": 2455,
"end": 2463,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Reproducibility Details",
"sec_num": null
}
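,
{
"text": "For concreteness, the stated LSTM optimizer and scheduler settings correspond to the following standard PyTorch calls (a sketch only; the nn.LSTM module stands in for the actual seq2seq model):\n\nimport torch\n\nmodel = torch.nn.LSTM(input_size=300, hidden_size=128)  # stand-in for the seq2seq model\noptimizer = torch.optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.999), eps=1e-8)\nscheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.1, patience=3)\n# After each epoch: scheduler.step(validation_loss); stop early if no improvement for 20 epochs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Reproducibility Details",
"sec_num": null
}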
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Best practices for data-efficient modeling in NLG:how to train production-ready neural models with less data",
"authors": [
{
"first": "Ankit",
"middle": [],
"last": "Arun",
"suffix": ""
},
{
"first": "Soumya",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Vikas",
"middle": [],
"last": "Bhardwaj",
"suffix": ""
},
{
"first": "Ashwini",
"middle": [],
"last": "Challa",
"suffix": ""
},
{
"first": "Pinar",
"middle": [],
"last": "Donmez",
"suffix": ""
},
{
"first": "Peyman",
"middle": [],
"last": "Heidari",
"suffix": ""
},
{
"first": "Hakan",
"middle": [],
"last": "Inan",
"suffix": ""
},
{
"first": "Shashank",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Anuj",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Shawn",
"middle": [],
"last": "Mei",
"suffix": ""
},
{
"first": "Karthik",
"middle": [],
"last": "Mohan",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "White",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics: Industry Track",
"volume": "",
"issue": "",
"pages": "64--77",
"other_ids": {
"DOI": [
"10.18653/v1/2020.coling-industry.7"
]
},
"num": null,
"urls": [],
"raw_text": "Ankit Arun, Soumya Batra, Vikas Bhardwaj, Ashwini Challa, Pinar Donmez, Peyman Heidari, Hakan Inan, Shashank Jain, Anuj Kumar, Shawn Mei, Karthik Mohan, and Michael White. 2020. Best practices for data-efficient modeling in NLG:how to train production-ready neural models with less data. In Proceedings of the 28th International Conference on Computational Linguistics: Industry Track, pages 64-77, Online. International Committee on Compu- tational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Constrained decoding for neural NLG from compositional representations in task-oriented dialogue",
"authors": [
{
"first": "Anusha",
"middle": [],
"last": "Balakrishnan",
"suffix": ""
},
{
"first": "Jinfeng",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "Kartikeya",
"middle": [],
"last": "Upasani",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "White",
"suffix": ""
},
{
"first": "Rajen",
"middle": [],
"last": "Subba",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "831--844",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1080"
]
},
"num": null,
"urls": [],
"raw_text": "Anusha Balakrishnan, Jinfeng Rao, Kartikeya Upasani, Michael White, and Rajen Subba. 2019. Constrained decoding for neural NLG from compositional repre- sentations in task-oriented dialogue. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 831-844, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Natural language processing with Python: analyzing text with the natural language toolkit",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Ewan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Loper",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bird, Ewan Klein, and Edward Loper. 2009. Nat- ural language processing with Python: analyzing text with the natural language toolkit. \" O'Reilly Media, Inc.\".",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Generating and evaluating evaluative arguments",
"authors": [
{
"first": "Giuseppe",
"middle": [],
"last": "Carenini",
"suffix": ""
},
{
"first": "Johanna",
"middle": [
"D"
],
"last": "Moore",
"suffix": ""
}
],
"year": 2006,
"venue": "Artificial Intelligence",
"volume": "170",
"issue": "",
"pages": "925--952",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Giuseppe Carenini and Johanna D. Moore. 2006. Gener- ating and evaluating evaluative arguments. Artificial Intelligence, 170:925-952.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "High efficiency realization for a wide-coverage unification grammar",
"authors": [
{
"first": "John",
"middle": [],
"last": "Carroll",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Oepen",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. IJCNLP-05",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Carroll and Stefan Oepen. 2005. High efficiency realization for a wide-coverage unification grammar. In Proc. IJCNLP-05.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Neural data-to-text generation with LM-based text augmentation",
"authors": [
{
"first": "Ernie",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Xiaoyu",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Dawei",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Vera",
"middle": [],
"last": "Demberg",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Su",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
"volume": "",
"issue": "",
"pages": "758--768",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ernie Chang, Xiaoyu Shen, Dawei Zhu, Vera Demberg, and Hui Su. 2021. Neural data-to-text generation with LM-based text augmentation. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Vol- ume, pages 758-768, Online. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Few-shot nlg with pre-trained language model",
"authors": [
{
"first": "Zhiyu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Harini",
"middle": [],
"last": "Eavani",
"suffix": ""
},
{
"first": "Wenhu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yinyin",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "William",
"middle": [
"Yang"
],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiyu Chen, Harini Eavani, Wenhu Chen, Yinyin Liu, and William Yang Wang. 2019. Few-shot nlg with pre-trained language model.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Learning to generate one-sentence biographies from Wikidata",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Chisholm",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Hachey",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "633--642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Chisholm, Will Radford, and Ben Hachey. 2017. Learning to generate one-sentence biographies from Wikidata. In Proceedings of the 15th Conference of the European Chapter of the Association for Compu- tational Linguistics: Volume 1, Long Papers, pages 633-642, Valencia, Spain. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A strategy for information presentation in spoken dialog systems",
"authors": [
{
"first": "Vera",
"middle": [],
"last": "Demberg",
"suffix": ""
},
{
"first": "Andi",
"middle": [],
"last": "Winterboer",
"suffix": ""
},
{
"first": "Johanna",
"middle": [
"D"
],
"last": "Moore",
"suffix": ""
}
],
"year": 2011,
"venue": "Computational Linguistics",
"volume": "37",
"issue": "3",
"pages": "489--539",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vera Demberg, Andi Winterboer, and Johanna D Moore. 2011. A strategy for information presentation in spoken dialog systems. Computational Linguistics, 37(3):489-539.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "That's not what I meant! Using parsers to avoid structural ambiguities in generated text",
"authors": [
{
"first": "Manjuan",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "White",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.3115/v1/P14-1039"
]
},
"num": null,
"urls": [],
"raw_text": "Manjuan Duan and Michael White. 2014. That's not what I meant! Using parsers to avoid structural ambi- guities in generated text. In Proceedings of the 52nd",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Annual Meeting of the Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "1",
"issue": "",
"pages": "413--423",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 413-423, Baltimore, Maryland. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Sequence-tosequence generation for spoken dialogue via deep syntax trees and strings",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Du\u0161ek",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Jurcicek",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/P16-2008"
]
},
"num": null,
"urls": [],
"raw_text": "Ond\u0159ej Du\u0161ek and Filip Jurcicek. 2016. Sequence-to- sequence generation for spoken dialogue via deep syntax trees and strings. In Proceedings of the 54th",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Annual Meeting of the Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "2",
"issue": "",
"pages": "45--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 45-51. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Findings of the E2E NLG Challenge",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Du\u0161ek",
"suffix": ""
},
{
"first": "Jekaterina",
"middle": [],
"last": "Novikova",
"suffix": ""
},
{
"first": "Verena",
"middle": [],
"last": "Rieser",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. of the 11th International Conference on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "322--328",
"other_ids": {
"DOI": [
"10.18653/v1/W18-6539"
]
},
"num": null,
"urls": [],
"raw_text": "Ond\u0159ej Du\u0161ek, Jekaterina Novikova, and Verena Rieser. 2018. Findings of the E2E NLG Challenge. In Proc. of the 11th International Conference on Natu- ral Language Generation, pages 322-328, Tilburg, The Netherlands. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Evaluating the state-of-the-art of end-to-end natural language generation",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Du\u0161ek",
"suffix": ""
},
{
"first": "Jekaterina",
"middle": [],
"last": "Novikova",
"suffix": ""
},
{
"first": "Verena",
"middle": [],
"last": "Rieser",
"suffix": ""
}
],
"year": 2019,
"venue": "The E2E NLG Challenge",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1901.11528"
]
},
"num": null,
"urls": [],
"raw_text": "Ond\u0159ej Du\u0161ek, Jekaterina Novikova, and Verena Rieser. 2019. Evaluating the state-of-the-art of end-to-end natural language generation: The E2E NLG Chal- lenge. arXiv preprint arXiv:1901.11528.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Informative communication in word production and word learning",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Tenenbaum",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Annual Meeting of the Cognitive Science Society",
"volume": "",
"issue": "",
"pages": "1228--1233",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Frank, Noah Goodman, Peter Lai, and Joshua Tenenbaum. 2009. Informative communication in word production and word learning. In Proceedings of the Annual Meeting of the Cognitive Science Soci- ety, pages 1228-1233.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Unified pragmatic models for generating and following instructions",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Fried",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Andreas",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1951--1963",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1177"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Fried, Jacob Andreas, and Dan Klein. 2018. Uni- fied pragmatic models for generating and following instructions. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long Papers), pages 1951-1963, New Orleans, Louisiana. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Revisiting self-training for neural sequence generation",
"authors": [
{
"first": "Junxian",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Jiajun",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junxian He, Jiatao Gu, Jiajun Shen, and Marc'Aurelio Ranzato. 2020. Revisiting self-training for neural sequence generation. In International Conference on Learning Representations.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Chart generation",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Kay",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "200--204",
"other_ids": {
"DOI": [
"10.3115/981863.981890"
]
},
"num": null,
"urls": [],
"raw_text": "Martin Kay. 1996. Chart generation. In Proceedings of the 34th Annual Meeting of the Association for Com- putational Linguistics, pages 200-204, Santa Cruz, California, USA. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A good sample is hard to find: Noise injection sampling and self-training for neural language generation models",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Kedzie",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 12th International Conference on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "584--593",
"other_ids": {
"DOI": [
"10.18653/v1/W19-8672"
]
},
"num": null,
"urls": [],
"raw_text": "Chris Kedzie and Kathleen McKeown. 2019. A good sample is hard to find: Noise injection sampling and self-training for neural language generation models. In Proceedings of the 12th International Conference on Natural Language Generation, pages 584-593, Tokyo, Japan. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Abdelrahman",
"middle": [],
"last": "Mohamed",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7871--7880",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.703"
]
},
"num": null,
"urls": [],
"raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and com- prehension. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Effective self-training for parsing",
"authors": [
{
"first": "David",
"middle": [],
"last": "Mcclosky",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Human Language Technology Conference of the NAACL, Main Conference",
"volume": "",
"issue": "",
"pages": "152--159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David McClosky, Eugene Charniak, and Mark Johnson. 2006. Effective self-training for parsing. In Proceed- ings of the Human Language Technology Conference of the NAACL, Main Conference, pages 152-159, New York City, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Note on the sampling error of the difference between correlated proportions or percentages",
"authors": [
{
"first": "Quinn",
"middle": [],
"last": "Mcnemar",
"suffix": ""
}
],
"year": 1947,
"venue": "Psychometrika",
"volume": "12",
"issue": "2",
"pages": "153--157",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quinn McNemar. 1947. Note on the sampling error of the difference between correlated proportions or percentages. Psychometrika, 12(2):153-157.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "What to talk about and how? selective generation using lstms with coarse-to-fine alignment",
"authors": [
{
"first": "Hongyuan",
"middle": [],
"last": "Mei",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "R. Matthew",
"middle": [],
"last": "Walter",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "720--730",
"other_ids": {
"DOI": [
"10.18653/v1/N16-1086"
]
},
"num": null,
"urls": [],
"raw_text": "Hongyuan Mei, Mohit Bansal, and R. Matthew Wal- ter. 2016. What to talk about and how? selective generation using lstms with coarse-to-fine alignment. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 720-730. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Selfmonitoring with reversible grammars",
"authors": [
{
"first": "Gunter",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Gertjan",
"middle": [],
"last": "Van Noord",
"suffix": ""
}
],
"year": 1992,
"venue": "The 15th International Conference on Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "700--706",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gunter Neumann and Gertjan van Noord. 1992. Self- monitoring with reversible grammars. In COLING 1992 Volume 2: The 15th International Conference on Computational Linguistics, pages 700-706.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A simple recipe towards reducing hallucination in neural surface realisation",
"authors": [
{
"first": "Feng",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "Jin-Ge",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Jinpeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Rong",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2673--2679",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1256"
]
},
"num": null,
"urls": [],
"raw_text": "Feng Nie, Jin-Ge Yao, Jinpeng Wang, Rong Pan, and Chin-Yew Lin. 2019. A simple recipe towards re- ducing hallucination in neural surface realisation. In Proceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 2673- 2679, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "fairseq: A fast, extensible toolkit for sequence modeling",
"authors": [
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Alexei",
"middle": [],
"last": "Baevski",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of NAACL-HLT 2019: Demonstrations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting on association for computational linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting on association for compu- tational linguistics, pages 311-318. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Few-shot natural language generation for taskoriented dialog",
"authors": [
{
"first": "Baolin",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Chenguang",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Chunyuan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xiujun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jinchao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baolin Peng, Chenguang Zhu, Chunyuan Li, Xiujun Li, Jinchao Li, Michael Zeng, and Jianfeng Gao. 2020. Few-shot natural language generation for task- oriented dialog.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Semi-supervised neural text generation by joint learning of natural language generation and natural language understanding models",
"authors": [
{
"first": "Raheel",
"middle": [],
"last": "Qader",
"suffix": ""
},
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Portet",
"suffix": ""
},
{
"first": "Cyril",
"middle": [],
"last": "Labb\u00e9",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 12th International Conference on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "552--562",
"other_ids": {
"DOI": [
"10.18653/v1/W19-8669"
]
},
"num": null,
"urls": [],
"raw_text": "Raheel Qader, Fran\u00e7ois Portet, and Cyril Labb\u00e9. 2019. Semi-supervised neural text generation by joint learn- ing of natural language generation and natural lan- guage understanding models. In Proceedings of the 12th International Conference on Natural Language Generation, pages 552-562, Tokyo, Japan. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "A tree-to-sequence model for neural NLG in taskoriented dialog",
"authors": [
{
"first": "Jinfeng",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "Kartikeya",
"middle": [],
"last": "Upasani",
"suffix": ""
},
{
"first": "Anusha",
"middle": [],
"last": "Balakrishnan",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "White",
"suffix": ""
},
{
"first": "Anuj",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Rajen",
"middle": [],
"last": "Subba",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 12th International Conference on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "95--100",
"other_ids": {
"DOI": [
"10.18653/v1/W19-8611"
]
},
"num": null,
"urls": [],
"raw_text": "Jinfeng Rao, Kartikeya Upasani, Anusha Balakrishnan, Michael White, Anuj Kumar, and Rajen Subba. 2019. A tree-to-sequence model for neural NLG in task- oriented dialog. In Proceedings of the 12th Interna- tional Conference on Natural Language Generation, pages 95-100, Tokyo, Japan. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Improving neural machine translation models with monolingual data",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1009"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In Proceedings of the 54th",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Annual Meeting of the Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "1",
"issue": "",
"pages": "86--96",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86-96, Berlin, Germany. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Pragmatically informative text generation",
"authors": [
{
"first": "Sheng",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Fried",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Andreas",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4060--4067",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1410"
]
},
"num": null,
"urls": [],
"raw_text": "Sheng Shen, Daniel Fried, Jacob Andreas, and Dan Klein. 2019. Pragmatically informative text gener- ation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4060-4067, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Neural data-to-text generation via jointly learning the segmentation and correspondence",
"authors": [
{
"first": "Xiaoyu",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Ernie",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Cheng",
"middle": [],
"last": "Niu",
"suffix": ""
},
{
"first": "Dietrich",
"middle": [],
"last": "Klakow",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7155--7165",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.641"
]
},
"num": null,
"urls": [],
"raw_text": "Xiaoyu Shen, Ernie Chang, Hui Su, Cheng Niu, and Di- etrich Klakow. 2020. Neural data-to-text generation via jointly learning the segmentation and correspon- dence. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7155-7165, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Individual and domain adaptation in sentence planning for dialogue",
"authors": [
{
"first": "Marilyn",
"middle": [],
"last": "Walker",
"suffix": ""
},
{
"first": "Amanda",
"middle": [],
"last": "Stent",
"suffix": ""
},
{
"first": "Francois",
"middle": [],
"last": "Mairesse",
"suffix": ""
},
{
"first": "Rashmi",
"middle": [],
"last": "Prasad",
"suffix": ""
}
],
"year": 2007,
"venue": "Journal of Artificial Intelligence Research (JAIR)",
"volume": "30",
"issue": "",
"pages": "413--456",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marilyn Walker, Amanda Stent, Francois Mairesse, and Rashmi Prasad. 2007. Individual and domain adap- tation in sentence planning for dialogue. Journal of Artificial Intelligence Research (JAIR), 30:413-456.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Multi-domain neural network language generation for spoken dialogue systems",
"authors": [
{
"first": "Tsung-Hsien",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "Milica",
"middle": [],
"last": "Ga\u0161i\u0107",
"suffix": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Mrk\u0161i\u0107",
"suffix": ""
},
{
"first": "M",
"middle": [
"Lina"
],
"last": "Rojas-Barahona",
"suffix": ""
},
{
"first": "Pei-Hao",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Vandyke",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "120--129",
"other_ids": {
"DOI": [
"10.18653/v1/N16-1015"
]
},
"num": null,
"urls": [],
"raw_text": "Tsung-Hsien Wen, Milica Ga\u0161i\u0107, Nikola Mrk\u0161i\u0107, M. Lina Rojas-Barahona, Pei-Hao Su, David Vandyke, and Steve Young. 2016. Multi-domain neu- ral network language generation for spoken dialogue systems. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 120-129. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Efficient realization of coordinate structures in combinatory categorial grammar. Research on Language and Computation",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "White",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "4",
"issue": "",
"pages": "39--75",
"other_ids": {
"DOI": [
"10.1007/s11168-006-9010-2"
]
},
"num": null,
"urls": [],
"raw_text": "Michael White. 2006. Efficient realization of coordinate structures in combinatory categorial grammar. Re- search on Language and Computation, 4(1):39-75.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Generating tailored, comparative descriptions with contextually appropriate intonation",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "White",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"A",
"J"
],
"last": "Clark",
"suffix": ""
},
{
"first": "Johanna",
"middle": [
"D"
],
"last": "Moore",
"suffix": ""
}
],
"year": 2010,
"venue": "Computational Linguistics",
"volume": "36",
"issue": "2",
"pages": "159--201",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael White, Robert A. J. Clark, and Johanna D. Moore. 2010. Generating tailored, comparative de- scriptions with contextually appropriate intonation. Computational Linguistics, 36(2):159-201.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "AggGen: Ordering and aggregating while generating",
"authors": [
{
"first": "Xinnuo",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Du\u0161ek",
"suffix": ""
},
{
"first": "Verena",
"middle": [],
"last": "Rieser",
"suffix": ""
},
{
"first": "Ioannis",
"middle": [],
"last": "Konstas",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1419--1434",
"other_ids": {
"DOI": [
"10.18653/v1/2021.acl-long.113"
]
},
"num": null,
"urls": [],
"raw_text": "Xinnuo Xu, Ond\u0159ej Du\u0161ek, Verena Rieser, and Ioannis Konstas. 2021. AggGen: Ordering and aggregating while generating. In Proceedings of the 59th Annual Meeting of the Association for Computational Lin- guistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1419-1434, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Simple and effective noisy channel modeling for neural machine translation",
"authors": [
{
"first": "Kyra",
"middle": [],
"last": "Yee",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Dauphin",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "5696--5701",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1571"
]
},
"num": null,
"urls": [],
"raw_text": "Kyra Yee, Yann Dauphin, and Michael Auli. 2019. Simple and effective noisy channel modeling for neural machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5696-5701, Hong Kong, China. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"html": null,
"num": null,
"type_str": "table",
"text": "Example compositional MR and annotated response from",
"content": "<table/>"
},
"TABREF4": {
"html": null,
"num": null,
"type_str": "table",
"text": "Tree accuracy and BLEU scores of the LSTM base model and three self-training strategies by parallel training data size with vanilla decoding on the conversational weather dataset. Tree accuracy on pseudo-labeled data is indicated by the same color dashed lines. Performance of the supervised model (LBL) using all of the labeled data is indicated by the gray dashed lines.",
"content": "<table><tr><td/><td/><td/><td/><td/><td/><td/><td/><td>80</td><td/><td/><td/><td/><td/></tr><tr><td colspan=\"2\">100</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td>75</td><td/><td/><td/><td/><td/></tr><tr><td/><td>80</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>Tree Accuracy</td><td>40 60</td><td/><td/><td/><td/><td>LBL</td><td>BLEU</td><td>65 70</td><td/><td/><td/><td/><td>LBL</td></tr><tr><td/><td/><td/><td/><td/><td/><td colspan=\"2\">ST-VAN</td><td>60</td><td/><td/><td/><td/><td colspan=\"2\">ST-VAN</td></tr><tr><td/><td>20</td><td/><td/><td/><td/><td colspan=\"2\">ST-RMR ST-CD</td><td>55</td><td/><td/><td/><td/><td colspan=\"2\">ST-RMR ST-CD</td></tr><tr><td/><td>0</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td>253</td><td>507</td><td>1269</td><td>2539</td><td>5078</td><td>12695</td><td>25390</td><td>253</td><td>507</td><td>1269</td><td>2539</td><td>5078</td><td>12695</td><td>25390</td></tr><tr><td/><td>%1</td><td>%2</td><td>%5</td><td>%10</td><td>%20</td><td>%50</td><td>%100</td><td>%1</td><td>%2</td><td>%5</td><td>%10</td><td>%20</td><td>%50</td><td>%100</td></tr><tr><td/><td/><td/><td colspan=\"3\">#Training Samples</td><td/><td/><td/><td/><td colspan=\"3\">#Training Samples</td><td/></tr><tr><td>Figure 1:</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>"
},
"TABREF7": {
"html": null,
"num": null,
"type_str": "table",
"text": "Examples of grammaticality errors",
"content": "<table><tr><td colspan=\"2\">Index System</td><td>Error</td><td>Reference</td></tr><tr><td>(a)</td><td>LSTM LBL-20</td><td>Yes , it will be mostly sunny today in</td><td>Yes , it will be mostly sunny today and</td></tr><tr><td/><td/><td>your area</td><td>ARG WEEKDAY in your area</td></tr><tr><td>(b)</td><td colspan=\"2\">LSTM LBL-100 Yes , light rain is likely today ,</td><td>Yes , light rain is likely today .</td></tr><tr><td/><td/><td>and light thunderstorms and rain are</td><td>ARG WEEKDAY will also have light</td></tr><tr><td/><td/><td>likely on ARG WEEKDAY and light</td><td>rain and light thunderstorms and rain are</td></tr><tr><td/><td/><td>thunderstorms and rain are likely on</td><td>likely on ARG WEEKDAY</td></tr><tr><td/><td/><td>ARG WEEKDAY</td><td/></tr></table>"
},
"TABREF8": {
"html": null,
"num": null,
"type_str": "table",
"text": "",
"content": "<table><tr><td>: Examples of correctness errors</td></tr></table>"
},
"TABREF10": {
"html": null,
"num": null,
"type_str": "table",
"text": "Agreement rate of human evaluation of correctness with tree accuracy (excluding indeterminate 'same' judgments)",
"content": "<table/>"
}
}
}
}