{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:28:06.586822Z" }, "title": "Self-Training for Compositional Neural NLG in Task-Oriented Dialogue", "authors": [ { "first": "Xintong", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Ohio State University", "location": {} }, "email": "" }, { "first": "Jory", "middle": [], "last": "Stevens-Guille", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Ohio State University", "location": {} }, "email": "stevensguille.1@osu.edu" }, { "first": "Aleksandre", "middle": [], "last": "Maskharashvili", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Ohio State University", "location": {} }, "email": "maskharashvili.1@osu.edu" }, { "first": "Michael", "middle": [], "last": "White", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Ohio State University", "location": {} }, "email": "mwhite@ling.osu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Neural approaches to natural language generation in task-oriented dialogue have typically required large amounts of annotated training data to achieve satisfactory performance, especially when generating from compositional inputs. To address this issue, we show that selftraining enhanced with constrained decoding yields large gains in data efficiency on a conversational weather dataset that employs compositional meaning representations. In particular, our experiments indicate that self-training with constrained decoding can enable sequence-tosequence models to achieve satisfactory quality using vanilla decoding with five to ten times less data than with ordinary supervised baseline; moreover, by leveraging pretrained models, data efficiency can be increased further to fifty times. We confirm the main automatic results with human evaluations and show that they extend to an enhanced, compositional version of the E2E dataset. The end result is an approach that makes it possible to achieve acceptable performance on compositional NLG tasks using hundreds rather than tens of thousands of training samples.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Neural approaches to natural language generation in task-oriented dialogue have typically required large amounts of annotated training data to achieve satisfactory performance, especially when generating from compositional inputs. To address this issue, we show that selftraining enhanced with constrained decoding yields large gains in data efficiency on a conversational weather dataset that employs compositional meaning representations. In particular, our experiments indicate that self-training with constrained decoding can enable sequence-tosequence models to achieve satisfactory quality using vanilla decoding with five to ten times less data than with ordinary supervised baseline; moreover, by leveraging pretrained models, data efficiency can be increased further to fifty times. We confirm the main automatic results with human evaluations and show that they extend to an enhanced, compositional version of the E2E dataset. 
The end result is an approach that makes it possible to achieve acceptable performance on compositional NLG tasks using hundreds rather than tens of thousands of training samples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Neural approaches to natural language generation (NLG) have received increasing attention due to their flexibility and end-to-end trainability (Wen et al., 2016; Mei et al., 2016; Du\u0161ek and Jurcicek, 2016; Du\u0161ek et al., 2019). However, despite using simplistic input meaning representations (MR), most neural models require large quantities of clean annotated training data in order to obtain good performance. As such, the time and expense required to obtain sufficient training data is a significant obstacle to deploying neural NLG models at scale.", "cite_spans": [ { "start": 143, "end": 161, "text": "(Wen et al., 2016;", "ref_id": "BIBREF38" }, { "start": 162, "end": 179, "text": "Mei et al., 2016;", "ref_id": "BIBREF25" }, { "start": 180, "end": 205, "text": "Du\u0161ek and Jurcicek, 2016;", "ref_id": "BIBREF12" }, { "start": 206, "end": 225, "text": "Du\u0161ek et al., 2019)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To enable richer task-oriented dialogue, Balakrishnan et al. (2019) argue for using compositional, tree-structured MRs that include discourse relations, emphasizing the need for applications to exert control over these relations when generating text. Perhaps not surprisingly, their compositional input MRs further exacerbate annotated data needs. To address this issue, they introduce a novel constrained decoding technique that nearly always yields correct output even in challenging cases. However, their constrained decoding method incurs a substantial runtime cost, making it too slow to deploy in task-oriented dialogue systems where low latency is a priority. Thus, finding ways to improve data efficiency for training models that perform satisfactorily with vanilla decoding remains an important challenge.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In order to reduce annotated data needs, Kedzie and McKeown (2019) and Qader et al. (2019) propose self-training methods for NLG, though they do not explore self-training for the more challenging case of generating from compositional input representations. Arun et al. (2020) do explore self-training with compositional inputs, but they do not consider constrained decoding. In this paper, we investigate for the first time whether constrained decoding can be used during self-training to enhance data efficiency for compositional neural NLG, since the speed of constrained decoding is much less of a concern during self-training than it is at runtime in dialogue systems. In particular, we adapt and extend He et al.'s (2020) approach to self-training for MT to the setting of neural NLG from compositional MRs, comparing vanilla self-training to self-training enhanced with constrained decoding as well as with reverse model reranking (Shen et al., 2019; Yee et al., 2019), a simpler technique where the n-best outputs of the forward model are reranked using scores from a reverse model, as sketched below. 
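To make this concrete, the following is a minimal Python sketch of one self-training round under the two pseudo-label quality strategies; it is an illustration only, not the authors' released implementation, and the callables train, nbest, covers_mr, and reverse_score are hypothetical stand-ins for the paper's fairseq training runs, beam search, constrained-decoding/tree-accuracy check, and reverse model scoring.

```python
from typing import Callable, List, Sequence, Tuple

Pair = Tuple[str, str]  # (linearized MR, annotated response text)

def self_train_round(
    labeled: Sequence[Pair],
    unlabeled_mrs: Sequence[str],
    train: Callable[[List[Pair]], object],               # fits a seq2seq model on (source, target) pairs
    nbest: Callable[[object, str], List[str]],           # n-best beam outputs of a model for one input
    covers_mr: Callable[[str, str], bool],               # tree-accuracy-style check: output realizes the input MR
    reverse_score: Callable[[object, str, str], float],  # log P_reverse(MR | text)
    strategy: str = "constrained",
) -> object:
    # Train a forward (MR -> text) and a reverse (text -> MR) model on the small labeled set.
    forward = train(list(labeled))
    reverse = train([(txt, mr) for mr, txt in labeled])

    # Pseudo-label the unlabeled MRs, improving the pseudo-texts with one of the two strategies.
    pseudo: List[Pair] = []
    for mr in unlabeled_mrs:
        candidates = nbest(forward, mr)
        if strategy == "constrained":
            # keep only candidates whose bracketed tree structure covers the input MR
            candidates = [y for y in candidates if covers_mr(mr, y)]
        elif strategy == "rerank":
            # reverse model reranking: prefer texts from which the MR is easy to reconstruct
            candidates = sorted(candidates, key=lambda y: reverse_score(reverse, y, mr), reverse=True)
        if candidates:
            pseudo.append((mr, candidates[0]))

    # Retrain the forward model on labeled + pseudo-labeled pairs.
    return train(list(labeled) + pseudo)
```

At runtime, only the final forward model is used with vanilla beam search, so the cost of constrained decoding is paid only during self-training.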
In both cases, the idea is to enhance the quality of the pseudo-annotated texts created during self-training, so that self-training can more successfully avoid entrenching the model's own mistakes. We show that self-training benefits considerably from both methods, and that constrained decoding yields especially large gains in data efficiency. In particular, our experiments indicate that using constrained decoding during self-training, rather than at runtime, enables standard sequence-to-sequence (seq2seq) models to achieve satisfactory quality with much reduced latency. [Caption: Example compositional MR and annotated response from Balakrishnan et al.'s (2019) conversational weather dataset. In the actual dataset, discourse relations have a DS prefix (e.g., DS CONTRAST), dialog acts have a DG prefix (e.g., DG INFORM) and arguments have an ARG prefix (e.g., ARG CITY); these are elided here for brevity.]", "cite_spans": [ { "start": 41, "end": 66, "text": "Kedzie and McKeown (2019)", "ref_id": "BIBREF20" }, { "start": 71, "end": 90, "text": "Qader et al. (2019)", "ref_id": "BIBREF31" }, { "start": 257, "end": 275, "text": "Arun et al. (2020)", "ref_id": "BIBREF0" }, { "start": 708, "end": 726, "text": "He et al.'s (2020)", "ref_id": null }, { "start": 937, "end": 956, "text": "(Shen et al., 2019;", "ref_id": "BIBREF35" }, { "start": 957, "end": 974, "text": "Yee et al., 2019)", "ref_id": "BIBREF42" }, { "start": 1279, "end": 1307, "text": "Balakrishnan et al.'s (2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our contributions are two-fold. On Balakrishnan et al.'s (2019) conversational weather dataset, we show that using constrained decoding during self-training and their SEQ2SEQ-TREE model at runtime yields performance with 20% of the annotated data that is comparable to supervised training on the full training set, and by leveraging pretrained models, annotated data needs can be further reduced to 2%. We then confirm the main automatic metric results with human evaluations and show that they hold for Balakrishnan et al.'s (2019) enhanced version of the E2E dataset (Du\u0161ek et al., 2019).", "cite_spans": [ { "start": 500, "end": 528, "text": "Balakrishnan et al.'s (2019)", "ref_id": null }, { "start": 565, "end": 585, "text": "(Du\u0161ek et al., 2019)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Neural NLG seq2seq models aim to generate a natural language text", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2" }, { "text": "y = y_1, \u2022 \u2022 \u2022, y_{|y|} from a meaning representation x = x_1, \u2022 \u2022 \u2022,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2" }, { "text": "x_{|x|} by modeling the conditional probability", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(y|x) = \u220f_{i=1}^{|y|} P(y_i | y_{<i}, x)" } ], "section": "Method", "sec_num": "2" }, { "text": "... (p > 0.3 in all cases).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human Evaluation of Grammaticality", "sec_num": "3.6" }, { "text": "The most frequent grammaticality issue, especially for LSTM ST-CD-20, was missing punctuation between independent clauses, as shown in (a) in Table 2. Other errors included occasional agreement errors or missing constituents, as in (b). 
Example correctness errors appear in Table 3; they usually involved missing information, but sometimes also repeated or extra information.", "cite_spans": [], "ref_spans": [ { "start": 274, "end": 281, "text": "Table 3", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Qualitative Analysis", "sec_num": "3.7" }, { "text": "We also evaluate our strategies on the enhanced E2E dataset. As shown in Figure 4 in the Appendix, we can draw the same general conclusions regarding data efficiency as with the conversational weather dataset. 6 Both constrained decoding and reverse model reranking improve upon vanilla self-training, with constrained decoding being more effective when using less parallel data. Notably, for LSTM models, with vanilla decoding at runtime, the tree accuracy and BLEU of self-training with constrained decoding and 20% of the parallel data (ST-CD-20) are essentially identical to those of the supervised model using all the available data (LBL-100). For BART models, the performance of ST-CD-02 is also very similar to that of LBL-100: while the BLEU score of ST-CD-02 is slightly lower than that of LBL-100, it is still very high, and the tree accuracy of ST-CD-02 is slightly higher than the tree accuracy of LBL-100. Likewise, our general approach to self-training (He et al., 2020) is much simpler than in Chang et al.'s (2021) work, where they generate new text samples using GPT-2 (unconditioned on any input) and then pair them with data samples. Earlier, Chisholm et al. (2017) train NLG and NLU models that share parameters to reduce the risk of hallucination. Our iterative method of training forward and reverse seq2seq models instead draws from Yee et al.'s (2019) reverse model reranking approach and is much simpler to implement. Additionally, Nie et al. (2019) apply self-training to an NLU model to reduce the noise in the original MR-text pairs in order to mitigate the hallucination problem in NLG, but they do not investigate data efficiency issues. Also related is work on back-translation (Sennrich et al., 2016) in MT, which starts from the assumption that there is much target-side data; by contrast, self-training assumes there is much source-side data, as is the case with our task (where new unlabeled MRs can be easily created).", "cite_spans": [ { "start": 206, "end": 207, "text": "6", "ref_id": null }, { "start": 959, "end": 975, "text": "(He et al., 2020", "ref_id": "BIBREF18" }, { "start": 1002, "end": 1023, "text": "Chang et al.'s (2021)", "ref_id": null }, { "start": 1151, "end": 1173, "text": "Chisholm et al. (2017)", "ref_id": "BIBREF8" }, { "start": 1345, "end": 1364, "text": "Yee et al.'s (2019)", "ref_id": null }, { "start": 1695, "end": 1718, "text": "(Sennrich et al., 2016)", "ref_id": "BIBREF33" } ], "ref_spans": [ { "start": 73, "end": 81, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "E2E Confirmation", "sec_num": "3.8" }, { "text": "More recent work takes advantage of pre-trained language models to develop few-shot NLG methods. Chen et al. (2019) show impressive results with just 200 training items using a specialized table encoder with GPT-2, while Peng et al. (2020) use cross-domain training (an orthogonal approach) together with GPT-2; neither investigates more challenging compositional inputs. Although Arun et al. (2020) do use BART on compositional inputs, their tree accuracy levels are much lower even when using considerably more data.", "cite_spans": [ { "start": 97, "end": 115, "text": "Chen et al. (2019)", "ref_id": "BIBREF7" }, { "start": 221, "end": 239, "text": "Peng et al. (2020)", "ref_id": "BIBREF30" }, { "start": 381, "end": 399, "text": "Arun et al. (2020)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "E2E Confirmation", "sec_num": "3.8" }, { "text": "More generally, reverse (or reconstructor) models have taken on greater theoretical interest thanks to Rational Speech Act (RSA) theory (Frank et al., 2009) and have recently proved useful in NLG (Fried et al., 2018; Shen et al., 2019). Our approach differs in using reverse models during self-training rather than at runtime. Work on combining parsing and generation for ambiguity avoidance goes back much farther (Neumann and van Noord, 1992), with managing the trade-off between fluency and ambiguity avoidance a more recent topic (Duan and White, 2014) that we also leave for future work. Constrained decoding (Balakrishnan et al., 2019) is inspired by coverage tracking in grammar-based approaches to realization (Kay, 1996; Carroll and Oepen, 2005; White, 2006); its use during self-training is novel to this work.", "cite_spans": [ { "start": 136, "end": 156, "text": "(Frank et al., 2009)", "ref_id": "BIBREF16" }, { "start": 196, "end": 216, "text": "(Fried et al., 2018;", "ref_id": "BIBREF17" }, { "start": 217, "end": 235, "text": "Shen et al., 2019)", "ref_id": "BIBREF35" }, { "start": 415, "end": 444, "text": "(Neumann and van Noord, 1992)", "ref_id": "BIBREF26" }, { "start": 535, "end": 557, "text": "(Duan and White, 2014)", "ref_id": "BIBREF10" }, { "start": 719, "end": 730, "text": "(Kay, 1996;", "ref_id": "BIBREF19" }, { "start": 731, "end": 755, "text": "Carroll and Oepen, 2005;", "ref_id": "BIBREF4" }, { "start": 756, "end": 768, "text": "White, 2006)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "E2E Confirmation", "sec_num": "3.8" }, { "text": "In this paper, we have shown that using self-training with constrained decoding in compositional neural NLG can deliver large gains in data efficiency, enabling seq2seq models to achieve satisfactory quality using vanilla decoding with much less annotated data. The idea of using constrained decoding with self-training rather than for runtime inference is a very simple one, but ours is the first paper to investigate the idea, and we show via thorough experimentation and evaluation that it works remarkably well. In our experiments, we found that LSTM models trained from scratch can increase data efficiency by a factor of at least 5, while pretrained BART models yielded a 50-fold increase, achieving essentially comparable levels of correctness and grammaticality using only 2% of the existing training data. 
As such, the approach promises to help pave the way towards developing systems with mere hundreds rather than tens of thousands of annotated samples, potentially eliminating the need for crowdsourcing in system development.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "5" }, { "text": "In future work, it would be worth exploring ways of at least partially automating the addition of semantic annotations to the target texts, using methods that treat such annotations as latent (Xu et al., 2021), to facilitate using our approach on a new task or dataset.", "cite_spans": [ { "start": 179, "end": 195, "text": "Xu et al., 2021)", "ref_id": "BIBREF41" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "5" }, { "text": "Our code will be released upon acceptance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "5" }, { "text": "The dependencies are specified in requirements.txt. Code usage instructions are in README.md and self-training/README.md. Table 4 shows the detailed breakdown of agreement between the expert judges and tree accuracy. We can observe that agreement with tree accuracy is higher with LSTM models than with BART, and higher where there is a significant difference in tree accuracy than in the one case where there was no significant difference (BART ST-CD-02 vs. BART LBL-100). For this comparison, there were relatively few discrepancies in tree accuracy to sample from, and the items in question likely represent somewhat unusual cases. In examining the handful of cases where the judges agreed with each other but not with tree accuracy, about half were real errors where BART's words did not match the nonterminals (influenced by its pre-trained knowledge), while the other half had (presumably rare) errors in the input or reference. It is not surprising that tree accuracy would be somewhat less reliable with BART, as BART relies on its pre-trained knowledge as well as the input in making generation choices. For example, in one case the BART ST-CD-02 model output \"It's not expected to be warm tomorrow morning in your area. The temperature will drop to ARG TEMP tomorrow.\" Here, it seems that BART inferred that if it won't be warm tomorrow, that may well be because the temperature is dropping. However, \"will drop\" is not part of the input and may or may not be factual. 
Since these words appear outside of the non-terminal signaling the low temperature in the output, they are not checked by tree accuracy, and thus this error is missed.", "cite_spans": [], "ref_spans": [ { "start": 122, "end": 129, "text": "Table 4", "ref_id": "TABREF10" } ], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "5" }, { "text": "ST-CD-20 vs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LSTM", "sec_num": null }, { "text": "Judge ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LBL-20 LBL-100", "sec_num": null }, { "text": "Arguments appearing multiple times in the input MR are only required to appear once in the output.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "With 20% of the parallel data, ST-CD exceeds LBL in tree accuracy while trailing it slightly in BLEU. 5 Confirmed in significance tests on tree accuracy and human evaluation later in this section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Note that the BLEU scores here are calculated in the same generous way as in Balakrishnan et al.'s (2019) evaluation. In particular, since multiple test MRs in the enhanced data have the same original MR, we select the best generation of the same original MR using NLTK's (Bird et al., 2009) implementation of sentence BLEU on multi-references.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/pytorch/fairseq/tree/master/examples/bart", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank the Ohio Supercomputer Center (Center, 1987) for providing sufficient computational resources for training the many large models in our experiments. This research was supported by a collaborative open science research agreement between Facebook and The Ohio State University. The last author is a paid consultant for Facebook.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null }, { "text": "For LSTM models, the word embedding and hidden size dimensions are 300 and 128 respectively, and the decoder output embedding size is 512. The dropout rate for both encoder and decoder is 0.2. There are no more than 128 sentences in a batch. Training uses early stopping when the validation loss has not improved for the last 20 epochs. The learning rate is 0.001, and the scheduler is ReduceLROnPlateau with factor 0.1 and patience 3. The maximum output length is 2 times the source length plus 50, and the beam size is 5. The loss function is optimized with Adam (Kingma and Ba, 2014), where \u03b2 1 = 0.9, \u03b2 2 = 0.999 and \u03b5 = 10 \u22128 . For BART models, we use the BART-Large model available in fairseq, which has 12 encoder and decoder layers. 7 The dropout rate for both encoder and decoder is 0.1. There are no more than 2048 tokens in a batch. Training uses early stopping when the validation loss has not improved for the last 20 epochs. The learning rate is 0.00003, and the scheduler is polynomial decay with 1000 warmup updates. The maximum output length is 1024. The loss function is optimized with Adam (Kingma and Ba, 2014), where \u03b2 1 = 0.9, \u03b2 2 = 0.999 and \u03b5 = 10 \u22128 . For every experiment, the computing infrastructure we used is an NVIDIA V100 GPU and an Intel(R) Xeon(R) Platinum 8268 CPU @ 2.90GHz. 
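As a concrete reference for the optimizer and scheduler settings just described, here is a short PyTorch sketch; it is an approximation for clarity rather than the authors' actual fairseq configuration, and the dummy model plus the LSTM_CFG/BART_CFG dictionaries are hypothetical placeholders.

```python
import torch
from torch import nn

# Hyperparameters reported above (placeholders, not the actual fairseq configs).
LSTM_CFG = dict(word_emb=300, hidden=128, dec_out_emb=512, dropout=0.2,
                max_sentences=128, lr=1e-3, beam_size=5, early_stop_patience=20)
BART_CFG = dict(dropout=0.1, max_tokens=2048, lr=3e-5, warmup_updates=1000,
                max_output_len=1024, early_stop_patience=20)

model = nn.Linear(8, 8)  # stand-in for the seq2seq model's parameters

# Adam with beta_1 = 0.9, beta_2 = 0.999, eps = 1e-8 (Kingma and Ba, 2014).
optimizer = torch.optim.Adam(model.parameters(), lr=LSTM_CFG["lr"],
                             betas=(0.9, 0.999), eps=1e-8)

# LSTM runs: reduce the learning rate on a validation-loss plateau
# (factor 0.1, patience 3); training stops early after 20 epochs
# without validation improvement.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.1, patience=3)

# After each validation pass: scheduler.step(val_loss)
```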
The numbers of trainable parameters of LSTM models for weather and E2E datasets are 2,212,928 and 3,079,256 respectively. Training a LSTM model on the full weather dataset takes around 0.5k seconds for 38 epochs. Training a LSTM model on the pseudo-labeled weather dataset takes around 3.4k seconds for 57 epochs. Training and validation loss at convergence is around 1.8. The speed of vanilla decoding was 37,973 tokens/s, and the speed of constrained decoding was 532.61 tokens/s. The numbers of trainable parameters of BART models for weather and E2E datasets are both 406,290,432. Training a BART model on the full weather dataset takes around 10k seconds for 21 epochs. Training a BART model on the pseudo-labeled weather dataset takes around 42k seconds for 20 epochs. Training and validation loss at convergence is around 2.1. Figure 5 : Tree accuracy and BLEU scores of LSTM and two self-training strategies by parallel training data size with constrained decoding at runtime on the conversational weather dataset and the enhanced E2E dataset. The self-training results of the enhanced E2E dataset are measured on the first iteration. Figure 6 : Tree accuracy and BLEU scores of BART and two self-training strategies by parallel training data size with constrained decoding at runtime on the conversational weather dataset and the enhanced E2E dataset. The self-training results of the enhanced E2E dataset are measured on the first iteration.", "cite_spans": [ { "start": 742, "end": 743, "text": "7", "ref_id": null } ], "ref_spans": [ { "start": 2146, "end": 2154, "text": "Figure 5", "ref_id": null }, { "start": 2455, "end": 2463, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "A Reproducibility Details", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Best practices for data-efficient modeling in NLG:how to train production-ready neural models with less data", "authors": [ { "first": "Ankit", "middle": [], "last": "Arun", "suffix": "" }, { "first": "Soumya", "middle": [], "last": "Batra", "suffix": "" }, { "first": "Vikas", "middle": [], "last": "Bhardwaj", "suffix": "" }, { "first": "Ashwini", "middle": [], "last": "Challa", "suffix": "" }, { "first": "Pinar", "middle": [], "last": "Donmez", "suffix": "" }, { "first": "Peyman", "middle": [], "last": "Heidari", "suffix": "" }, { "first": "Hakan", "middle": [], "last": "Inan", "suffix": "" }, { "first": "Shashank", "middle": [], "last": "Jain", "suffix": "" }, { "first": "Anuj", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Shawn", "middle": [], "last": "Mei", "suffix": "" }, { "first": "Karthik", "middle": [], "last": "Mohan", "suffix": "" }, { "first": "Michael", "middle": [], "last": "White", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 28th International Conference on Computational Linguistics: Industry Track", "volume": "", "issue": "", "pages": "64--77", "other_ids": { "DOI": [ "10.18653/v1/2020.coling-industry.7" ] }, "num": null, "urls": [], "raw_text": "Ankit Arun, Soumya Batra, Vikas Bhardwaj, Ashwini Challa, Pinar Donmez, Peyman Heidari, Hakan Inan, Shashank Jain, Anuj Kumar, Shawn Mei, Karthik Mohan, and Michael White. 2020. Best practices for data-efficient modeling in NLG:how to train production-ready neural models with less data. In Proceedings of the 28th International Conference on Computational Linguistics: Industry Track, pages 64-77, Online. 
International Committee on Compu- tational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Constrained decoding for neural NLG from compositional representations in task-oriented dialogue", "authors": [ { "first": "Anusha", "middle": [], "last": "Balakrishnan", "suffix": "" }, { "first": "Jinfeng", "middle": [], "last": "Rao", "suffix": "" }, { "first": "Kartikeya", "middle": [], "last": "Upasani", "suffix": "" }, { "first": "Michael", "middle": [], "last": "White", "suffix": "" }, { "first": "Rajen", "middle": [], "last": "Subba", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "831--844", "other_ids": { "DOI": [ "10.18653/v1/P19-1080" ] }, "num": null, "urls": [], "raw_text": "Anusha Balakrishnan, Jinfeng Rao, Kartikeya Upasani, Michael White, and Rajen Subba. 2019. Constrained decoding for neural NLG from compositional repre- sentations in task-oriented dialogue. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 831-844, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Natural language processing with Python: analyzing text with the natural language toolkit", "authors": [ { "first": "Steven", "middle": [], "last": "Bird", "suffix": "" }, { "first": "Ewan", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Loper", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steven Bird, Ewan Klein, and Edward Loper. 2009. Nat- ural language processing with Python: analyzing text with the natural language toolkit. \" O'Reilly Media, Inc.\".", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Generating and evaluating evaluative arguments", "authors": [ { "first": "Giuseppe", "middle": [], "last": "Carenini", "suffix": "" }, { "first": "Johanna", "middle": [ "D" ], "last": "Moore", "suffix": "" } ], "year": 2006, "venue": "Artificial Intelligence", "volume": "170", "issue": "", "pages": "925--952", "other_ids": {}, "num": null, "urls": [], "raw_text": "Giuseppe Carenini and Johanna D. Moore. 2006. Gener- ating and evaluating evaluative arguments. Artificial Intelligence, 170:925-952.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "High efficiency realization for a wide-coverage unification grammar", "authors": [ { "first": "John", "middle": [], "last": "Carroll", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Oepen", "suffix": "" } ], "year": 2005, "venue": "Proc. IJCNLP-05", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Carroll and Stefan Oepen. 2005. High efficiency realization for a wide-coverage unification grammar. In Proc. 
IJCNLP-05.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Neural data-to-text generation with LM-based text augmentation", "authors": [ { "first": "Ernie", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Xiaoyu", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Dawei", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Vera", "middle": [], "last": "Demberg", "suffix": "" }, { "first": "Hui", "middle": [], "last": "Su", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume", "volume": "", "issue": "", "pages": "758--768", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ernie Chang, Xiaoyu Shen, Dawei Zhu, Vera Demberg, and Hui Su. 2021. Neural data-to-text generation with LM-based text augmentation. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Vol- ume, pages 758-768, Online. Association for Com- putational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Few-shot nlg with pre-trained language model", "authors": [ { "first": "Zhiyu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Harini", "middle": [], "last": "Eavani", "suffix": "" }, { "first": "Wenhu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yinyin", "middle": [], "last": "Liu", "suffix": "" }, { "first": "William", "middle": [ "Yang" ], "last": "Wang", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhiyu Chen, Harini Eavani, Wenhu Chen, Yinyin Liu, and William Yang Wang. 2019. Few-shot nlg with pre-trained language model.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Learning to generate one-sentence biographies from Wikidata", "authors": [ { "first": "Andrew", "middle": [], "last": "Chisholm", "suffix": "" }, { "first": "Will", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Hachey", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "633--642", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew Chisholm, Will Radford, and Ben Hachey. 2017. Learning to generate one-sentence biographies from Wikidata. In Proceedings of the 15th Conference of the European Chapter of the Association for Compu- tational Linguistics: Volume 1, Long Papers, pages 633-642, Valencia, Spain. Association for Computa- tional Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A strategy for information presentation in spoken dialog systems", "authors": [ { "first": "Vera", "middle": [], "last": "Demberg", "suffix": "" }, { "first": "Andi", "middle": [], "last": "Winterboer", "suffix": "" }, { "first": "Johanna", "middle": [ "D" ], "last": "Moore", "suffix": "" } ], "year": 2011, "venue": "Computational Linguistics", "volume": "37", "issue": "3", "pages": "489--539", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vera Demberg, Andi Winterboer, and Johanna D Moore. 2011. A strategy for information presentation in spoken dialog systems. Computational Linguistics, 37(3):489-539.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "That's not what I meant! 
Using parsers to avoid structural ambiguities in generated text", "authors": [ { "first": "Manjuan", "middle": [], "last": "Duan", "suffix": "" }, { "first": "Michael", "middle": [], "last": "White", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.3115/v1/P14-1039" ] }, "num": null, "urls": [], "raw_text": "Manjuan Duan and Michael White. 2014. That's not what I meant! Using parsers to avoid structural ambi- guities in generated text. In Proceedings of the 52nd", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Annual Meeting of the Association for Computational Linguistics", "authors": [], "year": null, "venue": "", "volume": "1", "issue": "", "pages": "413--423", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 413-423, Baltimore, Maryland. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Sequence-tosequence generation for spoken dialogue via deep syntax trees and strings", "authors": [ { "first": "Ond\u0159ej", "middle": [], "last": "Du\u0161ek", "suffix": "" }, { "first": "Filip", "middle": [], "last": "Jurcicek", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/P16-2008" ] }, "num": null, "urls": [], "raw_text": "Ond\u0159ej Du\u0161ek and Filip Jurcicek. 2016. Sequence-to- sequence generation for spoken dialogue via deep syntax trees and strings. In Proceedings of the 54th", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Annual Meeting of the Association for Computational Linguistics", "authors": [], "year": null, "venue": "", "volume": "2", "issue": "", "pages": "45--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 45-51. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Findings of the E2E NLG Challenge", "authors": [ { "first": "Ond\u0159ej", "middle": [], "last": "Du\u0161ek", "suffix": "" }, { "first": "Jekaterina", "middle": [], "last": "Novikova", "suffix": "" }, { "first": "Verena", "middle": [], "last": "Rieser", "suffix": "" } ], "year": 2018, "venue": "Proc. of the 11th International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "322--328", "other_ids": { "DOI": [ "10.18653/v1/W18-6539" ] }, "num": null, "urls": [], "raw_text": "Ond\u0159ej Du\u0161ek, Jekaterina Novikova, and Verena Rieser. 2018. Findings of the E2E NLG Challenge. In Proc. of the 11th International Conference on Natu- ral Language Generation, pages 322-328, Tilburg, The Netherlands. Association for Computational Lin- guistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Evaluating the state-of-the-art of end-to-end natural language generation", "authors": [ { "first": "Ond\u0159ej", "middle": [], "last": "Du\u0161ek", "suffix": "" }, { "first": "Jekaterina", "middle": [], "last": "Novikova", "suffix": "" }, { "first": "Verena", "middle": [], "last": "Rieser", "suffix": "" } ], "year": 2019, "venue": "The E2E NLG Challenge", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1901.11528" ] }, "num": null, "urls": [], "raw_text": "Ond\u0159ej Du\u0161ek, Jekaterina Novikova, and Verena Rieser. 2019. 
Evaluating the state-of-the-art of end-to-end natural language generation: The E2E NLG Chal- lenge. arXiv preprint arXiv:1901.11528.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Informative communication in word production and word learning", "authors": [ { "first": "Michael", "middle": [], "last": "Frank", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Goodman", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Lai", "suffix": "" }, { "first": "Joshua", "middle": [], "last": "Tenenbaum", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Annual Meeting of the Cognitive Science Society", "volume": "", "issue": "", "pages": "1228--1233", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Frank, Noah Goodman, Peter Lai, and Joshua Tenenbaum. 2009. Informative communication in word production and word learning. In Proceedings of the Annual Meeting of the Cognitive Science Soci- ety, pages 1228-1233.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Unified pragmatic models for generating and following instructions", "authors": [ { "first": "Daniel", "middle": [], "last": "Fried", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Andreas", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1951--1963", "other_ids": { "DOI": [ "10.18653/v1/N18-1177" ] }, "num": null, "urls": [], "raw_text": "Daniel Fried, Jacob Andreas, and Dan Klein. 2018. Uni- fied pragmatic models for generating and following instructions. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long Papers), pages 1951-1963, New Orleans, Louisiana. Association for Computa- tional Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Revisiting self-training for neural sequence generation", "authors": [ { "first": "Junxian", "middle": [], "last": "He", "suffix": "" }, { "first": "Jiatao", "middle": [], "last": "Gu", "suffix": "" }, { "first": "Jiajun", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Marc'aurelio", "middle": [], "last": "Ranzato", "suffix": "" } ], "year": 2020, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Junxian He, Jiatao Gu, Jiajun Shen, and Marc'Aurelio Ranzato. 2020. Revisiting self-training for neural sequence generation. In International Conference on Learning Representations.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Chart generation", "authors": [ { "first": "Martin", "middle": [], "last": "Kay", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "200--204", "other_ids": { "DOI": [ "10.3115/981863.981890" ] }, "num": null, "urls": [], "raw_text": "Martin Kay. 1996. Chart generation. In Proceedings of the 34th Annual Meeting of the Association for Com- putational Linguistics, pages 200-204, Santa Cruz, California, USA. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A good sample is hard to find: Noise injection sampling and self-training for neural language generation models", "authors": [ { "first": "Chris", "middle": [], "last": "Kedzie", "suffix": "" }, { "first": "Kathleen", "middle": [], "last": "Mckeown", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 12th International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "584--593", "other_ids": { "DOI": [ "10.18653/v1/W19-8672" ] }, "num": null, "urls": [], "raw_text": "Chris Kedzie and Kathleen McKeown. 2019. A good sample is hard to find: Noise injection sampling and self-training for neural language generation models. In Proceedings of the 12th International Conference on Natural Language Generation, pages 584-593, Tokyo, Japan. Association for Computational Lin- guistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.6980" ] }, "num": null, "urls": [], "raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "authors": [ { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Marjan", "middle": [], "last": "Ghazvininejad", "suffix": "" }, { "first": "Abdelrahman", "middle": [], "last": "Mohamed", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "7871--7880", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.703" ] }, "num": null, "urls": [], "raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and com- prehension. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computa- tional Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Effective self-training for parsing", "authors": [ { "first": "David", "middle": [], "last": "Mcclosky", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Charniak", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Human Language Technology Conference of the NAACL, Main Conference", "volume": "", "issue": "", "pages": "152--159", "other_ids": {}, "num": null, "urls": [], "raw_text": "David McClosky, Eugene Charniak, and Mark Johnson. 2006. 
Effective self-training for parsing. In Proceed- ings of the Human Language Technology Conference of the NAACL, Main Conference, pages 152-159, New York City, USA. Association for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Note on the sampling error of the difference between correlated proportions or percentages", "authors": [ { "first": "Quinn", "middle": [], "last": "Mcnemar", "suffix": "" } ], "year": 1947, "venue": "Psychometrika", "volume": "12", "issue": "2", "pages": "153--157", "other_ids": {}, "num": null, "urls": [], "raw_text": "Quinn McNemar. 1947. Note on the sampling error of the difference between correlated proportions or percentages. Psychometrika, 12(2):153-157.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "What to talk about and how? selective generation using lstms with coarse-to-fine alignment", "authors": [ { "first": "Hongyuan", "middle": [], "last": "Mei", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" }, { "first": "R. Matthew", "middle": [], "last": "Walter", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "720--730", "other_ids": { "DOI": [ "10.18653/v1/N16-1086" ] }, "num": null, "urls": [], "raw_text": "Hongyuan Mei, Mohit Bansal, and R. Matthew Wal- ter. 2016. What to talk about and how? selective generation using lstms with coarse-to-fine alignment. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 720-730. Association for Computational Lin- guistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Selfmonitoring with reversible grammars", "authors": [ { "first": "Gunter", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Gertjan", "middle": [], "last": "Van Noord", "suffix": "" } ], "year": 1992, "venue": "The 15th International Conference on Computational Linguistics", "volume": "2", "issue": "", "pages": "700--706", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gunter Neumann and Gertjan van Noord. 1992. Self- monitoring with reversible grammars. In COLING 1992 Volume 2: The 15th International Conference on Computational Linguistics, pages 700-706.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "A simple recipe towards reducing hallucination in neural surface realisation", "authors": [ { "first": "Feng", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Jin-Ge", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Jinpeng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Rong", "middle": [], "last": "Pan", "suffix": "" }, { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2673--2679", "other_ids": { "DOI": [ "10.18653/v1/P19-1256" ] }, "num": null, "urls": [], "raw_text": "Feng Nie, Jin-Ge Yao, Jinpeng Wang, Rong Pan, and Chin-Yew Lin. 2019. A simple recipe towards re- ducing hallucination in neural surface realisation. In Proceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 2673- 2679, Florence, Italy. 
Association for Computational Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "fairseq: A fast, extensible toolkit for sequence modeling", "authors": [ { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Edunov", "suffix": "" }, { "first": "Alexei", "middle": [], "last": "Baevski", "suffix": "" }, { "first": "Angela", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Gross", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Ng", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" } ], "year": 2019, "venue": "Proceedings of NAACL-HLT 2019: Demonstrations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th annual meeting on association for computational linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting on association for compu- tational linguistics, pages 311-318. Association for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Few-shot natural language generation for taskoriented dialog", "authors": [ { "first": "Baolin", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Chenguang", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Chunyuan", "middle": [], "last": "Li", "suffix": "" }, { "first": "Xiujun", "middle": [], "last": "Li", "suffix": "" }, { "first": "Jinchao", "middle": [], "last": "Li", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Zeng", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Baolin Peng, Chenguang Zhu, Chunyuan Li, Xiujun Li, Jinchao Li, Michael Zeng, and Jianfeng Gao. 2020. 
Few-shot natural language generation for task- oriented dialog.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Semi-supervised neural text generation by joint learning of natural language generation and natural language understanding models", "authors": [ { "first": "Raheel", "middle": [], "last": "Qader", "suffix": "" }, { "first": "Fran\u00e7ois", "middle": [], "last": "Portet", "suffix": "" }, { "first": "Cyril", "middle": [], "last": "Labb\u00e9", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 12th International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "552--562", "other_ids": { "DOI": [ "10.18653/v1/W19-8669" ] }, "num": null, "urls": [], "raw_text": "Raheel Qader, Fran\u00e7ois Portet, and Cyril Labb\u00e9. 2019. Semi-supervised neural text generation by joint learn- ing of natural language generation and natural lan- guage understanding models. In Proceedings of the 12th International Conference on Natural Language Generation, pages 552-562, Tokyo, Japan. Associa- tion for Computational Linguistics.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "A tree-to-sequence model for neural NLG in taskoriented dialog", "authors": [ { "first": "Jinfeng", "middle": [], "last": "Rao", "suffix": "" }, { "first": "Kartikeya", "middle": [], "last": "Upasani", "suffix": "" }, { "first": "Anusha", "middle": [], "last": "Balakrishnan", "suffix": "" }, { "first": "Michael", "middle": [], "last": "White", "suffix": "" }, { "first": "Anuj", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Rajen", "middle": [], "last": "Subba", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 12th International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "95--100", "other_ids": { "DOI": [ "10.18653/v1/W19-8611" ] }, "num": null, "urls": [], "raw_text": "Jinfeng Rao, Kartikeya Upasani, Anusha Balakrishnan, Michael White, Anuj Kumar, and Rajen Subba. 2019. A tree-to-sequence model for neural NLG in task- oriented dialog. In Proceedings of the 12th Interna- tional Conference on Natural Language Generation, pages 95-100, Tokyo, Japan. Association for Com- putational Linguistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Improving neural machine translation models with monolingual data", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/P16-1009" ] }, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In Proceedings of the 54th", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Annual Meeting of the Association for Computational Linguistics", "authors": [], "year": null, "venue": "", "volume": "1", "issue": "", "pages": "86--96", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86-96, Berlin, Germany. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Pragmatically informative text generation", "authors": [ { "first": "Sheng", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Fried", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Andreas", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4060--4067", "other_ids": { "DOI": [ "10.18653/v1/N19-1410" ] }, "num": null, "urls": [], "raw_text": "Sheng Shen, Daniel Fried, Jacob Andreas, and Dan Klein. 2019. Pragmatically informative text gener- ation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4060-4067, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Neural data-to-text generation via jointly learning the segmentation and correspondence", "authors": [ { "first": "Xiaoyu", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Ernie", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Hui", "middle": [], "last": "Su", "suffix": "" }, { "first": "Cheng", "middle": [], "last": "Niu", "suffix": "" }, { "first": "Dietrich", "middle": [], "last": "Klakow", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "7155--7165", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.641" ] }, "num": null, "urls": [], "raw_text": "Xiaoyu Shen, Ernie Chang, Hui Su, Cheng Niu, and Di- etrich Klakow. 2020. Neural data-to-text generation via jointly learning the segmentation and correspon- dence. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7155-7165, Online. Association for Computational Linguistics.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Individual and domain adaptation in sentence planning for dialogue", "authors": [ { "first": "Marilyn", "middle": [], "last": "Walker", "suffix": "" }, { "first": "Amanda", "middle": [], "last": "Stent", "suffix": "" }, { "first": "Francois", "middle": [], "last": "Mairesse", "suffix": "" }, { "first": "Rashmi", "middle": [], "last": "Prasad", "suffix": "" } ], "year": 2007, "venue": "Journal of Artificial Intelligence Research (JAIR)", "volume": "30", "issue": "", "pages": "413--456", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marilyn Walker, Amanda Stent, Francois Mairesse, and Rashmi Prasad. 2007. Individual and domain adap- tation in sentence planning for dialogue. 
Journal of Artificial Intelligence Research (JAIR), 30:413-456.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Multi-domain neural network language generation for spoken dialogue systems", "authors": [ { "first": "Milica", "middle": [], "last": "Tsung-Hsien Wen", "suffix": "" }, { "first": "Nikola", "middle": [], "last": "Ga\u0161i\u0107", "suffix": "" }, { "first": "M", "middle": [ "Lina" ], "last": "Mrk\u0161i\u0107", "suffix": "" }, { "first": "Pei-Hao", "middle": [], "last": "Rojas-Barahona", "suffix": "" }, { "first": "David", "middle": [], "last": "Su", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Vandyke", "suffix": "" }, { "first": "", "middle": [], "last": "Young", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "120--129", "other_ids": { "DOI": [ "10.18653/v1/N16-1015" ] }, "num": null, "urls": [], "raw_text": "Tsung-Hsien Wen, Milica Ga\u0161i\u0107, Nikola Mrk\u0161i\u0107, M. Lina Rojas-Barahona, Pei-Hao Su, David Vandyke, and Steve Young. 2016. Multi-domain neu- ral network language generation for spoken dialogue systems. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 120-129. Association for Com- putational Linguistics.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Efficient realization of coordinate structures in combinatory categorial grammar. Research on Language and Computation", "authors": [ { "first": "Michael", "middle": [], "last": "White", "suffix": "" } ], "year": 2006, "venue": "", "volume": "4", "issue": "", "pages": "39--75", "other_ids": { "DOI": [ "10.1007/s11168-006-9010-2" ] }, "num": null, "urls": [], "raw_text": "Michael White. 2006. Efficient realization of coordinate structures in combinatory categorial grammar. Re- search on Language and Computation, 4(1):39-75.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Generating tailored, comparative descriptions with contextually appropriate intonation", "authors": [ { "first": "Michael", "middle": [], "last": "White", "suffix": "" }, { "first": "A", "middle": [ "J" ], "last": "Robert", "suffix": "" }, { "first": "Johanna", "middle": [ "D" ], "last": "Clark", "suffix": "" }, { "first": "", "middle": [], "last": "Moore", "suffix": "" } ], "year": 2010, "venue": "Computational Linguistics", "volume": "36", "issue": "2", "pages": "159--201", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael White, Robert A. J. Clark, and Johanna D. Moore. 2010. Generating tailored, comparative de- scriptions with contextually appropriate intonation. 
Computational Linguistics, 36(2):159-201.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "AggGen: Ordering and aggregating while generating", "authors": [ { "first": "Xinnuo", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Du\u0161ek", "suffix": "" }, { "first": "Verena", "middle": [], "last": "Rieser", "suffix": "" }, { "first": "Ioannis", "middle": [], "last": "Konstas", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "1419--1434", "other_ids": { "DOI": [ "10.18653/v1/2021.acl-long.113" ] }, "num": null, "urls": [], "raw_text": "Xinnuo Xu, Ond\u0159ej Du\u0161ek, Verena Rieser, and Ioannis Konstas. 2021. AggGen: Ordering and aggregating while generating. In Proceedings of the 59th Annual Meeting of the Association for Computational Lin- guistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1419-1434, Online. Association for Computational Linguistics.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Simple and effective noisy channel modeling for neural machine translation", "authors": [ { "first": "Kyra", "middle": [], "last": "Yee", "suffix": "" }, { "first": "Yann", "middle": [], "last": "Dauphin", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "5696--5701", "other_ids": { "DOI": [ "10.18653/v1/D19-1571" ] }, "num": null, "urls": [], "raw_text": "Kyra Yee, Yann Dauphin, and Michael Auli. 2019. Simple and effective noisy channel modeling for neural machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5696-5701, Hong Kong, China. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "TABREF1": { "html": null, "num": null, "type_str": "table", "text": "Example compositional MR and annotated response from", "content": "" }, "TABREF4": { "html": null, "num": null, "type_str": "table", "text": "Tree accuracy and BLEU scores of the LSTM base model and three self-training strategies by parallel training data size with vanilla decoding on the conversational weather dataset. Tree accuracy on pseudo-labeled data is indicated by the same color dashed lines. Performance of the supervised model (LBL) using all of the labeled data is indicated by the gray dashed lines.", "content": "
[Figure 1 plot: Tree Accuracy and BLEU vs. #Training Samples (1% to 100% of the parallel data), with curves for LBL, ST-VAN, ST-RMR, and ST-CD.]
" }, "TABREF7": { "html": null, "num": null, "type_str": "table", "text": "Examples of grammaticality errors", "content": "
Index | System | Error | Reference
(a) | LSTM LBL-20 | Yes , it will be mostly sunny today in your area | Yes , it will be mostly sunny today and ARG WEEKDAY in your area
(b) | LSTM LBL-100 | Yes , light rain is likely today , and light thunderstorms and rain are likely on ARG WEEKDAY and light thunderstorms and rain are likely on ARG WEEKDAY | Yes , light rain is likely today . ARG WEEKDAY will also have light rain and light thunderstorms and rain are likely on ARG WEEKDAY
" }, "TABREF8": { "html": null, "num": null, "type_str": "table", "text": "", "content": "
Table 3: Examples of correctness errors
" }, "TABREF10": { "html": null, "num": null, "type_str": "table", "text": "Agreement rate of human evaluation of correctness with tree accuracy (excluding indeterminate 'same' judgments)", "content": "" } } } }