{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:28:15.103287Z"
},
"title": "Text-to-Text Pre-Training for Data-to-Text Tasks",
"authors": [
{
"first": "Mihir",
"middle": [],
"last": "Kale",
"suffix": "",
"affiliation": {},
"email": "mihirkale@google.com"
},
{
"first": "Abhinav",
"middle": [],
"last": "Rastogi",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We study the pre-train + fine-tune strategy for data-to-text tasks. Our experiments indicate that text-to-text pre-training in the form of T5 (Raffel et al., 2019), enables simple, end-to-end transformer based models to outperform pipelined neural architectures tailored for data-to-text generation, as well as alternative language model based pre-training techniques such as BERT and GPT-2. Importantly, T5 pre-training leads to better generalization, as evidenced by large improvements on out-ofdomain test sets. We hope our work serves as a useful baseline for future research, as transfer learning becomes ever more prevalent for data-to-text tasks.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We study the pre-train + fine-tune strategy for data-to-text tasks. Our experiments indicate that text-to-text pre-training in the form of T5 (Raffel et al., 2019), enables simple, end-to-end transformer based models to outperform pipelined neural architectures tailored for data-to-text generation, as well as alternative language model based pre-training techniques such as BERT and GPT-2. Importantly, T5 pre-training leads to better generalization, as evidenced by large improvements on out-ofdomain test sets. We hope our work serves as a useful baseline for future research, as transfer learning becomes ever more prevalent for data-to-text tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Natural language generation from structured data, or data-to-text (Kukich, 1983; McKeown, 1985) , is the task of generating natural language text conditioned on source content provided in the form of structured data such as a table, graph etc. Some example applications include task oriented dialog (Wen et al., 2015) , summarizing weather forecasts (Sripada et al.; Goldberg et al., 1994) , etc.",
"cite_spans": [
{
"start": 66,
"end": 80,
"text": "(Kukich, 1983;",
"ref_id": "BIBREF9"
},
{
"start": 81,
"end": 95,
"text": "McKeown, 1985)",
"ref_id": "BIBREF14"
},
{
"start": 299,
"end": 317,
"text": "(Wen et al., 2015)",
"ref_id": "BIBREF27"
},
{
"start": 350,
"end": 366,
"text": "(Sripada et al.;",
"ref_id": "BIBREF26"
},
{
"start": 367,
"end": 389,
"text": "Goldberg et al., 1994)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work we study the applicability of large scale text-to-text transfer learning learning for this task. In particular, we focus on pre-training in the form of the \"Text-to-Text Transfer Transformer\" (T5) models released by Raffel et al. (2019) . Finetuning T5 achieves state-of-the-art results on diverse benchmarks spanning task oriented dialogue (MultiWoz), tables-to-text (ToTTo) and graph-totext (WebNLG). Empirical results further demonstrate the following:",
"cite_spans": [
{
"start": 229,
"end": 249,
"text": "Raffel et al. (2019)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Pre-training greatly improves robustness of models to out-of-domain inputs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 By leveraging pre-training, a simple end-toend transformer model can outperform sophis-ticated, multi-stage pipelined approaches and other exotic architectures like graph neural networks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 T5 outperforms alternatives like BERT (Devlin et al., 2018) and GPT-2 (Radford et al., 2019) .",
"cite_spans": [
{
"start": 40,
"end": 61,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF3"
},
{
"start": 66,
"end": 94,
"text": "GPT-2 (Radford et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our approach is simple, only scratching the surface of what is possible. There is much to be explored in the space of leveraging unlabelled data, developing unsupervised objectives etc. that are more tailored for generating text from structured data. We hope our work serves as a useful baseline for future research, as pre-training becomes ever more prevalent for this task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Data-to-Text Early research on data-to-text focused on rule-based methods (Reiter and Dale, 2000) , while recent works have favored neural approaches (Wen et al., 2015) . Liu et al. (2018) generate text by conditioning language models on tables, Puduppully et al. (2019) explicitly model entities and Marcheggiani and Perez-Beltrachini (2018) encode structured data using graph convolutional networks. Ferreira et al. 2019and Moryossef et al. (2019) find that neural pipelined approaches perform better than end-to-end models.",
"cite_spans": [
{
"start": 74,
"end": 97,
"text": "(Reiter and Dale, 2000)",
"ref_id": "BIBREF22"
},
{
"start": 150,
"end": 168,
"text": "(Wen et al., 2015)",
"ref_id": "BIBREF27"
},
{
"start": 171,
"end": 188,
"text": "Liu et al. (2018)",
"ref_id": "BIBREF12"
},
{
"start": 246,
"end": 270,
"text": "Puduppully et al. (2019)",
"ref_id": "BIBREF19"
},
{
"start": 426,
"end": 449,
"text": "Moryossef et al. (2019)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Transfer Learning Devlin et al. (2018) showed that unsupervised pre-training can greatly benefit tasks like, question answering, summarization etc. In particular, Raffel et al. (2019) perform a large scale study of different training objectives, model capacity and size of data. Peng et al. (2020) and Chen et al. (2019b) show that pre-training in the form of GPT-2 can indeed improve performance on the data-to-text task as well. 3 Pre-training",
"cite_spans": [
{
"start": 9,
"end": 38,
"text": "Learning Devlin et al. (2018)",
"ref_id": null
},
{
"start": 163,
"end": 183,
"text": "Raffel et al. (2019)",
"ref_id": "BIBREF21"
},
{
"start": 279,
"end": 297,
"text": "Peng et al. (2020)",
"ref_id": "BIBREF18"
},
{
"start": 302,
"end": 321,
"text": "Chen et al. (2019b)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We rely on the T5 pre-trained models released by Raffel et al. (2019) . They consist of a transformer based encoder-decoder architecture. These models were pre-trained in a multitask fashion with an unsupervised \"span masking\" objective on Common Crawl data as well as supervised translation, summarization, classification, and question answering tasks. Note that none of the supervised tasks include language generation from structured data.",
"cite_spans": [
{
"start": 49,
"end": 69,
"text": "Raffel et al. (2019)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "To study the impact of model capacity, we experiment with different T5 variants -Small (60 million parameters), Base (220 million), Large (770 million) and 3B (3 billion).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "1",
"sec_num": null
},
{
"text": "Our modeling approach is simple. The data-totext task is cast in the text-to-text framework by representing the structured data as a flat string (linearization). Figure 1 shows examples of the input representation for each dataset. We then fine-tune T5 on the data-to-text corpus for a small number of steps.",
"cite_spans": [],
"ref_spans": [
{
"start": 162,
"end": 170,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Fine-tuning",
"sec_num": "4"
},
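As an illustration of the linearization step described in the preceding paragraph, here is a minimal Python sketch that flattens WebNLG-style subject-predicate-object triples into a single input string. The delimiter tokens (<subject>, <predicate>, <object>) are an illustrative assumption; the paper only states that the structured data is represented as a flat string, not the exact format.

```python
# Hypothetical linearization of WebNLG-style triples into a flat string.
# The special delimiter tokens below are an assumption for illustration,
# not necessarily the exact format used by the authors.

def linearize_triples(triples):
    """Flatten (subject, predicate, object) triples into one input string."""
    parts = []
    for subj, pred, obj in triples:
        parts.append(f"<subject> {subj} <predicate> {pred} <object> {obj}")
    return " ".join(parts)

example = [("Aarhus Airport", "cityServed", "Aarhus, Denmark")]
print(linearize_triples(example))
# -> <subject> Aarhus Airport <predicate> cityServed <object> Aarhus, Denmark
```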
{
"text": "Following (Raffel et al., 2019) , models are finetuned with a constant learning rate of 0.001. We use a batch size of 131,072 tokens, and a maximum input length of 512 tokens. The maximum training steps is set to 5K for WebNLG, while the larger ToTTo dataset is trained for 10K steps. The T5 vocabulary consists of 32,000 sentencepieces. All the model parameters are updated in the fine-tuning process.",
"cite_spans": [
{
"start": 10,
"end": 31,
"text": "(Raffel et al., 2019)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning",
"sec_num": "4"
},
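The fine-tuning recipe above can be approximated with a short training-step sketch. This uses the Hugging Face transformers library as a stand-in (an assumption: the authors fine-tuned the released T5 checkpoints, not necessarily through this library), with the reported constant learning rate of 0.001 and a 512-token input limit; the 131,072-token batch budget and the optimizer choice are not reproduced exactly here.

```python
# Minimal fine-tuning sketch, assuming the Hugging Face `transformers` API.
import torch
from transformers import T5ForConditionalGeneration, T5TokenizerFast

model = T5ForConditionalGeneration.from_pretrained("t5-small")
tokenizer = T5TokenizerFast.from_pretrained("t5-small")
# Optimizer choice is illustrative; the paper only specifies a constant LR of 0.001.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

def train_step(source_texts, target_texts):
    """One gradient update on a batch of (linearized input, reference text) pairs."""
    batch = tokenizer(source_texts, max_length=512, truncation=True,
                      padding=True, return_tensors="pt")
    labels = tokenizer(target_texts, max_length=512, truncation=True,
                       padding=True, return_tensors="pt").input_ids
    labels[labels == tokenizer.pad_token_id] = -100  # ignore padding positions in the loss
    loss = model(input_ids=batch.input_ids,
                 attention_mask=batch.attention_mask,
                 labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```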
{
"text": "The best checkpoint is chosen based on the BLEU (Papineni et al., 2002 ) score on the development set. Decoding is done via greedy search. In the final evaluation, for each dataset we rely on metrics used by prior work.",
"cite_spans": [
{
"start": 48,
"end": 70,
"text": "(Papineni et al., 2002",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning",
"sec_num": "4"
},
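A sketch of the checkpoint-selection loop described above: decode the development set with greedy search and keep the checkpoint with the highest corpus BLEU. The sacrebleu package and the helper names (load_checkpoint, dev_inputs, dev_references) are illustrative assumptions; the paper does not name a specific BLEU implementation.

```python
# Greedy decoding + BLEU-based checkpoint selection (illustrative sketch).
import sacrebleu

def dev_bleu(model, tokenizer, dev_inputs, dev_references):
    """Greedy-decode the dev set and return corpus-level BLEU."""
    predictions = []
    for text in dev_inputs:
        ids = tokenizer(text, return_tensors="pt", truncation=True,
                        max_length=512).input_ids
        out = model.generate(ids, max_length=128, num_beams=1)  # greedy search
        predictions.append(tokenizer.decode(out[0], skip_special_tokens=True))
    return sacrebleu.corpus_bleu(predictions, [dev_references]).score

# Hypothetical usage: keep the checkpoint with the best dev BLEU.
# best = max(checkpoints, key=lambda c: dev_bleu(load_checkpoint(c), tokenizer,
#                                                dev_inputs, dev_references))
```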
{
"text": "We conduct experiments on 3 English datasets spanning a variety of domains. \u2022 MultiWoz (Budzianowski et al., 2018 ) is a corpus of 10K human-human dialogs for developing task oriented dialogue systems. For the NLG task, a meaning representation encapsulating system actions must be verbalized into natural language response.",
"cite_spans": [
{
"start": 87,
"end": 113,
"text": "(Budzianowski et al., 2018",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "5"
},
{
"text": "\u2022 WebNLG (Gardent et al., 2017) , where the task is to convert a graph of subject-objectpredicate triples into a textual description.",
"cite_spans": [
{
"start": 9,
"end": 31,
"text": "(Gardent et al., 2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "5"
},
{
"text": "Each dataset uses a different kind of structured data (tables, meaning representations and graph/triples). Table 1 lists the sizes of the three datasets and Figure 1 shows examples for each.",
"cite_spans": [],
"ref_spans": [
{
"start": 107,
"end": 114,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 157,
"end": 165,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "5"
},
{
"text": "Train Dev Test WebNLG 18.1K 2.2k 4.9k ToTTo 120K 7.7k 7.7k Multiwoz 56.8K 7.3k 7.3k 6 Results and Discussion",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": null
},
{
"text": "The evaluation is done using BLEU and METEOR (Lavie and Agarwal, 2007) , similar to (Ferreira et al., 2019) . The test set is split into two partsseen and unseen. The examples in the unseen set are drawn from domains not present in the training set, along with roughly 100 new predicates. Some of the baselines we compare with are: \u2022 Melbourne, a neural encoder-decoder approach, which scored the highest in the automatic evaluation of the WebNLG challenge (Gardent et al., 2017) . The model relies on delexicalization, where entities are replaced with placeholders.",
"cite_spans": [
{
"start": 45,
"end": 70,
"text": "(Lavie and Agarwal, 2007)",
"ref_id": "BIBREF10"
},
{
"start": 84,
"end": 107,
"text": "(Ferreira et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 457,
"end": 479,
"text": "(Gardent et al., 2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "WebNLG",
"sec_num": "6.1"
},
{
"text": "\u2022 GTR-LSTM (Distiawan et al., 2018), which employs a graph based triple encoder.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WebNLG",
"sec_num": "6.1"
},
{
"text": "Step-by-Step (Moryossef et al., 2019) which splits the generation procedure into a planning stage followed by a neural generation stage.",
"cite_spans": [
{
"start": 13,
"end": 37,
"text": "(Moryossef et al., 2019)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "WebNLG",
"sec_num": "6.1"
},
{
"text": "\u2022 Pipeline-Transformer (Ferreira et al., 2019), a pipelined neural system consisting of discourse ordering, text structuring, lexicalization and referring expression generation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WebNLG",
"sec_num": "6.1"
},
{
"text": "\u2022 DualEnc (Zhao et al., 2020) , the current stateof-the-art system. It consists of a graph convolution network based planning model which first predicts the order of the triples, followed by a separate LSTM with attention and copy mechanism model to generate the text. To train the planning model, the approach relies on extra annotations for the triple ordering. Such annotations are can be expensive and time consuming to obtain, especially for large, complex inputs.",
"cite_spans": [
{
"start": 10,
"end": 29,
"text": "(Zhao et al., 2020)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "WebNLG",
"sec_num": "6.1"
},
{
"text": "Results are reported in ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WebNLG",
"sec_num": "6.1"
},
{
"text": "Following (Parikh et al., 2020), BLEU and PAR-ENT are employed as evaluation metrics for this table-to-text generation task. PARENT is a reference less, word-overlap based metric that reflects the factual accuracy of generated text relative to the structured data. Dhingra et al. (2019) find that PARENT correlates better with human factual accuracy judgements in comparison to other generation metrics like ROGUE (Lin, 2004) and METEOR. The following baseline models are compared:",
"cite_spans": [
{
"start": 265,
"end": 286,
"text": "Dhingra et al. (2019)",
"ref_id": "BIBREF4"
},
{
"start": 414,
"end": 425,
"text": "(Lin, 2004)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ToTTo",
"sec_num": "6.2"
},
{
"text": "\u2022 Pointer Generator (See et al., 2017b) -An LSTM based seq2seq model with attention and pointer network based copy mechanism.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ToTTo",
"sec_num": "6.2"
},
{
"text": "\u2022 BERT-to-BERT (Rothe et al., 2019) -A transformer based encoder-decoder model, where both the encoder and decoder are initialized with BERT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ToTTo",
"sec_num": "6.2"
},
{
"text": "Since it deals with open domain tables, ToTTo is arguably the most challenging dataset. Notably, it features a hidden test set, which is split into two halves -Overlap and Non-Overlap. The Non-Overlap test set features examples that are out-ofdomain from the training set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ToTTo",
"sec_num": "6.2"
},
{
"text": "Results are reported in Table 3 . T5-3B 2 achieves state-of-the-art results 3 , improving upon the BERT baseline by 5.5 BLEU and 5.8 PARENT. Moreover, the model is more robust to out-of-domain tables, with larger improvements of 6.6 BLEU and 7.5 PARENT on the Non-Overlap test set. Table 4 reports results on the development set for the different T5 model sizes. T5-Small outperforms BERT-to-BERT, even though it has 3x fewer parameters (220M vs 60M). (Chen et al., 2019a) 6.3 MultiWoz Evaluation on MultiWoz is done using BLEU and SER (Slot Error Rate). SER is the fraction of examples where at least one slot value from the structured data is not expressed in the predicted response. 4 Our baselines are",
"cite_spans": [
{
"start": 453,
"end": 473,
"text": "(Chen et al., 2019a)",
"ref_id": "BIBREF1"
},
{
"start": 687,
"end": 688,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 24,
"end": 31,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 282,
"end": 290,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "ToTTo",
"sec_num": "6.2"
},
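The Slot Error Rate defined above can be stated in a few lines of Python. This is a sketch under the exact-match assumption noted in footnote 4 (no paraphrase handling); the (slot values, predicted response) pair structure is an assumption about the data layout, and the sample pairs are invented for illustration.

```python
# Slot Error Rate: fraction of examples with at least one slot value
# not expressed (by exact, case-insensitive match) in the prediction.

def slot_error_rate(examples):
    errors = 0
    for slot_values, prediction in examples:
        if any(value.lower() not in prediction.lower() for value in slot_values):
            errors += 1
    return errors / len(examples)

examples = [
    (["The Gandhi", "centre"], "The Gandhi is a restaurant in the centre of town."),
    (["19:15"], "Your table is booked for quarter past seven."),  # paraphrase counts as an error
]
print(slot_error_rate(examples))  # -> 0.5
```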
{
"text": "\u2022 HDSA (Chen et al., 2019a ) is a transformer based architecture that encodes the dialog acts into a multi-layer hierarchical graph, with individual attention heads modeling specific nodes in graph.",
"cite_spans": [
{
"start": 7,
"end": 26,
"text": "(Chen et al., 2019a",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "\u2022 SC-GPT2 (Peng et al., 2020 ) is a GPT-2 (345M parameters) model that is further pretrained on a large data-to-text dialog corpus consisting of 400,000 examples and finally fine-tuned on MultiWoz. This 2 stage pretraining approach is currently state-of-the-art for Multiwoz.",
"cite_spans": [
{
"start": 10,
"end": 28,
"text": "(Peng et al., 2020",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "Results are reported in Table 5 . All T5 based models (including T5-small which has 5x fewer parameters) outperform SC-GPT2 by 4-5 BLEU 2 We used beam search with a width of 10 for the test set submission.",
"cite_spans": [
{
"start": 136,
"end": 137,
"text": "2",
"ref_id": null
}
],
"ref_spans": [
{
"start": 24,
"end": 31,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "3 The leaderboard can be found at https://github.com/google-research-datasets/totto. 4 The metric is noisy since the comparison is done via exact match, does not accoutn for paraphrases and does not cover all slots.",
"cite_spans": [
{
"start": 85,
"end": 86,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "without any in-domain pre-training. We note that the SER score on MultiWOZ is slightly worse in comparison with SC-GPT. SC-GPT generates 5 predictions for each input and then ranks them based on the SER score itself, which naturally leads to better slot error rates. On the other hand, we generate a single output.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "Unseen Nat Acc Nat Acc DualEnc 2.30 89.2 1.99 66 T5-Large 2.39 92.0 2.33 90.0 ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Seen",
"sec_num": null
},
{
"text": "We conduct a human evaluation study on WebNLG. Human raters are presented with predicted text, along with up to 3 ground truth references. They are asked to judge the prediction along two axes -(1) Accuracy -A binary rating to gauge whether the prediction conveys the same information as the gold references and (2) Naturalness -A five point scale between 1-3, with 3 indicating a perfectly fluent and grammatical response. Each prediction is rated by 3 raters. For accuracy, we take the majority vote and for naturalness we take the average. We evaluate 500 examples, equally split between the Seen and Unseen test sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Evaluation",
"sec_num": "6.4"
},
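A small sketch of the rating aggregation just described: three raters per prediction, binary accuracy aggregated by majority vote, naturalness (1-3) by the mean. The function and variable names and the sample ratings are illustrative, not from the paper.

```python
from statistics import mean

def aggregate_ratings(accuracy_votes, naturalness_scores):
    """accuracy_votes: three 0/1 judgements; naturalness_scores: three ratings on a 1-3 scale."""
    accuracy = 1 if sum(accuracy_votes) >= 2 else 0   # majority vote of 3 raters
    naturalness = mean(naturalness_scores)            # average rating
    return accuracy, naturalness

print(aggregate_ratings([1, 1, 0], [3, 2, 2]))  # -> (1, 2.3333333333333335)
```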
{
"text": "The evaluation is performed for T5-Large and the current state-of-the-art DualEnc model. Results are reported in Table 6 . On the Seen set, both models perform well, with T5 being rated better across both metrics. On the Unseen set, DualEnc shows a large drop of 24% in accuracy while the fluency degrades to just 1.99. Remarkably, T5 sees only a marginal drop, scoring 90% on accuracy and 2.33 on fluency. Table 7 shows some qualitative examples.",
"cite_spans": [],
"ref_spans": [
{
"start": 113,
"end": 120,
"text": "Table 6",
"ref_id": "TABREF8"
},
{
"start": 407,
"end": 414,
"text": "Table 7",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Human Evaluation",
"sec_num": "6.4"
},
{
"text": "Our experiments with different T5 variants of varying sizes shed some light on how model capacity impacts performance. The results suggest that it largely depends on the size and complexity of the dataset. For instance, MultiWoz exhibits the least variation in the structured data and is fairly large at 56k examples. Here, even the smallest model T5-Small, is on par with the larger models. WebNLG has only 18K examples and features roughly 200 distinct relations. On the seen test set, all models perform comparably. However, on the unseen test set we notice that performance increases with model size. In particular, there is a stark jump of 10 BLEU when going from T5-Small to T5-Base, implying that model capacity is critical for out-of-domain generalization. A similar trend is observed for ToTTo (Table 4) , with a noticeable improvement from Small to Base, followed by smaller improvements upto T5-3B.",
"cite_spans": [],
"ref_spans": [
{
"start": 803,
"end": 812,
"text": "(Table 4)",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Impact of model capacity",
"sec_num": "6.5"
},
{
"text": "In this study we evaluated pre-training in the form of T5 for the data-to-text task. We found that it leads to state-of-the-art results, while greatly improving robustness to out-of-domain inputs. In the future, we hope to design unsupervised pre-training objectives that are specifically tailored for the datato-text task. We also hope to extend this work to multiple languages, especially low resource ones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Initial experiments with T5 variants trained on a purely unsupervised objective did not show any difference in performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Multiwoz-a largescale multi-domain wizard-of-oz dataset for taskoriented dialogue modelling",
"authors": [
{
"first": "Pawe\u0142",
"middle": [],
"last": "Budzianowski",
"suffix": ""
},
{
"first": "Tsung-Hsien",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "Bo-Hsiang",
"middle": [],
"last": "Tseng",
"suffix": ""
},
{
"first": "I\u00f1igo",
"middle": [],
"last": "Casanueva",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Ultes",
"suffix": ""
},
{
"first": "Milica",
"middle": [],
"last": "Osman Ramadan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gasic",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "5016--5026",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pawe\u0142 Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, I\u00f1igo Casanueva, Stefan Ultes, Osman Ra- madan, and Milica Gasic. 2018. Multiwoz-a large- scale multi-domain wizard-of-oz dataset for task- oriented dialogue modelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5016-5026.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Semantically conditioned dialog response generation via hierarchical disentangled self-attention",
"authors": [
{
"first": "Wenhu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jianshu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Pengda",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Xifeng",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "William",
"middle": [
"Yang"
],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3696--3709",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenhu Chen, Jianshu Chen, Pengda Qin, Xifeng Yan, and William Yang Wang. 2019a. Semantically con- ditioned dialog response generation via hierarchical disentangled self-attention. In Proceedings of the 57th Annual Meeting of the Association for Compu- tational Linguistics, pages 3696-3709.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Few-shot nlg with pre-trained language model",
"authors": [
{
"first": "Zhiyu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Harini",
"middle": [],
"last": "Eavani",
"suffix": ""
},
{
"first": "Yinyin",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "William",
"middle": [
"Yang"
],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.09521"
]
},
"num": null,
"urls": [],
"raw_text": "Zhiyu Chen, Harini Eavani, Yinyin Liu, and William Yang Wang. 2019b. Few-shot nlg with pre-trained language model. arXiv preprint arXiv:1904.09521.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Handling divergent reference texts when evaluating table-to-text generation",
"authors": [
{
"first": "Bhuwan",
"middle": [],
"last": "Dhingra",
"suffix": ""
},
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4884--4895",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bhuwan Dhingra, Manaal Faruqui, Ankur Parikh, Ming-Wei Chang, Dipanjan Das, and William Co- hen. 2019. Handling divergent reference texts when evaluating table-to-text generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4884-4895.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Gtr-lstm: A triple encoder for sentence generation from rdf data",
"authors": [
{
"first": "Jianzhong",
"middle": [],
"last": "Bayu Distiawan",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1627--1637",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bayu Distiawan, Jianzhong Qi, Rui Zhang, and Wei Wang. 2018. Gtr-lstm: A triple encoder for sentence generation from rdf data. In Proceedings of the 56th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 1627-1637.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Neural datato-text generation: A comparison between pipeline and end-to-end architectures",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Thiago Castro Ferreira",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Van Der Lee",
"suffix": ""
},
{
"first": "Emiel",
"middle": [],
"last": "Emiel Van Miltenburg",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Krahmer",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "552--562",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thiago Castro Ferreira, Chris van der Lee, Emiel van Miltenburg, and Emiel Krahmer. 2019. Neural data- to-text generation: A comparison between pipeline and end-to-end architectures. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 552-562.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The webnlg challenge: Generating text from rdf data",
"authors": [
{
"first": "Claire",
"middle": [],
"last": "Gardent",
"suffix": ""
},
{
"first": "Anastasia",
"middle": [],
"last": "Shimorina",
"suffix": ""
},
{
"first": "Shashi",
"middle": [],
"last": "Narayan",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Perez-Beltrachini",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 10th International Conference on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "124--133",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. The webnlg challenge: Generating text from rdf data. In Pro- ceedings of the 10th International Conference on Natural Language Generation, pages 124-133.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Using natural-language processing to produce weather forecasts",
"authors": [
{
"first": "Eli",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Norbert",
"middle": [],
"last": "Driedger",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"I"
],
"last": "Kittredge",
"suffix": ""
}
],
"year": 1994,
"venue": "IEEE Expert",
"volume": "9",
"issue": "2",
"pages": "45--53",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eli Goldberg, Norbert Driedger, and Richard I Kit- tredge. 1994. Using natural-language processing to produce weather forecasts. IEEE Expert, 9(2):45- 53.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Design of a knowledge-based report generator",
"authors": [
{
"first": "Karen",
"middle": [],
"last": "Kukich",
"suffix": ""
}
],
"year": 1983,
"venue": "Proceedings of the 21st annual meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "145--150",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karen Kukich. 1983. Design of a knowledge-based re- port generator. In Proceedings of the 21st annual meeting on Association for Computational Linguis- tics, pages 145-150. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Meteor: An automatic metric for mt evaluation with high levels of correlation with human judgments",
"authors": [
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
},
{
"first": "Abhaya",
"middle": [],
"last": "Agarwal",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Second Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "228--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alon Lavie and Abhaya Agarwal. 2007. Meteor: An automatic metric for mt evaluation with high levels of correlation with human judgments. In Proceed- ings of the Second Workshop on Statistical Machine Translation, pages 228-231. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Rouge: A package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Text summarization branches out",
"volume": "",
"issue": "",
"pages": "74--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74-81.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Table-to-text generation by structure-aware seq2seq learning",
"authors": [
{
"first": "Tianyu",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Kexiang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Sha",
"suffix": ""
},
{
"first": "Baobao",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Zhifang",
"middle": [],
"last": "Sui",
"suffix": ""
}
],
"year": 2018,
"venue": "Thirty-Second AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianyu Liu, Kexiang Wang, Lei Sha, Baobao Chang, and Zhifang Sui. 2018. Table-to-text generation by structure-aware seq2seq learning. In Thirty-Second AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Deep graph convolutional encoders for structured data to text generation",
"authors": [
{
"first": "Diego",
"middle": [],
"last": "Marcheggiani",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Perez-Beltrachini",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 11th International Conference on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diego Marcheggiani and Laura Perez-Beltrachini. 2018. Deep graph convolutional encoders for struc- tured data to text generation. In Proceedings of the 11th International Conference on Natural Language Generation, pages 1-9.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Text generation: using discourse strategies and focus constraints to generate natural language text",
"authors": [
{
"first": "",
"middle": [],
"last": "Kathleen R Mckeown",
"suffix": ""
}
],
"year": 1985,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kathleen R McKeown. 1985. Text generation: using discourse strategies and focus constraints to generate natural language text.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Step-by-step: Separating planning from realization in neural data-to-text generation",
"authors": [
{
"first": "Amit",
"middle": [],
"last": "Moryossef",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2267--2277",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amit Moryossef, Yoav Goldberg, and Ido Dagan. 2019. Step-by-step: Separating planning from realization in neural data-to-text generation. In Proceedings of the 2019 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2267-2277.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting on association for computational linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting on association for compu- tational linguistics, pages 311-318. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Totto: A controlled table-to-text generation dataset",
"authors": [
{
"first": "P",
"middle": [],
"last": "Ankur",
"suffix": ""
},
{
"first": "Xuezhi",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Manaal",
"middle": [],
"last": "Gehrmann",
"suffix": ""
},
{
"first": "Bhuwan",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Diyi",
"middle": [],
"last": "Dhingra",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Das",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.14373"
]
},
"num": null,
"urls": [],
"raw_text": "Ankur P Parikh, Xuezhi Wang, Sebastian Gehrmann, Manaal Faruqui, Bhuwan Dhingra, Diyi Yang, and Dipanjan Das. 2020. Totto: A controlled table-to-text generation dataset. arXiv preprint arXiv:2004.14373.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Few-shot natural language generation for task-oriented dialog",
"authors": [
{
"first": "Baolin",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Chenguang",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Chunyuan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xiujun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jinchao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2002.12328"
]
},
"num": null,
"urls": [],
"raw_text": "Baolin Peng, Chenguang Zhu, Chunyuan Li, Xi- ujun Li, Jinchao Li, Michael Zeng, and Jian- feng Gao. 2020. Few-shot natural language gen- eration for task-oriented dialog. arXiv preprint arXiv:2002.12328.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Data-to-text generation with content selection and planning",
"authors": [
{
"first": "Ratish",
"middle": [],
"last": "Puduppully",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "6908--6915",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ratish Puduppully, Li Dong, and Mirella Lapata. 2019. Data-to-text generation with content selection and planning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6908-6915.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "OpenAI Blog",
"volume": "",
"issue": "8",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8).",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Exploring the limits of transfer learning with a unified text-to-text transformer",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sharan",
"middle": [],
"last": "Narang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Matena",
"suffix": ""
},
{
"first": "Yanqi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Peter J",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.10683"
]
},
"num": null,
"urls": [],
"raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text trans- former. arXiv preprint arXiv:1910.10683.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Building natural language generation systems",
"authors": [
{
"first": "Ehud",
"middle": [],
"last": "Reiter",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Dale",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ehud Reiter and Robert Dale. 2000. Building natural language generation systems. Cambridge university press.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Leveraging pre-trained checkpoints for sequence generation tasks",
"authors": [
{
"first": "Sascha",
"middle": [],
"last": "Rothe",
"suffix": ""
},
{
"first": "Shashi",
"middle": [],
"last": "Narayan",
"suffix": ""
},
{
"first": "Aliaksei",
"middle": [],
"last": "Severyn",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.12461"
]
},
"num": null,
"urls": [],
"raw_text": "Sascha Rothe, Shashi Narayan, and Aliaksei Sev- eryn. 2019. Leveraging pre-trained checkpoints for sequence generation tasks. arXiv preprint arXiv:1907.12461.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Get to the point: Summarization with pointer-generator networks",
"authors": [
{
"first": "Abigail",
"middle": [],
"last": "See",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Liu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1073--1083",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1099"
]
},
"num": null,
"urls": [],
"raw_text": "Abigail See, Peter J. Liu, and Christopher D. Man- ning. 2017a. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers), pages 1073-1083, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Get to the point: Summarization with pointer-generator networks",
"authors": [
{
"first": "Abigail",
"middle": [],
"last": "See",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1073--1083",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abigail See, Peter J Liu, and Christopher D Man- ning. 2017b. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 1073-1083.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Sumtime-mousam: Configurable marine weather forecast generator",
"authors": [
{
"first": "Somayajulu",
"middle": [],
"last": "Sripada",
"suffix": ""
},
{
"first": "Ehud",
"middle": [],
"last": "Reiter",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Davy",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Somayajulu Sripada, Ehud Reiter, and Ian Davy. Sumtime-mousam: Configurable marine weather forecast generator.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Semantically conditioned lstm-based natural language generation for spoken dialogue systems",
"authors": [
{
"first": "Milica",
"middle": [],
"last": "Tsung-Hsien Wen",
"suffix": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Gasic",
"suffix": ""
},
{
"first": "Pei-Hao",
"middle": [],
"last": "Mrksic",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Vandyke",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1508.01745"
]
},
"num": null,
"urls": [],
"raw_text": "Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Pei- Hao Su, David Vandyke, and Steve Young. 2015. Se- mantically conditioned lstm-based natural language generation for spoken dialogue systems. arXiv preprint arXiv:1508.01745.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Bridging the structural gap between encoding and decoding for data-to-text generation",
"authors": [
{
"first": "Chao",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Marilyn",
"middle": [],
"last": "Walker",
"suffix": ""
},
{
"first": "Snigdha",
"middle": [],
"last": "Chaturvedi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chao Zhao, Marilyn Walker, and Snigdha Chaturvedi. 2020. Bridging the structural gap between encod- ing and decoding for data-to-text generation. In Pro- ceedings of the 58th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Examples from each dataset -The first row is WebNLG, second is Multiwoz and third is ToTTo. Each row illustrates the structured data (left), its linearized representation (top) and the target text(bottom)",
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"text": "ToTTo (Parikh et al., 2020) consists of Wikipedia tables paired with natural language descriptions. The input is a set of cells from a table, along with metadata such as the title of the table.",
"uris": null,
"type_str": "figure"
},
"TABREF0": {
"num": null,
"html": null,
"type_str": "table",
"text": "Dataset sizes.",
"content": "<table/>"
},
"TABREF1": {
"num": null,
"html": null,
"type_str": "table",
"text": "45.1 54.5 33.3 0.37 0.41 0.33 GTR-LSTM : 37.1 54.0 29.2 0.31 0.37 0.28 Pipe-Trans 51.7 56.4 38.9 0.32 0.41 0.21 Step",
"content": "<table><tr><td>Model</td><td>O</td><td>BLEU S</td><td>U</td><td>O</td><td>METEOR S U</td></tr><tr><td>Melbourne :</td><td/><td/><td/><td/><td/></tr><tr><td>:</td><td colspan=\"5\">47.4 53.3 34.4 0.39 0.44 0.34</td></tr><tr><td>DualEnc</td><td colspan=\"5\">51.4 63.4 36.7 0.41 0.45 0.37</td></tr><tr><td>T5-Small</td><td colspan=\"5\">52.0 62.6 38.8 0.41 0.45 0.37</td></tr><tr><td>T5-Base</td><td colspan=\"5\">55.2 64.7 49.4 0.43 0.46 0.41</td></tr><tr><td>T5-Large</td><td colspan=\"5\">57.1 63.9 52.8 0.44 0.46 0.41</td></tr><tr><td>T5-3B</td><td colspan=\"5\">54.0 62.8 52.0 0.43 0.45 0.42</td></tr></table>"
},
"TABREF2": {
"num": null,
"html": null,
"type_str": "table",
"text": "Results on WebNLG. O stands for Overall test set, S for Seen and U for Unseen. Pipe-Trans is Pipeline-Transformer.",
"content": "<table/>"
},
"TABREF3": {
"num": null,
"html": null,
"type_str": "table",
"text": ", for the overall test set as well as the Seen and Unseen splits. T5-Large performs the best across BLEU as well as METEOR. It improves over DualEnc by 4.3 BLEU on the overall test set. It also displays excellent generalization to new domains and relations, with a 14 BLEU improvement on the unseen test set. The results indicate that with pre-training, end-to-end neural models can surpass sophisticated pipelined approaches while being much more robust to domain shift.",
"content": "<table><tr><td>Model</td><td colspan=\"4\">Overall BLEU PAR BLEU PAR Non-Overlap</td></tr><tr><td>PGen</td><td>41.6</td><td>51.6</td><td>32.2</td><td>45.2</td></tr><tr><td>BERT-to-BERT</td><td>44.0</td><td>52.6</td><td>34.8</td><td>46.7</td></tr><tr><td>T5-3B</td><td>49.5</td><td>58.4</td><td>41.4</td><td>54.2</td></tr></table>"
},
"TABREF4": {
"num": null,
"html": null,
"type_str": "table",
"text": "Results on the ToTTo test set. PAR is short for PARENT. PGen stands for Pointer Generetator(See et al., 2017a).",
"content": "<table><tr><td>Model</td><td colspan=\"3\">Overall BLEU PAR BLEU PAR Non-Overlap</td></tr><tr><td colspan=\"2\">BERT-to-BERT 44.0</td><td>52.6 34.8</td><td>46.7</td></tr><tr><td>T5-Small</td><td>45.7</td><td>55.9 37.7</td><td>51.6</td></tr><tr><td>T5-Base</td><td>47.7</td><td>57.1 39.6</td><td>52.6</td></tr><tr><td>T5-Large</td><td>48.1</td><td>57.3 39.8</td><td>52.8</td></tr><tr><td>T5-3B</td><td>48.4</td><td>57.8 40.4</td><td>53.3</td></tr></table>"
},
"TABREF5": {
"num": null,
"html": null,
"type_str": "table",
"text": "",
"content": "<table/>"
},
"TABREF7": {
"num": null,
"html": null,
"type_str": "table",
"text": "",
"content": "<table/>"
},
"TABREF8": {
"num": null,
"html": null,
"type_str": "table",
"text": "Human evaluation on WebNLG. Nat is short for Naturalness and Acc is short for Accuracy.",
"content": "<table/>"
},
"TABREF9": {
"num": null,
"html": null,
"type_str": "table",
"text": "Input <aidastella, christening date, 2013-03-16> DualEnc Aidastella was inaugurated on March 16 , 2013 . T5 Aidastella was christened on March 16 , 2013 . Input <Andra (singer). genre , rhythm and blues> DualEnc Andra singer is rhythm and blues . T5 Andra is a singer who plays rhythm and blues . Input <Aaron deer, genre, indie rock><Aaron Deer, origin, Indiana><Aaron Deer, origin, United States> DualEnc Aaron Deer , indie rock , has a origin of Indiana and is located in United States . T5 Aaron Deer is an American from Indiana who is part of the genre of indie rock . Input <Alvah Sabin, birth date, 1793-10-23><Alvah Sabin, office (worked at , worked as), secretary of state of Vermont> DualEnc Alvah Sabin was born on October 23 , 1793 and is in secretary of state of Vermont . T5 Alvah Sabin was born on 23 October 1793 and served as secretary of state of Vermont .",
"content": "<table/>"
},
"TABREF10": {
"num": null,
"html": null,
"type_str": "table",
"text": "Model predictions on the WebNLG Unseen set. DualEnc struggles to verbalize predicates and produces ungrammatical output. T5 output is accurate and more grammatical.",
"content": "<table/>"
}
}
}
}