{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:28:58.761955Z" }, "title": "Schema-Guided Natural Language Generation", "authors": [ { "first": "Yuheng", "middle": [], "last": "Du", "suffix": "", "affiliation": {}, "email": "yuhendu@amazon.com" }, { "first": "Shereen", "middle": [], "last": "Oraby", "suffix": "", "affiliation": {}, "email": "orabys@amazon.com" }, { "first": "Vittorio", "middle": [], "last": "Perera", "suffix": "", "affiliation": {}, "email": "pererv@amazon.com" }, { "first": "Minmin", "middle": [], "last": "Shen", "suffix": "", "affiliation": {}, "email": "shenm@amazon.com" }, { "first": "Anjali", "middle": [], "last": "Narayan-Chen", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Tagyoung", "middle": [], "last": "Chung", "suffix": "", "affiliation": {}, "email": "tagyoung@amazon.com" }, { "first": "Anu", "middle": [], "last": "Venkatesh", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Dilek", "middle": [], "last": "Hakkani-Tur", "suffix": "", "affiliation": {}, "email": "hakkanit@amazon.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Neural network based approaches to data-totext natural language generation (NLG) have gained popularity in recent years, with the goal of generating a natural language prompt that accurately realizes an input meaning representation. To facilitate the training of neural network models, researchers created large datasets of paired utterances and their meaning representations. However, the creation of such datasets is an arduous task and they mostly consist of simple meaning representations composed of slot and value tokens to be realized. These representations do not include any contextual information that an NLG system can use when trying to generalize, such as domain information and descriptions of slots and values. In this paper, we present the novel task of Schema-Guided Natural Language Generation (SG-NLG). 
Here, the goal is still to generate a natural language prompt, but in SG-NLG, the input MRs are paired with rich schemata providing contextual information. To generate a dataset for SG-NLG we re-purpose an existing dataset for another task: dialog state tracking, which includes a large and rich schema spanning multiple different attributes, including information about the domain, user intent, and slot descriptions. We train different state-of-the-art models for neural natural language generation on this dataset and show that in many cases, including rich schema information allows our models to produce higher quality outputs both in terms of semantics and diversity. We also conduct experiments comparing model performance on seen versus unseen domains, and present a human evaluation demonstrating high ratings for overall output quality.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Neural network based approaches to data-to-text natural language generation (NLG) have gained popularity in recent years, with the goal of generating a natural language prompt that accurately realizes an input meaning representation. To facilitate the training of neural network models, researchers created large datasets of paired utterances and their meaning representations. However, the creation of such datasets is an arduous task and they mostly consist of simple meaning representations composed of slot and value tokens to be realized. These representations do not include any contextual information that an NLG system can use when trying to generalize, such as domain information and descriptions of slots and values. In this paper, we present the novel task of Schema-Guided Natural Language Generation (SG-NLG). Here, the goal is still to generate a natural language prompt, but in SG-NLG, the input MRs are paired with rich schemata providing contextual information. 
To generate a dataset for SG-NLG we re-purpose an existing dataset for another task: dialog state tracking, which includes a large and rich schema spanning multiple different attributes, including information about the domain, user intent, and slot descriptions. We train different state-of-the-art models for neural natural language generation on this dataset and show that in many cases, including rich schema information allows our models to produce higher quality outputs both in terms of semantics and diversity. We also conduct experiments comparing model performance on seen versus unseen domains, and present a human evaluation demonstrating high ratings for overall output quality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Much of the recent work on Neural Natural Language Generation (NNLG) focuses on generating a natural language string given some input content, primarily in the form of a structured Meaning Representation (MR) (Moryossef et al., 2019; Wiseman et al., 2017; Gong et al., 2019; Du\u0161ek et al., 2018; Colin et al., 2016; Wen et al., 2016; Dusek and Jurc\u00edcek, 2016; Du\u0161ek and Jurcicek, 2015; Wen et al., 2015) . (* Authors contributed equally and are listed alphabetically.) Popular datasets used for MR-to-text generation are confined to limited domains, e.g., restaurants or product information, and usually consist of simple tuples of slots and values describing the content to be realized, failing to offer any information about domains or slots that might be useful to generation models (Novikova et al., 2017b; Gardent et al., 2017; Wen et al., 2015) . (Table 1 shows an example MR with the reference output \"the satellite eurus 65 is a laptop designed for home use with 4 gb of memory and a medium sized hard drive\".) Only having simple and limited information within these MRs has several shortcomings. Model outputs are either very generic or generators have to be trained for a narrow domain and cannot be used for new domains. 
Thus, some recent work has focused on different methods to improve naturalness (Zhu et al., 2019) and promote domain transfer (Tran and Nguyen, 2018; Wen et al., 2016) .", "cite_spans": [ { "start": 270, "end": 294, "text": "(Moryossef et al., 2019;", "ref_id": "BIBREF20" }, { "start": 295, "end": 316, "text": "Wiseman et al., 2017;", "ref_id": null }, { "start": 317, "end": 335, "text": "Gong et al., 2019;", "ref_id": "BIBREF13" }, { "start": 336, "end": 355, "text": "Du\u0161ek et al., 2018;", "ref_id": "BIBREF11" }, { "start": 356, "end": 375, "text": "Colin et al., 2016;", "ref_id": "BIBREF6" }, { "start": 376, "end": 393, "text": "Wen et al., 2016;", "ref_id": "BIBREF30" }, { "start": 394, "end": 419, "text": "Dusek and Jurc\u00edcek, 2016;", "ref_id": "BIBREF10" }, { "start": 420, "end": 445, "text": "Du\u0161ek and Jurcicek, 2015;", "ref_id": "BIBREF9" }, { "start": 446, "end": 463, "text": "Wen et al., 2015)", "ref_id": "BIBREF31" }, { "start": 783, "end": 807, "text": "(Novikova et al., 2017b;", "ref_id": "BIBREF22" }, { "start": 808, "end": 829, "text": "Gardent et al., 2017;", "ref_id": "BIBREF12" }, { "start": 830, "end": 847, "text": "Wen et al., 2015)", "ref_id": "BIBREF31" }, { "start": 1249, "end": 1267, "text": "(Zhu et al., 2019)", "ref_id": "BIBREF33" }, { "start": 1296, "end": 1319, "text": "(Tran and Nguyen, 2018;", "ref_id": "BIBREF28" }, { "start": 1320, "end": 1337, "text": "Wen et al., 2016)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "MRs are not unique to the problem of language generation: tasks such as dialog state tracking (Rastogi et al., 2019) , policy learning (Chen et al., 2018) , and task completion (Li et al., 2017 ) also require the use of an MR to track context and state information relevant to the task. 
MRs from these more dialog-oriented tasks are often referred to as \"schemata.\"", "cite_spans": [ { "start": 94, "end": 116, "text": "(Rastogi et al., 2019)", "ref_id": "BIBREF26" }, { "start": 135, "end": 154, "text": "(Chen et al., 2018)", "ref_id": "BIBREF5" }, { "start": 177, "end": 193, "text": "(Li et al., 2017", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While dialog state tracking schemata do not necessarily include descriptions (and generally only include names of intents, slots, and values like traditional MRs), recent work has suggested that the use of descriptions may help with different language tasks, such as zero-shot and transfer learning (Bapna et al., 2017) . The most recent Dialog System Technology Challenge (DSTC8) (Rastogi et al., 2019) provides such descriptions and introduces the idea of schema-guided dialog state tracking. Table 2 shows a sample schema from DSTC8. It is much richer and more contextually informative than traditional MRs. Each turn is annotated with information about the current speaker (e.g., SYSTEM, USER), dialog act (e.g., REQUEST), slots (e.g., CUISINE), values (e.g., Mexican and Italian), as well as the surface string utterance. 
When comparing this schema in Table 2 to the MRs from Table 1, we can see that the only part of the schema reflected in the MRs is the ACTIONS section, which explicitly describes intents, slots, and values.", "cite_spans": [ { "start": 299, "end": 319, "text": "(Bapna et al., 2017)", "ref_id": "BIBREF1" }, { "start": 381, "end": 403, "text": "(Rastogi et al., 2019)", "ref_id": "BIBREF26" } ], "ref_spans": [ { "start": 495, "end": 502, "text": "Table 2", "ref_id": null }, { "start": 859, "end": 866, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "ACT: REQUEST SLOT: CUISINE VALUES: Mexican, Italian SLOT DESCRIPTIONS -CUISINE: \"Cuisine of food served in the restaurant\" SLOT TYPE: CUISINE: is categorical=true INTENT -FindRestaurants INTENT DESCRIPTION: \"Find a restaurant of a particular cuisine in a city\" SERVICE -Restaurants 1 SERVICE DESCRIPTION: \"A leading provider for restaurant search and reservations\" SPEAKER -System UTTERANCE -\"Is there a specific cuisine type you enjoy, such as Mexican, Italian, or something else?\" Table 2 : Sample schema from DSTC8. \"Actions\" describe a traditional MR; blue fields are newly introduced in the schema.", "cite_spans": [], "ref_spans": [ { "start": 483, "end": 490, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "ACTIONS -", "sec_num": null }, { "text": "To our knowledge, no previous work on NNLG has attempted to generate natural language strings from schemata using this richer and more informative data. In this paper, we propose the new task of Schema-guided Natural Language Generation, where we take a turn-level schema as input and generate a natural language string describing the required content, guided by the context information provided in the schema. 
Following previous work on schema-guided language tasks, we hypothesize that descriptions in the schema will lead to better generated outputs and the possibility of zero-shot learning (Bapna et al., 2017) . For example, to realize the MR REQUEST(time), domain-specific descriptions of common slots like time can help us realize better outputs, such as \"What time do you want to reserve your dinner?\" in the restaurant domain, and \"What time do you want to see your movie?\" for movies. Similarly, we note that for dialog system developers, writing domain-specific templates for all scenarios is clearly not scalable, but providing a few domain-specific descriptions for slots/intents is much more feasible. We focus on system-side turns from the DSTC8 dataset and, to allow our models to better generalize, we generate natural language templates, i.e., delexicalized surface forms, such as \"Is there a specific cuisine type you enjoy, such as $cuisine1, $cuisine2, or something else?\" from the example schema in Table 2 . We chose to focus on the system-side turn as currently, when building a dialog system, developers need to spend a large amount of time hand-writing prompts for each possible situation. We believe that enabling a model to automatically generate these prompts would streamline the development process and make it much faster.", "cite_spans": [ { "start": 595, "end": 615, "text": "(Bapna et al., 2017)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 1422, "end": 1429, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "ACTIONS -", "sec_num": null }, { "text": "Our contributions in this paper are three-fold: (1) we introduce a novel task and repurpose a dataset for schema-guided NLG, (2) we present our methods to include schema descriptions in state-of-the-art NNLG models, and (3) we demonstrate how using a schema frequently leads to better quality outputs than traditional MRs. 
We experiment with three different NNLG models (Sequence-to-Sequence, Conditional Variational AutoEncoders, and GPT-2 as a Pretrained Language Model). We show that the rich schema information frequently helps improve model performance on similarity-to-reference and semantic accuracy measures across domains, and that it promotes more diverse outputs with larger vocabularies. We also present a human evaluation demonstrating the high quality of our outputs in terms of naturalness and semantic correctness.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ACTIONS -", "sec_num": null }, { "text": "To create a rich dataset for NNLG, we repurpose the dataset used for the Schema-Guided State Tracking track of DSTC8 (Rastogi et al., 2019) . 1 We preprocess the data to create our Schema-Guided Natural Language (SG-NLG) dataset for training and evaluating our NNLG models. 2 Since we are focused on system turns, we first drop all the user turns. The second step in the preprocessing pipeline is to delexicalize each of the system utterances. The original data is annotated with the spans of the slots mentioned in each turn. We replace these mentions with the slot type plus an increasing index prefixed by the $ sign, e.g., $cuisine 1. For example, the utterance \"Is there a specific cuisine type you enjoy, such as Mexican, Italian, or something else?\" becomes \"Is there a specific cuisine type you enjoy, such as $cuisine 1, $cuisine 2, or something else?\"
Therefore, an MR that in the original DSTC8 dataset is represented as REQUEST(cuisine = [Mexican, Italian]) becomes REQUEST(cuisine=$cuisine 1), REQUEST(cuisine=$cuisine 2) (see Table 3 ). Note that the MR has been delexicalized in the same fashion as the utterance. Similarly, for MRs that do not have a value, e.g., REQUEST(city), we introduced the null value resulting in REQUEST(city=null). We also use the null value to replace the slot in dialog acts that do not require one, e.g., BYE() becomes BYE(null=null) in order to ensure that each MR is converted to a triplet.", "cite_spans": [], "ref_spans": [ { "start": 336, "end": 343, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Data", "sec_num": "2" }, { "text": "Once we generate templates and MR pairs, we add information about the service. In DSTC8, there are multiple services within a single domain, e.g., services travel 1 and travel 2 are both part of the travel domain, but have distinct schema. 3 DSTC8 annotates each turn with the corresponding service, so we reuse this information. Our schema also includes user intent. 4 Since only user turns are annotated with intent information, we use the immediately preceding user turn's intent annotation if the system turn and the user turn share the same service. If the service is not the same, we drop the intent information, i.e., we use an empty string as the intent (this only happens in 3.3% of cases).", "cite_spans": [ { "start": 368, "end": 369, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "2" }, { "text": "Next, we add information extracted from the schema file of the original data. This includes service description, slot descriptions (one description for each slot in the MR), and intent descriptions. These descriptions are very short English sentences (on average 9.8, 5.9 and 8.3 words for services, slots and intents). 
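The delexicalization and triplet-construction steps described above can be sketched in a few lines. This is an illustrative sketch under our own naming conventions (the function names and the input layout for spans and acts are assumptions, not the authors' released preprocessing code):

```python
def delexicalize(utterance, slot_spans):
    """Replace annotated slot mentions with $slot<index> placeholders.

    slot_spans: list of (start, end, slot_type) character spans, as
    annotated in DSTC8 (layout assumed for illustration).
    """
    counters, pieces, prev = {}, [], 0
    for start, end, slot in sorted(slot_spans):
        counters[slot] = counters.get(slot, 0) + 1
        pieces.append(utterance[prev:start])
        pieces.append(f"${slot}{counters[slot]}")
        prev = end
    pieces.append(utterance[prev:])
    return "".join(pieces)

def to_triplets(dialog_acts):
    """Flatten each dialog act into (act, slot, value) triplets, filling nulls."""
    triplets = []
    for act, slots in dialog_acts:  # slots: dict of slot -> list of values
        if not slots:
            triplets.append((act, "null", "null"))  # e.g. BYE() -> BYE(null=null)
            continue
        for slot, values in slots.items():
            if not values:
                triplets.append((act, slot, "null"))  # e.g. REQUEST(city=null)
            else:
                for i, _ in enumerate(values, 1):
                    triplets.append((act, slot, f"${slot}{i}"))
    return triplets
```

On the running example, `to_triplets([("REQUEST", {"cuisine": ["Mexican", "Italian"]})])` yields the two delexicalized REQUEST(cuisine=...) triplets described in the text.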
Lastly, we add to each triplet a sentence describing, in plain English, the meaning of the MR. These descriptions are not directly available in DSTC8 but are procedurally generated by a set of rules. 5 For example, the MR CONFIRM(city=$city 1) is \"Please confirm that the [city] is [$city 1].\" The intuition behind these natural language MRs is to provide a more semantically informative representation of the dialog acts, slots and values. Table 4 shows the SG-NLG dataset statistics. In summary, SG-NLG is composed of nearly 4K MRs and over 140K templates. On average, every MR has 58 templates associated with it, but there is a large variance. There is one MR associated with over 1.7K templates (CONFIRM(restaurant name, city, time, party size, date)) and many MRs with only one template.", "cite_spans": [], "ref_spans": [ { "start": 761, "end": 768, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Data", "sec_num": "2" }, { "text": "ACTIONS -ACT: REQUEST SLOT: CUISINE VALUES: Mexican, Italian UTTERANCE -\"Is there a specific cuisine type you enjoy, such as Mexican, Italian, or something else?\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "DSTC8 (ORIGINAL)", "sec_num": null }, { "text": "SG-NLG (PRE-PROCESSED) MR=[REQUEST(cuisine=$cuisine1),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "DSTC8 (ORIGINAL)", "sec_num": null }, { "text": "REQUEST(cuisine=$cuisine2)] UTTERANCE -\"Is there a specific cuisine type you enjoy, such as $cuisine1, $cuisine2, or something else?\" Table 4 : SG-NLG dataset statistics. 
5 We have a single rule for each act type; 10 in total.", "cite_spans": [ { "start": 171, "end": 172, "text": "5", "ref_id": null } ], "ref_spans": [ { "start": 134, "end": 141, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "DSTC8 (ORIGINAL)", "sec_num": null }, { "text": "3 Models", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "DSTC8 (ORIGINAL)", "sec_num": null }, { "text": "We categorize the features from schemata into two different types. The first type is symbolic features. Symbolic features are encoded using a word embedding layer. They typically consist of single tokens, e.g., service names or dialog acts, and frequently resemble variable names (e.g., restaurant and restaurant name). The second type of features is natural language features. These features are typically sentences, e.g., service/slot descriptions or the natural language MR, that we encode using BERT (Devlin et al., 2018) to derive a single semantic embedding tensor.", "cite_spans": [ { "start": 504, "end": 525, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Feature Encoding", "sec_num": "3.1" }, { "text": "To represent the full schema, we adopt a flat-encoding strategy. The first part of each schema is the MR, which we define as a sequence of dialog act, slot, and value tuples. At each timestep, we encode a three-part sequence: (1) a new act, slot, and value tuple from the MR, (2) the embeddings of all schema-level features (i.e., services, intents, and their descriptions), and (3) the embedding of the current slot description (see Figure 1 ). Finally, we append the encoded natural language MR. ", "cite_spans": [], "ref_spans": [ { "start": 433, "end": 441, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Feature Encoding", "sec_num": "3.1" }, { "text": "Our first model is a Seq2Seq model with attention, copy, and constrained decoding (see the full model diagram in the appendix). 
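As a structural sketch, the flat-encoding strategy of Section 3.1 can be illustrated as follows. The embedding values here are stand-in strings and the function signature is our own assumption; the point is the per-timestep ordering of tuple, schema-level, and slot-description features, with the natural language MR appended at the end:

```python
def flat_encode(mr, schema_embs, slot_desc_embs, nl_mr_emb):
    """Flatten one schema into an encoder input sequence.

    mr: list of (act, slot, value) tuples (the MR).
    schema_embs: schema-level feature embeddings (service, intent, descriptions).
    slot_desc_embs: dict mapping each slot to its description embedding.
    nl_mr_emb: embedding of the natural language MR.
    """
    seq = []
    for act, slot, value in mr:
        seq.append((act, slot, value))    # (1) symbolic tuple -> word-embedding lookup
        seq.extend(schema_embs)           # (2) schema-level feature embeddings
        seq.append(slot_desc_embs[slot])  # (3) embedding of the current slot description
    seq.append(nl_mr_emb)                 # finally, the encoded natural language MR
    return seq
```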
We implement the attention from Luong et al. 2015:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequence-to-Sequence", "sec_num": "3.2" }, { "text": "a_t = softmax(align(h_t, s_t))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequence-to-Sequence", "sec_num": "3.2" }, { "text": "where align is a function that computes the alignment score of the encoder hidden state h_t and the decoder hidden state s_t. The goal of this layer is to attend to the more salient input features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequence-to-Sequence", "sec_num": "3.2" }, { "text": "The copy mechanism we add is based on pointer-generator networks (See et al., 2017) . At each decoding step t we compute a probability p_gen:", "cite_spans": [ { "start": 64, "end": 82, "text": "(See et al., 2017)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Sequence-to-Sequence", "sec_num": "3.2" }, { "text": "p_gen = \u03c3(w_h^T h*_t + w_s^T s_t + w_x^T x_t + b_ptr)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequence-to-Sequence", "sec_num": "3.2" }, { "text": "where w_h, w_s, and w_x are learnable weight vectors; h*_t is a context vector computed by combining the encoder hidden states and the attention weights, s_t is the decoder hidden state, x_t the decoder input, and b_ptr is a bias term. The probability p_gen is then used to determine the next word w generated:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequence-to-Sequence", "sec_num": "3.2" }, { "text": "P(w) = p_gen P_vocab(w) + (1 \u2212 p_gen) \u03a3_{i:w_i=w} a_i^t", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequence-to-Sequence", "sec_num": "3.2" }, { "text": "Thus p_gen behaves like a switch to decide whether to generate from the vocab or copy from the input. 
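The mixing equation above can be illustrated with a minimal list-based sketch. This is not the pointer-generator implementation itself (which operates on batched tensors); it only shows how p_gen blends the vocabulary distribution with attention mass copied onto source-token ids:

```python
def copy_distribution(p_vocab, attn, src_ids, p_gen):
    """P(w) = p_gen * P_vocab(w) + (1 - p_gen) * sum of attention over
    source positions i whose token id equals w.

    p_vocab: vocabulary distribution (list of floats summing to 1).
    attn: attention weights over source positions (sums to 1).
    src_ids: vocabulary id of each source token.
    p_gen: scalar generation probability from the sigmoid switch.
    """
    p = [p_gen * pv for pv in p_vocab]
    for a, idx in zip(attn, src_ids):
        p[idx] += (1.0 - p_gen) * a  # copy mass onto the source token's id
    return p
```

Because both input distributions sum to one, the blended output is again a valid distribution, which is what lets rare placeholders like $cuisine 1 receive probability even when they are absent from the vocabulary softmax.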
The goal of the copy mechanism is to enable the generation of special symbols such as $cuisine 1 that are specific to the service.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequence-to-Sequence", "sec_num": "3.2" }, { "text": "The Conditional Variational Auto-Encoder (CVAE) (Hu et al., 2017) is an extension of the VAE models, where an additional vector c is attached to the last hidden state of the encoder z as the initial hidden state of the decoder. The vector c is used to control the semantic meaning of the output to align with the desired MR. We use the encoded feature vector described in Section 3.1 as c. The model objective is the same as VAE, which is the sum of reconstruction loss and Kullback-Leibler divergence loss. At training time, z is the encoded input sentence. At prediction time, z is sampled from a Gaussian prior learned at training time. We also adapt the attention mechanism for CVAE by adding an additional matrix W_e to compute the alignment score,", "cite_spans": [ { "start": 48, "end": 65, "text": "(Hu et al., 2017)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Conditional Variational Auto-Encoder", "sec_num": "3.3" }, { "text": "align(h_t, s_t) = W(W_e * h_t + s_t)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional Variational Auto-Encoder", "sec_num": "3.3" }, { "text": "where s_t is the decoder hidden state. For Seq2Seq/CVAE, we use constrained decoding to prune out candidate outputs with slot repetitions. We use a beam to keep track of slots that have already been generated and set the probability of a new candidate node to zero if slots are repeated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional Variational Auto-Encoder", "sec_num": "3.3" }, { "text": "We also experiment with a pretrained language model, specifically GPT-2 (Radford et al., 2019) . 
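The slot-repetition pruning used in the constrained decoding for Seq2Seq/CVAE (Section 3.3) can be sketched as follows. The placeholder regex and function names are our own assumptions for illustration; the idea is simply to zero out any beam candidate that realizes the same slot placeholder twice:

```python
import re

# Matches delexicalized placeholders such as "$cuisine 1" or "$city1"
# (pattern assumed for illustration).
SLOT_PATTERN = re.compile(r"\$[a-z_]+\s?\d+")

def allowed(candidate_tokens):
    """Return False if the candidate hypothesis repeats a slot placeholder."""
    slots = SLOT_PATTERN.findall(" ".join(candidate_tokens))
    return len(slots) == len(set(slots))

def prune(beam):
    """Zero the probability of candidates with repeated slots, removing them
    from further expansion."""
    return [(tokens, prob if allowed(tokens) else 0.0) for tokens, prob in beam]
```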
6 Since GPT-2 is trained on purely natural language strings, we first combine the symbolic and natural language features into flat natural language strings, similar to previous work by Budzianowski and Vuli\u0107 (2019) . We fine-tune the GPT-2 model using these natural language inputs with the target template. [Table 5, Schema 1: ACTIONS (MR): INFORM(price-per-night= $price-per-night1), NOTIFY-SUCCESS(null=null); Slot Desc: price-per-night: \"price per night for the stay\"; Service: hotels-4; Service Desc: \"Accommodation searching and booking portal\"; Intent: ReserveHotel; Intent Desc: \"Reserve rooms at a selected place for given dates.\"; Natural Language MR: the [price per night] is [$price-per-night1]. the request succeeded.; Ref: $price-per-night1 a night; Seq2Seq: your reservation is booked and the total cost is $price-per-night1 .; CVAE: your reservation has been made . the total cost is $price-per-night1 per night .]", "cite_spans": [ { "start": 72, "end": 94, "text": "(Radford et al., 2019)", "ref_id": "BIBREF25" }, { "start": 97, "end": 98, "text": "6", "ref_id": null }, { "start": 282, "end": 311, "text": "Budzianowski and Vuli\u0107 (2019)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Pretrained Language Model: GPT-2", "sec_num": "3.4" }, { "text": "GPT2: your reservation was successful! the cost of the room is $price-per-night1 per night. [Table 5, Schema 2: ACTIONS (MR): OFFER(movie-name= $movie-name1), OFFER(movie-name= $movie-name2), OFFER(movie-name= $movie-name3), INFORM(count=$count1); Slot Desc: movie-name: \"name of the movie\", count: \"the number of items that satisfy the user's request\"; Service: media-2; Service Desc: \"The widest selection and lowest prices for movie rentals\"; Intent: FindMovies; Intent Desc: \"Find movies to watch by genre and, optionally, director or actors\"] 
7 At prediction time, given the schema tokens as input, we use our fine-tuned GPT-2 model with a language model head to generate an output sequence (until we hit an end-of-sequence token). We adopt top-k sampling at each decoding step.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GPT2", "sec_num": null }, { "text": "For each of our three models, we generate a single output for each test instance. Table 5 shows example model outputs.", "cite_spans": [], "ref_spans": [ { "start": 82, "end": 89, "text": "Table 5", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Evaluation", "sec_num": "4" }, { "text": "We focus on three distinct metric types: similarity to references, semantic accuracy, and diversity. Similarity to references. As a measure of how closely our outputs match the corresponding test references, we use BLEU (n-gram precision with brevity penalty) (Papineni et al., 2002) and METEOR (n-gram precision and recall, with synonyms) (Lavie and Agarwal, 2007) . We compute corpus-level BLEU for the full set of outputs and matching references. For METEOR, we compute per-output metrics and average across all instances. 
8 We include these metrics in our evaluation primarily for completeness and supplement them with a human evaluation, since it is widely agreed that lexical overlap-based metrics are weak measures of quality (Novikova et al., 2017a; Belz and Reiter, 2006; Bangalore et al., 2000) .", "cite_spans": [ { "start": 260, "end": 283, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF24" }, { "start": 340, "end": 365, "text": "(Lavie and Agarwal, 2007)", "ref_id": "BIBREF15" }, { "start": 527, "end": 528, "text": "8", "ref_id": null }, { "start": 734, "end": 758, "text": "(Novikova et al., 2017a;", "ref_id": "BIBREF21" }, { "start": 759, "end": 781, "text": "Belz and Reiter, 2006;", "ref_id": "BIBREF2" }, { "start": 782, "end": 805, "text": "Bangalore et al., 2000)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "4.1" }, { "text": "Semantic accuracy. We compute the slot error rate (SER) for each model output as compared to the corresponding MR by finding the total number of deletions, repetitions, and hallucinations over the total number of slots for that instance (the lower the better). 9 It is important to note that we only consider slots that have explicit values (e.g., MR: INFORM date=$date1) for our automatic SER computations. We are investigating methods to compute SER over implicit slots (e.g., MR: REQUEST party size=null) as future work, since it is non-trivial to compute due to the various ways an implicit slot might be expressed in a generated template (e.g., \"How many people are in your party?\" or \"What is the size of your group?\").", "cite_spans": [], "ref_spans": [ { "start": 689, "end": 696, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "4.1" }, { "text": "Table 6 : Automatic evaluation metrics comparing traditional MR vs. rich schema. Higher is better for all metrics except SER. 
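The SER computation over explicit slots can be sketched as follows; the placeholder format and helper name are our own illustrative assumptions, not the authors' evaluation script:

```python
import re

def slot_error_rate(mr_slots, output):
    """SER = (deletions + repetitions + hallucinations) / number of MR slots.

    mr_slots: explicit-value placeholders expected in the output, e.g. ["$date1"].
    output: the generated (delexicalized) template string.
    """
    found = re.findall(r"\$[a-z_]+\d+", output)
    deletions = sum(1 for s in mr_slots if s not in found)        # expected but missing
    repetitions = sum(found.count(s) - 1                          # extra copies of expected slots
                      for s in set(found) if s in mr_slots)
    hallucinations = sum(1 for s in set(found) if s not in mr_slots)  # never requested
    return (deletions + repetitions + hallucinations) / max(1, len(mr_slots))
```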
We also compute \"slot match rate\", that is the ratio of generated outputs that contain exactly the same explicit slots as the matching test MR. Diversity. We measure diversity based on vocabulary, distinct-N (the ratio between distinct ngrams over total n-grams) (Li et al., 2016) and novelty (the ratio of unique generated utterances in test versus references in train). 10 Table 6 compares model performance when trained using only the traditional MR versus using the full schema (better result for each model in bold).", "cite_spans": [ { "start": 302, "end": 319, "text": "(Li et al., 2016)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 414, "end": 421, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "4.1" }, { "text": "Model comparisons. To get a general sense of model performance, we first compare results across models. From the table, we see that Seq2Seq and CVAE have higher BLEU compared to GPT2 (for both MR and Schema), but that GPT2 has a higher METEOR. This indicates that GPT2 is more frequently able to generate outputs that are semantically similar to references, but that might not be exact lexical matches (e.g., substituting \"film\" for \"movie\") since GPT2 is a pretrained model. Similarly, GPT2 has a significantly higher vocabulary and diversity than both Seq2Seq and CVAE.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Traditional MR vs. Rich Schema", "sec_num": "4.2" }, { "text": "MR vs. Schema. Next, we compare the performance of each model when trained using MR versus Schema. For all models, we see an improvement in similarity metrics (BLEU/METEOR) when training on the full schema. Similarly, in terms of diversity, we see increases in vocabulary for all models, as well as increases in distinct-N and novelty (with the exception of Seq2Seq novelty, which drops slightly).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Traditional MR vs. 
Rich Schema", "sec_num": "4.2" }, { "text": "In terms of semantic accuracy, we see an improvement in both SER and Slot Match Rate for both CVAE and GPT2. For Seq2Seq, however, we see that the model performs better on semantics when training on only the MR. To investigate, we look at a breakdown of the kinds of errors made. We find that Seq2Seq/CVAE only suffer from deletions, but GPT2 also produces repetitions and hallucinations (a common problem with pretrained language models); however, training using the schema reduces the number of these mistakes enough to result in an SER improvement for GPT2 (see the appendix for details).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Traditional MR vs. Rich Schema", "sec_num": "4.2" }, { "text": "Next, we are interested to see how our models perform on specific services in the SG-NLG dataset. Recall that the original dataset consists of a set of services that can be grouped into domains: e.g., services restaurant 1 and restaurant 2 are both under the restaurant domain. Based on this, we segment our test set into three parts, by service: seen, or services that have been seen in training, partially-unseen, or services that are unseen in training but are part of domains that have been seen, and fully-unseen where both the service and domain are unseen. 11 MR vs. Schema. To better understand how the models do on average across all services, we show 11 We show distribution plots by service in the appendix. average BLEU/SER scores in Table 7 . 12 Once again, we compare performance between training on the MR vs. the schema. On average, we see that for the seen and fully-unseen partitions, training with the schema is better across almost all metrics (sometimes showing no differences for SER for fully unseen). For partially-unseen, we see that CVAE performs better when training on only the MR; however, when averaging across the full test in Table 6 , we see an improvement with schema. 
As expected, we see higher BLEU and lower SER for seen vs. both partially-unseen and fully-unseen across all models. Surprisingly, we see higher schema BLEU for CVAE on fully-unseen as compared to partially-unseen, but we note that there is a very small fully-unseen sample size (only 10 test MRs). We also note that GPT2 has high SER for the fully-unseen domain; upon inspection, we see slot hallucination from GPT2 within alarm 1, while Seq2Seq/CVAE never hallucinate.", "cite_spans": [ { "start": 564, "end": 566, "text": "11", "ref_id": null }, { "start": 661, "end": 663, "text": "11", "ref_id": null } ], "ref_spans": [ { "start": 746, "end": 753, "text": "Table 7", "ref_id": "TABREF7" }, { "start": 1158, "end": 1165, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Seen vs. Unseen Services", "sec_num": "4.3" }, { "text": "Seen vs. Unseen. Table 8 shows model performance in terms of BLEU and SER. We sort services by how many references we have for them in test; events 1, for example, constitutes 19% of the test references. To focus our discussion here, we show only the top-3 services in terms of percentage of test references. 13 For fully-unseen, we show the only available service (alarm 1). We show the best scores in bold and the worst scores in italic.", "cite_spans": [], "ref_spans": [ { "start": 17, "end": 24, "text": "Table 8", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "SEQ2SEQ", "sec_num": null }, { "text": "12 Scores are weighted by the percentage of test references per service in each split, e.g. events 1 in seen makes up 19% of the seen test references, thus its scores are weighted by that factor. 13 We show results for all services in the appendix.", "cite_spans": [ { "start": 196, "end": 198, "text": "13", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "SEQ2SEQ", "sec_num": null }, { "text": "For seen services (Figure 8a), we see the highest BLEU scores for all models on rentalcars 1. 
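The SER used throughout counts deletions, repetitions, and hallucinations of slot placeholders (footnote 9). A hedged sketch of such a check over delexicalized templates; this is an illustration in the spirit of the metric, not our actual implementation:

```python
import re
from collections import Counter

# Illustrative slot-error check: compare $slot placeholders in a generated
# template against the slots required by the MR.
def slot_errors(mr_slots, template):
    produced = Counter(re.findall(r"\$\w+", template))
    expected = Counter("$" + s for s in mr_slots)
    deletions = sum((expected - produced).values())
    hallucinations = sum(c for s, c in produced.items() if s not in expected)
    repetitions = sum(c - expected[s] for s, c in produced.items()
                      if s in expected and c > expected[s])
    return deletions, repetitions, hallucinations

# e.g. slot_errors({"date", "time"}, "Reserved for $date at $date.")
# -> one deletion ($time), one repetition ($date), no hallucinations
```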
We note that SER is consistently low across all models, with the worst SER for the top-3 services at 0.15 (the worst SER across all of seen is 0.23 as shown in the appendix).", "cite_spans": [], "ref_spans": [ { "start": 18, "end": 29, "text": "(Figure 8a)", "ref_id": null } ], "eq_spans": [], "section": "SEQ2SEQ", "sec_num": null }, { "text": "For partially-unseen services (Figure 8b), we see the best SER on restaurants 2 (but comparatively lower BLEU scores). The services 4 domain shows the highest BLEU scores for Seq2Seq and GPT2, with low SER. We note that flights 3 has the worst SER for all models. Upon investigation, we find slot description discrepancies: e.g., slot origin airport name has slot description \"Number of the airport flying out from\". This highlights how models may be highly sensitive to nuances in the schema information, warranting further analysis in the future.", "cite_spans": [], "ref_spans": [ { "start": 30, "end": 41, "text": "(Figure 8b)", "ref_id": null } ], "eq_spans": [], "section": "SEQ2SEQ", "sec_num": null }, { "text": "To supplement our automatic metric evaluations, which show some of the benefits of schema-based generation, we conduct an annotation study to evaluate our schema-guided output quality. We randomly sample 50 MRs from our test set, and collect 3 judgments per output for each model as well as a reference (randomly shuffled). 14 We ask the annotators to give a binary rating for each output across 3 dimensions: grammar, naturalness, and semantics (as compared to the input MR). We also get an \"overall\" rating for each template on a 1 (poor) to 5 (excellent) Likert scale. 15 Table 9 summarizes the results of the study. For grammar, naturalness, and semantics, we show the ratio of how frequently a given model or reference output is marked as correct over all outputs for that model. 
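These aggregates can be computed as follows (the judgment data shown is illustrative, not the actual annotations):

```python
# Illustrative aggregation for the human evaluation study.
def correctness_ratio(binary_judgments):
    """Fraction of binary judgments marking outputs as correct."""
    return sum(binary_judgments) / len(binary_judgments)

def overall_score(likert_per_instance):
    """Mean over instances of the average of each instance's 1-5 ratings."""
    per_instance = [sum(r) / len(r) for r in likert_per_instance]
    return sum(per_instance) / len(per_instance)

grammar = correctness_ratio([1, 1, 0, 1])        # hypothetical judgments
overall = overall_score([[5, 4, 3], [2, 2, 2]])  # hypothetical ratings
```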
For the \"overall\" rating, we average the 3 ratings given by the annotators for each instance, and present an average across all MRs (out of 5). From the table, we see that the CVAE model has the highest score in terms of both grammar and naturalness. Moreover, CVAE also achieves a score higher than the reference in terms of naturalness. A possible explanation for this behavior is that the quality of the reference is subjective, and not always an ideal \"gold-standard\". In terms of semantics, we see that GPT-2 has the highest ratings of all models. Most interestingly, we see that CVAE has a significantly lower semantic rating, although it is the winner on grammar and naturalness, indicating that while CVAE outputs may be fluent, they frequently do not actually express the required content (see Schema 3 in Table 5 ). This finding is also consistent with our SER calculations from Table 6 , where we see that CVAE has the highest SER. 16 In terms of overall score, we see that GPT-2 has the highest rating of all three models, and is most frequently comparable to the ratings for the references. This can be attributed to its higher semantic accuracy, combined with good (even if not the highest) ratings on grammar and naturalness.", "cite_spans": [ { "start": 320, "end": 322, "text": "14", "ref_id": null }, { "start": 1737, "end": 1739, "text": "16", "ref_id": null } ], "ref_spans": [ { "start": 572, "end": 579, "text": "Table 9", "ref_id": "TABREF10" }, { "start": 1609, "end": 1616, "text": "Table 5", "ref_id": "TABREF4" }, { "start": 1683, "end": 1690, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Human Evaluation", "sec_num": "4.4" }, { "text": "Most work on NNLG uses a simple MR that consists of slots and value tokens that only describe information that should be realized, without including contextual information to guide the generator as we do, although some work has described how this could be useful (Walker et al., 2018) . 
WebNLG (Colin et al., 2016) includes structured triples from DBpedia, which may constitute slightly richer MRs, but these are not contextualized. Oraby et al. (2019) generate rich MRs that contain syntactic and stylistic information for generating descriptive restaurant reviews, but do not add any contextual information that does not need to be included in the output realization. Table-to-text generation using ROTOWIRE (NBA players and stats) also includes richer information, but it is also not contextualized (Wiseman et al., 2017; Gong et al., 2019) .", "cite_spans": [ { "start": 263, "end": 284, "text": "(Walker et al., 2018)", "ref_id": "BIBREF29" }, { "start": 294, "end": 314, "text": "(Colin et al., 2016)", "ref_id": "BIBREF6" }, { "start": 428, "end": 447, "text": "Oraby et al. (2019)", "ref_id": "BIBREF23" }, { "start": 801, "end": 823, "text": "(Wiseman et al., 2017;", "ref_id": null }, { "start": 824, "end": 842, "text": "Gong et al., 2019)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 668, "end": 674, "text": "Table-", "ref_id": null } ], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Other previous work has attempted to address domain transfer in NLG. Dethlefs et al. (2017) use an abstract meaning representation (AMR) as a way to share common semantic information across domains. Wen et al. (2016) use a \"data counterfeiting\" method to generate synthetic data from existing domains to train models on unseen domains, then fine-tune on a small set of in-domain utterances. Tran et al. (2018) also train models on a source domain dataset, then fine-tune on a small sample of target domain utterances for domain adaptation. Rather than fine-tuning models for new domains, our data-driven approach allows us to learn domain information directly from the data schema.", "cite_spans": [ { "start": 69, "end": 91, "text": "Dethlefs et al. 
(2017)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "In this paper, we present the novel task of Schema-Guided NLG. We demonstrate how we are able to generate templates (i.e., delexicalized system prompts) across different domains using three state-of-the-art models, informed by a rich schema of information including intent descriptions, slot descriptions, and domain information. We present our novel SG-NLG dataset, which we construct by repurposing a dataset from the dialog state tracking community.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "In our evaluation, we demonstrate how training using our rich schema frequently improves the overall quality of generated prompts. This is true for similarity metrics (up to 0.43 BLEU and 0.61 METEOR), which we recognize are weak measures of quality, but more importantly for semantic metrics (as low as 0.18 average SER), and even for diversity (up to 2.6K bigram vocabulary). Moreover, this holds true on both seen and unseen domains in many different settings. We conduct a human evaluation as a more accurate quality assessment, and show how our outputs are rated up to 3.61 out of 5 overall (as compared to 3.97 for references). We observe that different models have different strengths: Seq2Seq and CVAE have higher BLEU reference similarity scores, while GPT2 is significantly more diverse and is scored highest overall in human evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "For future work, we are interested in exploring how schema-guided NLG can be used in dialog system contexts, where only outputs that have no slot errors and high overall fluency should be selected as responses. 
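The selection criterion just mentioned can be sketched as a simple filter-then-rank step; `count_slot_errors` and `fluency` stand in for whatever SER and fluency scorers a deployed system provides (assumptions, not an API from this work):

```python
# Hedged sketch: keep only candidates with zero slot errors, then return
# the most fluent one. The scoring functions are assumed to be supplied
# by the surrounding dialog system.
def select_response(candidates, count_slot_errors, fluency):
    valid = [c for c in candidates if count_slot_errors(c) == 0]
    return max(valid, key=fluency) if valid else None
```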
We are also interested in improving both the semantic correctness and fluency of our model outputs by introducing improved methods for constrained decoding and language model integration. Additionally, we plan to develop more accurate automatic measures of quality, as well as more fine-grained control of domain transfer. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "All of the errors made by Seq2Seq and CVAE are deletion errors (constrained decoding prevents repetitions/hallucinations). While using schema leads to more deletions in GPT2, it reduces repetitions and hallucinations, leading to better SER. For the seen set in Figure 2a , we present the distribution of references both in training and test. For the unseen sets in Figure 2b , we present only test reference distribution (since there are no corresponding train references). Table 8 shows the performance of each model across all seen and partially-unseen test sets. E Output Examples Table 13 shows more model output examples. Schema 1 shows correct outputs for all models. Schema 2 shows a slot drop in CVAE, and Schema 3 shows incorrect outputs from Seq2Seq/CVAE for the single fully-unseen domain, alarm-1. (Table caption: Automatic evaluation metrics across seen and partially-unseen services; best in bold, worst in italic.) ", "cite_spans": [], "ref_spans": [ { "start": 261, "end": 270, "text": "Figure 2a", "ref_id": "FIGREF3" }, { "start": 365, "end": 374, "text": "Figure 2b", "ref_id": "FIGREF3" }, { "start": 474, "end": 481, "text": "Table 8", "ref_id": "TABREF8" }, { "start": 584, "end": 592, "text": "Table 13", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "B Details of SER Errors", "sec_num": null }, { "text": "https://github.com/google-research-datasets/dstc8-schema-guided-dialogue 2 https://github.com/alexa/schema-guided-nlg 3 We show service examples in the appendix. 4 At experimentation time, the DSTC8 test set was not annotated with user intent. 
Since we needed user intents for our task, we used DSTC8 dev as our test set. We randomly split the DSTC8 train set into 90% training and 10% development.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We use GPT-2 small from HuggingFace Transformers (https://github.com/huggingface/transformers)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We train with special beginning of sequence, end of sequence, and separator tokens such that each training instance is: \"[BOS] schema-tokens [SEP] target-tokens [EOS].\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We use NLTK for BLEU4/METEOR (Bird et al., 2009). 9 Although Wen et al. (2015) compute SER using only deletions and repetitions, we include hallucinations to capture errors more accurately.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "To avoid inflating novelty metrics, we normalize our template values. 
(e.g., \"Table is reserved for $date1.\" is normalized to \"Table is reserved for $date.\" for any $dateN value).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We have a pool of 6 annotators who are highly skilled at evaluating language tasks and were not involved in any other parts of the project.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "To make annotation more intuitive, we automatically lexicalize slots with values from the schema (although this may add noise), e.g., \"The date is $date1\" \u2192 \"The date is [March 1st].\" We use the same values for all templates for consistency. 16 We compute Fleiss Kappa scores for each dimension, finding near-perfect agreement for semantics (0.87), substantial agreement for grammar (0.76), and moderate agreement for naturalness (0.58) and overall (0.47).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The authors would like to thank Sofia Scharfenberg, Jasmin Rehm, and the rest of the Alexa Data Services Rapid Machine Learning Prototyping team for all of their help with preparing and performing the human evaluation study.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "A Service and Slot Descriptions", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix", "sec_num": null }, { "text": "The comprehensive portal to find and reserve seats at events near you. Slots: category (Type of event); time (Time when the event is scheduled to start). ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Events 1", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Evaluation metrics for generation", "authors": [ { "first": "Srinivas", "middle": [], "last": "Bangalore", "suffix": "" }, { "first": "Owen", "middle": [], "last": "Rambow", "suffix": "" }, { 
"first": "Steve", "middle": [], "last": "Whittaker", "suffix": "" } ], "year": 2000, "venue": "INLG'2000 Proceedings of the First International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "1--8", "other_ids": { "DOI": [ "10.3115/1118253.1118255" ] }, "num": null, "urls": [], "raw_text": "Srinivas Bangalore, Owen Rambow, and Steve Whit- taker. 2000. Evaluation metrics for generation. In INLG'2000 Proceedings of the First International Conference on Natural Language Generation, pages 1-8, Mitzpe Ramon, Israel. Association for Compu- tational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Towards zero shot frame semantic parsing for domain scaling", "authors": [ { "first": "Ankur", "middle": [], "last": "Bapna", "suffix": "" }, { "first": "Gokhan", "middle": [], "last": "Tur", "suffix": "" }, { "first": "Dilek", "middle": [], "last": "Hakkani-Tur", "suffix": "" }, { "first": "Larry", "middle": [], "last": "Heck", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ankur Bapna, Gokhan Tur, Dilek Hakkani-Tur, and Larry Heck. 2017. Towards zero shot frame seman- tic parsing for domain scaling. In Interspeech 2017.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Comparing automatic and human evaluation of NLG systems", "authors": [ { "first": "Anja", "middle": [], "last": "Belz", "suffix": "" }, { "first": "Ehud", "middle": [], "last": "Reiter", "suffix": "" } ], "year": 2006, "venue": "11th Conference of the European Chapter", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anja Belz and Ehud Reiter. 2006. Comparing auto- matic and human evaluation of NLG systems. In 11th Conference of the European Chapter of the Association for Computational Linguistics, Trento, Italy. 
Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Natural Language Processing with Python", "authors": [ { "first": "Steven", "middle": [], "last": "Bird", "suffix": "" }, { "first": "Ewan", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Loper", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural Language Processing with Python, 1st edi- tion. O'Reilly Media, Inc.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Hello, it's GPT-2 -how can I help you? towards the use of pretrained language models for task-oriented dialogue systems", "authors": [ { "first": "Pawe\u0142", "middle": [], "last": "Budzianowski", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 3rd Workshop on Neural Generation and Translation", "volume": "", "issue": "", "pages": "15--22", "other_ids": { "DOI": [ "10.18653/v1/D19-5602" ] }, "num": null, "urls": [], "raw_text": "Pawe\u0142 Budzianowski and Ivan Vuli\u0107. 2019. Hello, it's GPT-2 -how can I help you? towards the use of pre- trained language models for task-oriented dialogue systems. In Proceedings of the 3rd Workshop on Neural Generation and Translation, pages 15-22, Hong Kong. 
Association for Computational Linguis- tics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Structured dialogue policy with graph neural networks", "authors": [ { "first": "Lu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Sishan", "middle": [], "last": "Long", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1257--1268", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lu Chen, Bowen Tan, Sishan Long, and Kai Yu. 2018. Structured dialogue policy with graph neural net- works. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1257-1268, Santa Fe, New Mexico, USA. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "The webnlg challenge: Generating text from dbpedia data", "authors": [ { "first": "Emilie", "middle": [], "last": "Colin", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Gardent", "suffix": "" }, { "first": "Yassine", "middle": [], "last": "Mrabet", "suffix": "" }, { "first": "Shashi", "middle": [], "last": "Narayan", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Perez-Beltrachini", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 9th International Natural Language Generation conference", "volume": "", "issue": "", "pages": "163--167", "other_ids": { "DOI": [ "10.18653/v1/W16-6626" ] }, "num": null, "urls": [], "raw_text": "Emilie Colin, Claire Gardent, Yassine Mrabet, Shashi Narayan, and Laura Perez-Beltrachini. 2016. The webnlg challenge: Generating text from dbpedia data. In Proceedings of the 9th International Nat- ural Language Generation conference, pages 163- 167. 
Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Domain transfer for deep natural language generation from abstract meaning representations", "authors": [ { "first": "Nina", "middle": [], "last": "Dethlefs", "suffix": "" } ], "year": 2017, "venue": "IEEE Computational Intelligence Magazine", "volume": "12", "issue": "", "pages": "18--28", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nina Dethlefs. 2017. Domain transfer for deep natu- ral language generation from abstract meaning repre- sentations. IEEE Computational Intelligence Maga- zine, 12:18-28.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. 
arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Training a natural language generator from unaligned data", "authors": [ { "first": "Ond\u0159ej", "middle": [], "last": "Du\u0161ek", "suffix": "" }, { "first": "Filip", "middle": [], "last": "Jurcicek", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "451--461", "other_ids": { "DOI": [ "10.3115/v1/P15-1044" ] }, "num": null, "urls": [], "raw_text": "Ond\u0159ej Du\u0161ek and Filip Jurcicek. 2015. Training a nat- ural language generator from unaligned data. In Pro- ceedings of the 53rd Annual Meeting of the Associa- tion for Computational Linguistics and the 7th Inter- national Joint Conference on Natural Language Pro- cessing (Volume 1: Long Papers), pages 451-461, Beijing, China. Association for Computational Lin- guistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A contextaware natural language generator for dialogue systems", "authors": [ { "first": "Ondrej", "middle": [], "last": "Dusek", "suffix": "" }, { "first": "Filip", "middle": [], "last": "Jurc\u00edcek", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ondrej Dusek and Filip Jurc\u00edcek. 2016. A context- aware natural language generator for dialogue sys- tems. 
CoRR, abs/1608.07076.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Findings of the e2e nlg challenge", "authors": [ { "first": "Ond\u0159ej", "middle": [], "last": "Du\u0161ek", "suffix": "" }, { "first": "Jekaterina", "middle": [], "last": "Novikova", "suffix": "" }, { "first": "Verena", "middle": [], "last": "Rieser", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 11th International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "322--328", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ond\u0159ej Du\u0161ek, Jekaterina Novikova, and Verena Rieser. 2018. Findings of the e2e nlg challenge. In Proceed- ings of the 11th International Conference on Natural Language Generation, pages 322-328. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Creating Training Corpora for NLG Micro-Planning", "authors": [ { "first": "Claire", "middle": [], "last": "Gardent", "suffix": "" }, { "first": "Anastasia", "middle": [], "last": "Shimorina", "suffix": "" }, { "first": "Shashi", "middle": [], "last": "Narayan", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Perez-Beltrachini", "suffix": "" } ], "year": 2017, "venue": "55th annual meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. Creating Train- ing Corpora for NLG Micro-Planning. 
In 55th an- nual meeting of the Association for Computational Linguistics (ACL), Vancouver, Canada.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Table-to-text generation with effective hierarchical encoder on three dimensions (row, column and time)", "authors": [ { "first": "Xiaocheng", "middle": [], "last": "Heng Gong", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Qin", "suffix": "" }, { "first": "", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "EMNLP/IJCNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heng Gong, Xiaocheng Feng, Bing Qin, and Ting Liu. 2019. Table-to-text generation with effective hier- archical encoder on three dimensions (row, column and time). In EMNLP/IJCNLP.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Toward controlled generation of text", "authors": [ { "first": "Zhiting", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Zichao", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "Eric", "middle": [ "P" ], "last": "Xing", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 34th International Conference on Machine Learning", "volume": "70", "issue": "", "pages": "1587--1596", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing. 2017. Toward controlled generation of text. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1587-1596. 
JMLR.org.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Meteor: An automatic metric for mt evaluation with high levels of correlation with human judgments", "authors": [ { "first": "Alon", "middle": [], "last": "Lavie", "suffix": "" }, { "first": "Abhaya", "middle": [], "last": "Agarwal", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Second Workshop on Statistical Machine Translation, StatMT '07", "volume": "", "issue": "", "pages": "228--231", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alon Lavie and Abhaya Agarwal. 2007. Meteor: An automatic metric for mt evaluation with high levels of correlation with human judgments. In Proceed- ings of the Second Workshop on Statistical Machine Translation, StatMT '07, pages 228-231, Strouds- burg, PA, USA. Association for Computational Lin- guistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A diversity-promoting objective function for neural conversation models", "authors": [ { "first": "Jiwei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brockett", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "110--119", "other_ids": { "DOI": [ "10.18653/v1/N16-1014" ] }, "num": null, "urls": [], "raw_text": "Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting ob- jective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 110-119, San Diego, California. 
Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "End-to-end taskcompletion neural dialogue systems", "authors": [ { "first": "Xiujun", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yun-Nung", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Lihong", "middle": [], "last": "Li", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Asli", "middle": [], "last": "Celikyilmaz", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "733--743", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiujun Li, Yun-Nung Chen, Lihong Li, Jianfeng Gao, and Asli Celikyilmaz. 2017. End-to-end task- completion neural dialogue systems. In Proceedings of the Eighth International Joint Conference on Nat- ural Language Processing (Volume 1: Long Papers), pages 733-743, Taipei, Taiwan. Asian Federation of Natural Language Processing.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Table-to-text generation by structure-aware seq2seq learning", "authors": [ { "first": "Tianyu", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Kexiang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Sha", "suffix": "" }, { "first": "Baobao", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Zhifang", "middle": [], "last": "Sui", "suffix": "" } ], "year": 2017, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tianyu Liu, Kexiang Wang, Lei Sha, Baobao Chang, and Zhifang Sui. 2017. Table-to-text generation by structure-aware seq2seq learning. 
In AAAI.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Effective approaches to attentionbased neural machine translation", "authors": [ { "first": "Minh-Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Pham", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1508.04025" ] }, "num": null, "urls": [], "raw_text": "Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention- based neural machine translation. arXiv preprint arXiv:1508.04025.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Step-by-step: Separating planning from realization in neural data-to-text generation", "authors": [ { "first": "Amit", "middle": [], "last": "Moryossef", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "2267--2277", "other_ids": { "DOI": [ "10.18653/v1/N19-1236" ] }, "num": null, "urls": [], "raw_text": "Amit Moryossef, Yoav Goldberg, and Ido Dagan. 2019. Step-by-step: Separating planning from realization in neural data-to-text generation. In Proceedings of the 2019 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2267-2277, Minneapolis, Minnesota. 
Association for Computational Linguis- tics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Why we need new evaluation metrics for nlg", "authors": [ { "first": "Jekaterina", "middle": [], "last": "Novikova", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Du\u0161ek", "suffix": "" }, { "first": "Amanda", "middle": [ "Cercas" ], "last": "Curry", "suffix": "" }, { "first": "Verena", "middle": [], "last": "Rieser", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2241--2252", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jekaterina Novikova, Ond\u0159ej Du\u0161ek, Amanda Cer- cas Curry, and Verena Rieser. 2017a. Why we need new evaluation metrics for nlg. In Proceedings of the 2017 Conference on Empirical Methods in Nat- ural Language Processing, pages 2241-2252. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "The e2e dataset: New challenges for endto-end generation", "authors": [ { "first": "Jekaterina", "middle": [], "last": "Novikova", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Du\u0161ek", "suffix": "" }, { "first": "Verena", "middle": [], "last": "Rieser", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue", "volume": "", "issue": "", "pages": "201--206", "other_ids": { "DOI": [ "10.18653/v1/W17-5525" ] }, "num": null, "urls": [], "raw_text": "Jekaterina Novikova, Ond\u0159ej Du\u0161ek, and Verena Rieser. 2017b. The e2e dataset: New challenges for end- to-end generation. In Proceedings of the 18th An- nual SIGdial Meeting on Discourse and Dialogue, pages 201-206. 
Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Curate and generate: A corpus and method for joint control of semantics and style in neural NLG", "authors": [ { "first": "Shereen", "middle": [], "last": "Oraby", "suffix": "" }, { "first": "Vrindavan", "middle": [], "last": "Harrison", "suffix": "" }, { "first": "Abteen", "middle": [], "last": "Ebrahimi", "suffix": "" }, { "first": "Marilyn", "middle": [], "last": "Walker", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5938--5951", "other_ids": { "DOI": [ "10.18653/v1/P19-1596" ] }, "num": null, "urls": [], "raw_text": "Shereen Oraby, Vrindavan Harrison, Abteen Ebrahimi, and Marilyn Walker. 2019. Curate and generate: A corpus and method for joint control of semantics and style in neural NLG. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5938-5951, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation.
In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Language models are unsupervised multitask learners", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "David", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset", "authors": [ { "first": "Abhinav", "middle": [], "last": "Rastogi", "suffix": "" }, { "first": "Xiaoxue", "middle": [], "last": "Zang", "suffix": "" }, { "first": "Srinivas", "middle": [], "last": "Sunkara", "suffix": "" }, { "first": "Raghav", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Pranav", "middle": [], "last": "Khaitan", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, and Pranav Khaitan. 2019.
Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Get to the point: Summarization with pointer-generator networks", "authors": [ { "first": "Abigail", "middle": [], "last": "See", "suffix": "" }, { "first": "Peter", "middle": [ "J" ], "last": "Liu", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1704.04368" ] }, "num": null, "urls": [], "raw_text": "Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointer-generator networks. arXiv preprint arXiv:1704.04368.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Adversarial domain adaptation for variational neural language generation in dialogue systems", "authors": [ { "first": "Van-Khanh", "middle": [], "last": "Tran", "suffix": "" }, { "first": "Le-Minh", "middle": [], "last": "Nguyen", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1205--1217", "other_ids": {}, "num": null, "urls": [], "raw_text": "Van-Khanh Tran and Le-Minh Nguyen. 2018. Adversarial domain adaptation for variational neural language generation in dialogue systems. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1205-1217, Santa Fe, New Mexico, USA.
Association for Computational Linguistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Exploring conversational language generation for rich content about hotels", "authors": [ { "first": "Marilyn", "middle": [], "last": "Walker", "suffix": "" }, { "first": "Albry", "middle": [], "last": "Smither", "suffix": "" }, { "first": "Shereen", "middle": [], "last": "Oraby", "suffix": "" }, { "first": "Vrindavan", "middle": [], "last": "Harrison", "suffix": "" }, { "first": "Hadar", "middle": [], "last": "Shemtov", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marilyn Walker, Albry Smither, Shereen Oraby, Vrindavan Harrison, and Hadar Shemtov. 2018. Exploring conversational language generation for rich content about hotels. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018), Miyazaki, Japan.
European Language Resources Association (ELRA).", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Multi-domain neural network language generation for spoken dialogue systems", "authors": [ { "first": "Tsung-Hsien", "middle": [], "last": "Wen", "suffix": "" }, { "first": "Milica", "middle": [], "last": "Gasic", "suffix": "" }, { "first": "Nikola", "middle": [], "last": "Mrksic", "suffix": "" }, { "first": "Lina", "middle": [ "Maria" ], "last": "Rojas-Barahona", "suffix": "" }, { "first": "Pei-Hao", "middle": [], "last": "Su", "suffix": "" }, { "first": "David", "middle": [], "last": "Vandyke", "suffix": "" }, { "first": "Steve", "middle": [ "J" ], "last": "Young", "suffix": "" } ], "year": 2016, "venue": "HLT-NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Lina Maria Rojas-Barahona, Pei-Hao Su, David Vandyke, and Steve J. Young. 2016. Multi-domain neural network language generation for spoken dialogue systems. In HLT-NAACL.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Semantically conditioned lstm-based natural language generation for spoken dialogue systems", "authors": [ { "first": "Tsung-Hsien", "middle": [], "last": "Wen", "suffix": "" }, { "first": "Milica", "middle": [], "last": "Gasic", "suffix": "" }, { "first": "Nikola", "middle": [], "last": "Mrksic", "suffix": "" }, { "first": "Pei-Hao", "middle": [], "last": "Su", "suffix": "" }, { "first": "David", "middle": [], "last": "Vandyke", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Young", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1711--1721", "other_ids": { "DOI": [ "10.18653/v1/D15-1199" ] }, "num": null, "urls": [], "raw_text": "Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Pei-Hao Su, David Vandyke, and Steve Young. 2015.
Semantically conditioned lstm-based natural language generation for spoken dialogue systems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1711-1721. Association for Computational Linguistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Multi-task learning for natural language generation in task-oriented dialogue", "authors": [ { "first": "Chenguang", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Zeng", "suffix": "" }, { "first": "Xuedong", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2019, "venue": "Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chenguang Zhu, Michael Zeng, and Xuedong Huang. 2019. Multi-task learning for natural language generation in task-oriented dialogue. In Empirical Methods in Natural Language Processing (EMNLP).", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "CONFIRM(leaving-date=$leaving-date1), CONFIRM(travelers=$travelers1) Slot Desc: leaving-date: \"date of bus leaving for journey", "authors": [], "year": null, "venue": "number of travelers for journey", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "ACL. [Schema 1] ACTIONS (MR): CONFIRM(leaving-date=$leaving-date1), CONFIRM(travelers=$travelers1) Slot Desc: leaving-date: \"date of bus leaving for journey\", travelers: \"number of travelers for journey\" Service: buses-1", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Natural Language MR: please confirm that the leaving date is $leaving-date1. please confirm that the travelers is $travelers1. Ref can you confirm once again that you need tickets for $travelers1 people for the bus leaving on $leaving-date1. Seq2Seq please confirm the following details : you want to book $travelers1 tickets on $leaving-date1 .
CVAE please confirm : $travelers1 tickets for the bus leaving on $leaving-date1 . GPT2 okay, it's $travelers1 tickets leaving $leaving-date1, is that right?", "authors": [], "year": null, "venue": "Service Desc: book bus journeys from the biggest bus network in the country Intent: BuyBusTickets Intent Desc: buy tickets for a bus journey", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Service Desc: book bus journeys from the biggest bus network in the country Intent: BuyBusTickets Intent Desc: buy tickets for a bus journey Natural Language MR: please confirm that the leaving date is $leaving-date1. please confirm that the travelers is $travelers1. Ref can you confirm once again that you need tickets for $travelers1 people for the bus leaving on $leaving-date1. Seq2Seq please confirm the following details : you want to book $travelers1 tickets on $leaving-date1 . CVAE please confirm : $travelers1 tickets for the bus leaving on $leaving-date1 . GPT2 okay, it's $travelers1 tickets leaving $leaving-date1, is that right? [Schema 2] ACTIONS (MR): INFORM(ride-fare= $ride-fare1) INFORM(approx-ride-duration=$approx-ride-duration1)", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Service: ridesharing-1 Service Desc: on-demand taxi calling service Intent: GetRide Intent Desc: call a taxi to head to a given destination Natural Language MR: the ride fare is", "authors": [], "year": null, "venue": "approximate duration of ride to the destination", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Slot Desc: ride-fare\"total fare for taxi ride\", approx-ride-duration: \"approximate duration of ride to the destination\" Service: ridesharing-1 Service Desc: on-demand taxi calling service Intent: GetRide Intent Desc: call a taxi to head to a given destination Natural Language MR: the ride fare is [$ride-fare1]. the approximate ride duration is [$approx-ride-duration1]. 
Ref around $approx-ride-duration1 minutes for $ride-fare1.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Seq2Seq around $approx-ride-duration1 minutes for $ride-fare1. CVAE the cab is on the way . it will cost $ride-fare1. GPT2 the ride will be $ride-fare1 and is expected to last $approx-ride-duration1 minutes", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Seq2Seq around $approx-ride-duration1 minutes for $ride-fare1. CVAE the cab is on the way . it will cost $ride-fare1. GPT2 the ride will be $ride-fare1 and is expected to last $approx-ride-duration1 minutes.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "ACTIONS (MR): REQUEST(new-alarm-time=null)", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "ACTIONS (MR): REQUEST(new-alarm-time=null)", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Service: alarm-1 Service Desc: Manage alarms by getting and setting them easily Intent: AddAlarm Intent Desc: Set a new alarm Natural Language MR: what", "authors": [], "year": null, "venue": "Slot Desc: new-alarm-time: \"time to set for the new alarm", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Slot Desc: new-alarm-time: \"time to set for the new alarm\" Service: alarm-1 Service Desc: Manage alarms by getting and setting them easily Intent: AddAlarm Intent Desc: Set a new alarm Natural Language MR: what [new alarm time] do you want? Ref at what time, you want the alarm to be set? Seq2Seq what time do you want to go ? CVAE what time would you like to go ? GPT2 what time do you want to set the alarm? 
Table 13: Example model outputs.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Flat-encoding strategy.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF2": { "text": "shows a model architecture diagram for Seq2Seq and CVAE.(a) Distribution of refs in seen services. (b) Distribution of refs in partially/fully unseen services.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF3": { "text": "Distribution of references across services.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF4": { "text": "Seq2seq and CVAE model architectures", "uris": null, "num": null, "type_str": "figure" }, "TABREF0": { "content": "
Dataset | MR | Reference
E2E (Novikova et al., 2017b) | INFORM name[The Punter], food[Indian], priceRange[cheap] | The Punter offers cheap Indian food.
Laptop (Wen et al., 2016) | INFORM name[satellite eurus65], type[laptop], memory[4gb], driverRange[medium], isForBusiness[false] |
", "type_str": "table", "html": null, "num": null, "text": "shows examples of MRs from popular datasets." }, "TABREF1": { "content": "", "type_str": "table", "html": null, "num": null, "text": "Sample MRs from popular NNLG datasets." }, "TABREF2": { "content": "
 | Train | Dev | Test
Templates | 110595 | 14863 | 20022
Meaning Representations | 1903 | 1314 | 749
Services | 26 | 26 | 17
Domains | 16 | 16 | 16
", "type_str": "table", "html": null, "num": null, "text": "Data preprocessing and delexicalization." }, "TABREF3": { "content": "
Ref | $count1 movies $movie-name1 $movie-name2 and $movie-name3
Seq2Seq | i found $count1 movies . how about $movie-name2 ?
CVAE | i found $count1 movies you might like . how about $movie-name1 , $movie-name2 or $movie-name3 ?
GPT2 | sure. hey! i've got $count1 movies. do you wish to watch any of these movies - $movie-name1, $movie-name2 or $movie-name3?
[Schema 3] ACTIONS (MR): REQUEST(transfer-amount=null), REQUEST(recipient-name=null)
Slot Desc: transfer-amount: \"the amount of money to transfer\", recipient-name: \"the name of the recipient [...]\"
Service: banks-2 Service Desc: "Service to manage your bank accounts and finances"
Intent: TransferMoney Intent Desc: \"Transfer money to another user\"
Natural Language MR: \"what [transfer amount] do you want? what [recipient name] do you want?\"
Ref | amount? recipient?
Seq2Seq | what type of ride do you want to transfer ?
", "type_str": "table", "html": null, "num": null, "text": "Language MR: there is [$movie-name2] for [movie name]. there is [$movie-name3] for [movie name]. there is [$movie-name1] for [movie name]. the [count] is [$count1]." }, "TABREF4": { "content": "", "type_str": "table", "html": null, "num": null, "text": "" }, "TABREF7": { "content": "
Service | % Test Refs | SEQ2SEQ BLEU | SEQ2SEQ SER\u2193 | CVAE BLEU | CVAE SER\u2193 | GPT2 BLEU | GPT2 SER\u2193
events 1 | 19% | 0.6168 | 0.0490 | 0.6126 | 0.0294 | 0.4682 | 0.0588
rentalcars 1 | 18% | 0.7486 | 0.1500 | 0.6645 | 0.1125 | 0.6173 | 0.1000
buses 1 | 15% | 0.3831 | 0.1542 | 0.5035 | 0.1000 | 0.4016 | 0.0167
(a) Seen services.
Service | % Test Refs | SEQ2SEQ BLEU | SEQ2SEQ SER\u2193 | CVAE BLEU | CVAE SER\u2193 | GPT2 BLEU | GPT2 SER\u2193
restaurants 2 | 24% | 0.2466 | 0.2098 | 0.2126 | 0.3501 | 0.2297 | 0.0527
flights 3 | 18% | 0.3193 | 0.4579 | 0.3481 | 0.5000 | 0.3008 | 0.7368
services 4 | 18% | 0.5791 | 0.2197 | 0.3288 | 0.4013 | 0.5760 | 0.0851
(b) Partially-unseen services.
Service | % Test Refs | SEQ2SEQ BLEU | SEQ2SEQ SER\u2193 | CVAE BLEU | CVAE SER\u2193 | GPT2 BLEU | GPT2 SER\u2193
alarm 1 | 100% | 0.3586 | 0.2667 | 0.4495 | 0.2667 | 0.2217 | 0.5833
(c) Fully-unseen services.
", "type_str": "table", "html": null, "num": null, "text": "Average BLEU and SER by service splits." }, "TABREF8": { "content": "", "type_str": "table", "html": null, "num": null, "text": "Automatic evaluation metrics across seen, partially-unseen, and fully-unseen services when training with schema." }, "TABREF10": { "content": "
: Average human evaluation scores for different quality dimensions.
", "type_str": "table", "html": null, "num": null, "text": "" }, "TABREF11": { "content": "", "type_str": "table", "html": null, "num": null, "text": "Services, slots and their descriptions. In boldface the service names, in verbatim the slots." }, "TABREF12": { "content": "
C Seen vs. Unseen Domains
C.1 Data Distribution Plots
", "type_str": "table", "html": null, "num": null, "text": "Detailed analysis of slot errors." }, "TABREF14": { "content": "", "type_str": "table", "html": null, "num": null, "text": "" } } } }