{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:11:00.563010Z" }, "title": "Semantic Parsing of Brief and Multi-Intent Natural Language Utterances", "authors": [ { "first": "Logan", "middle": [], "last": "Lebanoff", "suffix": "", "affiliation": {}, "email": "logan.lebanoff@soartech.com" }, { "first": "Charles", "middle": [ "Newton" ], "last": "Victor", "suffix": "", "affiliation": {}, "email": "charles.newton@soartech.com" }, { "first": "Beth", "middle": [], "last": "Atkinson", "suffix": "", "affiliation": {}, "email": "beth.atkinson@navy.mil" }, { "first": "John", "middle": [], "last": "Killilea", "suffix": "", "affiliation": {}, "email": "john.killilea@navy.mil" }, { "first": "Fei", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Central Florida", "location": {} }, "email": "feiliu@cs.ucf.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Many military communication domains involve rapidly conveying situation awareness with few words. Converting natural language utterances to logical forms in these domains is challenging, as these utterances are brief and contain multiple intents. In this paper, we present a first effort toward building a weakly-supervised semantic parser to transform brief, multi-intent natural utterances into logical forms. Our findings suggest a new \"projection and reduction\" method that iteratively performs projection from natural to canonical utterances followed by reduction of natural utterances is the most effective. We conduct extensive experiments on two military and a general-domain dataset and provide a new baseline for future research toward accurate parsing of multi-intent utterances.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Many military communication domains involve rapidly conveying situation awareness with few words. Converting natural language utterances to logical forms in these domains is challenging, as these utterances are brief and contain multiple intents. In this paper, we present a first effort toward building a weakly-supervised semantic parser to transform brief, multi-intent natural utterances into logical forms. Our findings suggest a new \"projection and reduction\" method that iteratively performs projection from natural to canonical utterances followed by reduction of natural utterances is the most effective. We conduct extensive experiments on two military and a general-domain dataset and provide a new baseline for future research toward accurate parsing of multi-intent utterances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Semantic parsing to map a natural language utterance to its logical form is regarded as a challenging task partly due to a lack of annotated data (Berant and Liang, 2014; Yin et al., 2018; Gardner et al., 2018) . A promising avenue of research is to generate a set of candidate logical forms paired with their canonical realizations in natural language. Then, the canonical utterance that best matches the input is identified by a model, and its logical form is used as output (Berant and Liang, 2014) . A paraphrase/sequence-to-sequence model may additionally be used to translate a canonical utterance to a logical form (Wang et al., 2015; Herzig and Berant, 2019; Cao et al., 2020; Marzoev et al., 2020) . 
While the results are promising, most existing works do not handle natural language utterances with multiple intents. We refer to an intent as a goal intended by a user's utterance. Multi-intent utterances allow people to communicate core aspects of a situation in a consistent and timely manner, as illustrated in Figure 1.", "cite_spans": [ { "start": 146, "end": 170, "text": "(Berant and Liang, 2014;", "ref_id": "BIBREF0" }, { "start": 171, "end": 188, "text": "Yin et al., 2018;", "ref_id": "BIBREF18" }, { "start": 189, "end": 210, "text": "Gardner et al., 2018)", "ref_id": "BIBREF5" }, { "start": 477, "end": 501, "text": "(Berant and Liang, 2014)", "ref_id": "BIBREF0" }, { "start": 622, "end": 641, "text": "(Wang et al., 2015;", "ref_id": "BIBREF15" }, { "start": 642, "end": 666, "text": "Herzig and Berant, 2019;", "ref_id": "BIBREF8" }, { "start": 667, "end": 684, "text": "Cao et al., 2020;", "ref_id": "BIBREF1" }, { "start": 685, "end": 706, "text": "Marzoev et al., 2020)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 1024, "end": 1032, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Multi-intent semantic parsing is especially suitable for military domains, where emphasis is placed on communication skills, terminology, and brevity (Weinstein, 1990). While communication protocols are often published, variations are allowed given the current situation. An area of interest is Intelligence, Surveillance, and Reconnaissance (ISR) domains, where contact reports (e.g., \"Arriving at home base and ready to descend\") often contain multiple intents; a system must determine the number of intents, interpret the natural language, and predict the exact logical form for every intent, which can be highly challenging.", "cite_spans": [ { "start": 149, "end": 166, "text": "(Weinstein, 1990)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We investigate new methods for semantic parsing of utterances with multiple intents. Importantly, and distinguishing our work from earlier literature (Iyer et al., 2017; Zhong et al., 2017; Yu et al., 2018; Dong and Lapata, 2018; Zeng et al., 2020), our domain areas have no supervised training data, nor can pseudo-language utterances be created through crowdsourcing, due to their sensitive nature and the expert knowledge required. We thus operate in a weakly-supervised setting by assuming access only to a grammar that generates canonical utterances and logical forms. Obtaining a comprehensive collection of natural utterances for military applications is difficult; it can be easier to create a grammar that generates canonical utterances for the application. In addition, there are scenarios where there is insufficient time or funding to obtain supervised data, e.g., quickly building a virtual assistant for a new mobile app. Our goal is distinct from related efforts in dialog systems (Gupta et al., 2018; Vanzo et al., 2019; Lee et al., 2019; Ham et al., 2020); the parser does not have additional context or interaction but focuses on modeling complex compositional intents. We build on methods that project natural utterances to the canonical space (Marzoev et al., 2020) and investigate novel adaptations for handling multi-intent utterances. 
Our contributions are as follows.", "cite_spans": [ { "start": 150, "end": 169, "text": "(Iyer et al., 2017;", "ref_id": "BIBREF10" }, { "start": 170, "end": 189, "text": "Zhong et al., 2017;", "ref_id": "BIBREF22" }, { "start": 190, "end": 206, "text": "Yu et al., 2018;", "ref_id": "BIBREF19" }, { "start": 207, "end": 229, "text": "Dong and Lapata, 2018;", "ref_id": "BIBREF3" }, { "start": 230, "end": 248, "text": "Zeng et al., 2020)", "ref_id": "BIBREF20" }, { "start": 998, "end": 1018, "text": "(Gupta et al., 2018;", "ref_id": "BIBREF6" }, { "start": 1019, "end": 1038, "text": "Vanzo et al., 2019;", "ref_id": "BIBREF14" }, { "start": 1039, "end": 1056, "text": "Lee et al., 2019;", "ref_id": "BIBREF11" }, { "start": 1057, "end": 1074, "text": "Ham et al., 2020)", "ref_id": "BIBREF7" }, { "start": 1266, "end": 1288, "text": "(Marzoev et al., 2020)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We present a first effort at parsing brief, multi-intent utterances into logical forms; this work sheds light on parsing of airborne communications for which parallel resources are limited.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We perform experiments on two military communications datasets and a general-domain dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Our findings suggest that a new approach that iteratively projects the natural language utterance to a canonical utterance, followed by a reduction step, can achieve the best performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Let X and Y be the set of all natural language utterances and logical forms (LF), respectively. Given a natural language utterance x \u2208 X, we wish to produce y \u2208 Y. We assume access only to a grammar G that defines a set of n production rules whose union forms a canonical set of utterances Z. A grammar is assumed to be of the form G = R_1 | \u2026 | R_n, where each production rule R_i \u2192 (\u03b1, \u03c4) consists of a rule expansion \u03b1 and a tag \u03c4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Projection", "sec_num": "2" }, { "text": "Tags define the semantic content associated with a rule, which can be used to build LFs. Figure 2 shows an example grammar. Canonical utterances found in Z do not cover the full range of variation available in natural language. A viable option, described below, is to develop a projection function \u03c0 which maps X directly into Z, and to obtain an appropriate y through G. We follow Marzoev et al. (2020) and use a pretrained language model (LM) to obtain semantic representations of utterances in R^d. A distance function \u03b4 is used to compute the closest canonical utterance in vector space to the natural language utterance. The projection function is defined as:", "cite_spans": [ { "start": 423, "end": 444, "text": "Marzoev et al. 
(2020)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 134, "end": 142, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Hierarchical Projection", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03c0(x) = arg min z\u2208Z \u03b4(LM(x), LM(z))", "eq_num": "(1)" } ], "section": "Hierarchical Projection", "sec_num": "2" }, { "text": "LM(\u2022) is calculated as the average of BERT-Base (Devlin et al., 2018) representations, and \u03b4 is cosine similarity. Computing the arg min requires O(Z) operations which can be intractable for many grammars. To handle this, we use a hierarchical projection method by performing a search through the grammar (Marzoev et al., 2020) . The $root is expanded by taking one step in the grammar to yield several partial instantiations z , which takes the place of z in Eq. 1. We refer to a partial instantiation z as a canonical utterance that still contains non-terminals. The z closest to x is chosen in the next search iteration. Non-terminals in z are expanded until only terminals remain ( Figure 2 ).", "cite_spans": [ { "start": 48, "end": 69, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF2" }, { "start": 305, "end": 327, "text": "(Marzoev et al., 2020)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 686, "end": 694, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Hierarchical Projection", "sec_num": "2" }, { "text": "Vector representations of partial instantiations introduce difficulty as non-terminals are not well- (Marzoev et al., 2020) . It conveys to the LM that a word or phrase should exist at that position, but it's not clear yet what exactly belongs there, and allows the LM to form representations for the other tokens with the knowledge that something will exist there. However, partial instantiations may contain few or no terminals at all, meaning LM input will be dominated by [MASK] tokens. For example, for the given partial instantiation -$TypeNP whose $RelNP is $EntityNPit is not clear what values may be used for the non-terminals, and the resulting utterance representation will not be useful.", "cite_spans": [ { "start": 101, "end": 123, "text": "(Marzoev et al., 2020)", "ref_id": "BIBREF13" }, { "start": 476, "end": 482, "text": "[MASK]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Non-Terminal Averaging", "sec_num": "3.1" }, { "text": "We introduce a strategy to mitigate this issue that we call non-terminal averaging. We observe that a non-terminal is restricted to certain values defined by the grammar. We obtain a representation of the non-terminal by averaging over the representations of these possible values, which gives a much better representation than the [MASK] token. This is important when projecting over multiple intents, as discussed in the next section.", "cite_spans": [ { "start": 332, "end": 338, "text": "[MASK]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Non-Terminal Averaging", "sec_num": "3.1" }, { "text": "We explore two methods for parsing utterances with multiple intents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-Intent Projection", "sec_num": "3.2" }, { "text": "Meta-Grammar A simple method for handling multiple intents is to create a meta-grammar based on the original grammar. The $root is renamed to $subroot while keeping all other rules unchanged. 
{ "text": "Vector representations of partial instantiations introduce difficulty, as non-terminals are not well-understood by pre-trained LMs. This can be somewhat resolved by replacing non-terminals with the [MASK] token (Marzoev et al., 2020). The [MASK] token conveys to the LM that a word or phrase should exist at that position, even though it is not yet clear what exactly belongs there, and it allows the LM to form representations for the other tokens with the knowledge that something will exist there. However, partial instantiations may contain few or no terminals at all, meaning the LM input will be dominated by [MASK] tokens. For example, for the partial instantiation \"$TypeNP whose $RelNP is $EntityNP\", it is not clear what values may be used for the non-terminals, and the resulting utterance representation will not be useful.", "cite_spans": [ { "start": 101, "end": 123, "text": "(Marzoev et al., 2020)", "ref_id": "BIBREF13" }, { "start": 476, "end": 482, "text": "[MASK]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Non-Terminal Averaging", "sec_num": "3.1" }, { "text": "We introduce a strategy to mitigate this issue that we call non-terminal averaging. We observe that a non-terminal is restricted to certain values defined by the grammar. We obtain a representation of the non-terminal by averaging over the representations of these possible values, which gives a much better representation than the [MASK] token. This is important when projecting over multiple intents, as discussed in the next section.", "cite_spans": [ { "start": 332, "end": 338, "text": "[MASK]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Non-Terminal Averaging", "sec_num": "3.1" }, { "text": "We explore two methods for parsing utterances with multiple intents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-Intent Projection", "sec_num": "3.2" }, { "text": "Meta-Grammar A simple method for handling multiple intents is to create a meta-grammar based on the original grammar. The $root is renamed to $subroot while keeping all other rules unchanged. A new $root is created with the rule $root \u2192 $subroot | $subroot $subroot | \u2026, which encapsulates all combinations of multiple intents, where each combination is a concatenation of \u2265 1 intents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-Intent Projection", "sec_num": "3.2" }, { "text": "Reduction Another approach is to first greedily project to the closest canonical utterance, remove all tokens in the input utterance that appear in the canonical utterance, and repeat to find another similar canonical utterance. This iterative process of projection and reduction continues until no tokens remain or a continuation threshold is met. Figure 1 presents an example.", "cite_spans": [], "ref_spans": [ { "start": 352, "end": 360, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Multi-Intent Projection", "sec_num": "3.2" }, { "text": "To more accurately perform token removal for our Reduction method, we compare BERT representations of tokens rather than exact string matches between tokens, similar to BERTScore (Zhang et al., 2019). If the cosine similarity between two tokens meets a similarity threshold, the two tokens are treated as equivalent, and the input token is removed. This technique can better handle slight variations in word choice (e.g., \"survivor\" and \"survivors\", or \"spotted\" and \"in sight\"). We used 0.5 as the similarity threshold, but the model is not very sensitive to this value.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-Intent Projection", "sec_num": "3.2" },
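{ "text": "A minimal sketch of the projection-and-reduction loop follows, reusing the tokenizer, model, and hierarchical_project helpers sketched in Section 2. The no-progress stopping check stands in for the continuation threshold, and joining wordpieces with spaces is a simplification; both are assumptions of this sketch rather than a specification of our system.

def token_vectors(utterance):
    # Per-token BERT representations for soft token matching.
    inputs = tokenizer(utterance, return_tensors='pt', add_special_tokens=False)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state.squeeze(0)
    tokens = tokenizer.convert_ids_to_tokens(inputs['input_ids'][0])
    return list(zip(tokens, hidden))

def reduce_utterance(x, canonical, threshold=0.5):
    # Remove input tokens whose best match against the canonical
    # utterance meets the similarity threshold (in the spirit of
    # BERTScore); the remaining tokens feed the next iteration.
    canon = token_vectors(canonical)
    keep = []
    for tok, vec in token_vectors(x):
        best = max(torch.nn.functional.cosine_similarity(vec, cvec, dim=0).item()
                   for _, cvec in canon)
        if best < threshold:
            keep.append(tok)
    return ' '.join(keep)

def parse_multi_intent(x, grammar):
    # Iterate projection and reduction until the utterance is consumed
    # or a pass removes nothing.
    intents = []
    while x.strip():
        z = hierarchical_project(x, grammar)
        intents.append(z)
        reduced = reduce_utterance(x, z)
        if reduced == x:
            break
        x = reduced
    return intents", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-Intent Projection", "sec_num": "3.2" },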
1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Data", "sec_num": "4" }, { "text": "Natural language utterances in both datasets are wholly defined by its grammar. For evaluation, ISR HELI Single seq2seq (Lewis et al., 2020) 34.4 58.2 Intent proj (Marzoev et al., 2020) 82 we expand to a set of paraphrased canonical utterances using an English-to-X \u2192 X-to-English procedure similar to those used for augmentation in paraphrase datasets (Wieting and Gimpel, 2018; Hu et al., 2019) .", "cite_spans": [ { "start": 120, "end": 145, "text": "(Lewis et al., 2020) 34.4", "ref_id": null }, { "start": 163, "end": 185, "text": "(Marzoev et al., 2020)", "ref_id": "BIBREF13" }, { "start": 353, "end": 379, "text": "(Wieting and Gimpel, 2018;", "ref_id": "BIBREF17" }, { "start": 380, "end": 396, "text": "Hu et al., 2019)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "4" }, { "text": "OVERNIGHT (Wang et al., 2015 ) is a semantic parsing dataset over eight domains, including sports, restaurants, and social media. Each domain contains a grammar to generate canonical utterances and LFs, as well as natural language paraphrases. As we are interested in weakly-supervised parsing, we ignore natural language utterances in training and only use those in the test set for evaluation. The datasets we use contain grammars and natural language data for utterances with a single intent, but they lack multi-intent data. We create simulated multi-intent utterances by concatenating natural language utterances together, with target LFs as concatenations of the utterances' LFs. We enforce a limit of three intents to keep task difficulty manageable.", "cite_spans": [ { "start": 10, "end": 28, "text": "(Wang et al., 2015", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "4" }, { "text": "We present several baselines in our experiments. We train a sequence-to-sequence (seq2seq) model on sequence pairs of the form (canonical utterance, LF). At evaluation, the model is given a natural language utterance associated with a canonical utterance and evaluated based on the original LF. We make use of pre-trained BART (Lewis et al., 2020) by fine-tuning on task-specific data. Proj is the technique of projecting a natural language utterance to a canonical utterance in the grammar, described in Section 2. NT-Avg is the proposed method of averaging the representations of a non-terminal's possible values. For the single-intent OVERNIGHT datasets, we display baseline results presented in Marzoev et al. (2020) . Finally, we experiment with two methods on top of NT-Avg for multi-intent parsing -Meta-Grammar and Reduction. Table 2 presents LF exact match accuracies for our internal datasets in both single-intent and multiintent settings. We observe that projection techniques outperform seq2seq methods for singleintent, consistent with prior work (Marzoev et al., 2020) . Our proposed method (NT-Avg) achieves a sizeable improvement in ISR, but equal performance on HELI. This disparity may be due to HELI's shallow grammar, demonstrating that nonterminal averaging provides gains on domains with deep, hierarchical grammars but less on simple grammars. For multi-intent, Reduction outperforms MetaGrammar by a wide margin. Meta-Grammar must simultaneously predict the number, type, and location of intents. Reduction iteratively simplifies the process by searching for one intent at a time. 
{ "text": "We present several baselines in our experiments. We train a sequence-to-sequence (seq2seq) model on sequence pairs of the form (canonical utterance, LF). At evaluation, the model is given a natural language utterance associated with a canonical utterance and evaluated against the original LF. We make use of pre-trained BART (Lewis et al., 2020) by fine-tuning on task-specific data. Proj is the technique of projecting a natural language utterance to a canonical utterance in the grammar, described in Section 2. NT-Avg is the proposed method of averaging the representations of a non-terminal's possible values. For the single-intent OVERNIGHT datasets, we report the baseline results presented in Marzoev et al. (2020). Finally, we experiment with two methods on top of NT-Avg for multi-intent parsing: Meta-Grammar and Reduction.", "cite_spans": [ { "start": 327, "end": 347, "text": "(Lewis et al., 2020)", "ref_id": "BIBREF12" }, { "start": 699, "end": 720, "text": "Marzoev et al. (2020)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "Table 2 presents LF exact match accuracies for our internal datasets in both single-intent and multi-intent settings. We observe that projection techniques outperform seq2seq methods for single-intent, consistent with prior work (Marzoev et al., 2020). Our proposed method (NT-Avg) achieves a sizeable improvement on ISR, but equal performance on HELI. This disparity may be due to HELI's shallow grammar, suggesting that non-terminal averaging provides gains on domains with deep, hierarchical grammars but less on simple grammars. For multi-intent, Reduction outperforms Meta-Grammar by a wide margin. Meta-Grammar must simultaneously predict the number, type, and location of intents; Reduction simplifies the process by iteratively searching for one intent at a time.", "cite_spans": [ { "start": 1061, "end": 1083, "text": "(Marzoev et al., 2020)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 834, "end": 841, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "We also observe that seq2seq achieves much stronger performance for multi-intent, with accuracy similar to Reduction on ISR and much higher accuracy on HELI. We believe the improvement is due to the larger amount of data available to train seq2seq models, since we can concatenate multiple single-intent canonical utterances together to form large simulated training sets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "The OVERNIGHT dataset contains a more complex grammar and longer utterances and LFs compared to our internal datasets (Table 3). NT-Avg outperforms other approaches on single-intent utterances, similar to the results on ISR and HELI.", "cite_spans": [], "ref_spans": [ { "start": 118, "end": 127, "text": "(Table 3)", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "All systems evaluated on the multi-intent split of OVERNIGHT struggle to perform well. A system must be able to determine the number of intents in an utterance, interpret the natural language in each intent, and predict LFs that exactly match the LFs for every intent. Accuracies range between 0% and 2% (see supplementary). This demonstrates that it is non-trivial to transfer parsing systems from the single-intent setting to multi-intent. To tease out performance differences between systems, we instead evaluate a system prediction as correct if at least one predicted LF has an exact match with any one of the gold standard LFs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "For multi-intent utterances, Reduction achieves the highest accuracies. We believe the long structure of the LFs in OVERNIGHT makes them challenging for current seq2seq models to generate accurately. Meanwhile, grammar-based approaches can easily side-step this issue by producing LFs directly from the grammar, as evidenced by the higher accuracies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "An additional phenomenon appearing in all datasets is that lower layers of BERT, when used for projection, perform better than higher layers (Figure 3). However, we notice that layers 0-1 achieve higher accuracies on ISR and HELI, while layers 1-3 achieve higher accuracies on OVERNIGHT. We believe this is due to the role that context plays in each domain. In terse military domains, words often carry unambiguous meaning and require little context to understand. In more traditional domains, context is required to interpret the meaning of a word.", "cite_spans": [], "ref_spans": [ { "start": 129, "end": 138, "text": "(Figure 3)", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Results", "sec_num": "5" },
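{ "text": "The strict and relaxed evaluation criteria used above can be summarized with the short sketch below, assuming predictions and gold standards are given as lists of LF lists; the helper names are hypothetical.

def exact_match_accuracy(predictions, golds):
    # Strict: the predicted LFs must match the gold LFs exactly,
    # in number, content, and order.
    correct = sum(pred == gold for pred, gold in zip(predictions, golds))
    return correct / len(golds)

def partial_match_accuracy(predictions, golds):
    # Relaxed: a prediction counts as correct if at least one predicted
    # LF exactly matches any one of the gold standard LFs.
    correct = sum(any(p in gold for p in pred)
                  for pred, gold in zip(predictions, golds))
    return correct / len(golds)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" },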
{ "text": "We tackle multi-intent semantic parsing using weakly-supervised methods. Our results show that an iterative approach of projecting the natural utterance to a canonical utterance, followed by a token reduction step, achieves the best performance. Potential further improvement could be achieved by fine-tuning the BERT model on free text in the desired domain (e.g., military training materials) to create better utterance embeddings. Future research includes parsing more complex multi-intent utterances, borrowing ideas from dialogue systems, and capturing dependencies between intents (Gangadharaiah and Narayanaswamy, 2019).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Military Applications It is vital for military personnel to use precise language in the field to minimize confusion. This work is part of an effort to train operators of specialized military equipment to accurately communicate in search-and-rescue and aircraft management operations. Improvement in these occupations leads to better airspace safety and rescue outcomes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ethics and Broader Impacts", "sec_num": "7" }, { "text": "Broader Impacts This work has a larger societal impact outside of military domains. For example, natural language understanding systems in healthcare require the use of audio data or transcripts of patient interactions, and the collection of this sensitive data has major ethical considerations. Our technology is flexible enough to be used in these specialized domains without the need for training on sensitive data, and thus can have a positive impact in the healthcare field. Potential misuses of this technology, however, could lead to decreased privacy for individuals whose voice is recognized.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ethics and Broader Impacts", "sec_num": "7" }, { "text": "Environmental Impact As stated in the paper, our models do not require any training, which greatly reduces the number of computations and thus lessens the environmental impact of natural language technology. Instead, our models are based on pre-trained language models used in an unsupervised manner, so the only computation time comes from inference and experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ethics and Broader Impacts", "sec_num": "7" }, { "text": "System | Bas | Blo | Cal | Hou | Pub | Rec | Res | Soc | Avg
seq2seq (Lewis et al., 2020) | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
NT-Avg + Meta-Grammar | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
NT-Avg + Reduction | 0.35 | 0.35 | 0.35 | 1.30 | 1.30 | 1.95 | 1.75 | 0.30 | 0.96
Table 4: Exact match logical form accuracies against the OVERNIGHT multi-intent datasets.", "cite_spans": [ { "start": 21, "end": 41, "text": "(Lewis et al., 2020)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 217, "end": 224, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Table 4", "sec_num": null }, { "text": "We use BART-base (Lewis et al., 2020) to closely match the number of parameters and the amount of pre-training data used by BERT-base (Devlin et al., 2018), which is used for the projection approaches. BART-base uses the Transformer encoder-decoder architecture with 6 layers in the encoder and decoder, 12 attention heads in the encoder and decoder, and a hidden size of 768. We train with a batch size of 4, optimized with Adam and a learning rate of 4e-5. The model converged after an average of five epochs for the OVERNIGHT single-intent datasets and one epoch for multi-intent. The model took longer to converge on the ISR and HELI datasets, taking 20 and 40 epochs, respectively. This is likely because of the unfamiliar military terms and terse utterances. A beam size of 10 is used for all projection techniques (including the proposed approaches).", "cite_spans": [ { "start": 17, "end": 37, "text": "(Lewis et al., 2020)", "ref_id": "BIBREF12" }, { "start": 130, "end": 151, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "A Model Details", "sec_num": null },
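{ "text": "For reference, below is a minimal sketch of the seq2seq fine-tuning loop with the hyperparameters above, assuming the HuggingFace facebook/bart-base checkpoint; dataset loading and evaluation are elided, and the helper is illustrative.

import torch
from torch.utils.data import DataLoader
from transformers import BartForConditionalGeneration, BartTokenizer

tok = BartTokenizer.from_pretrained('facebook/bart-base')
bart = BartForConditionalGeneration.from_pretrained('facebook/bart-base')
optimizer = torch.optim.Adam(bart.parameters(), lr=4e-5)

def fine_tune(pairs, epochs):
    # pairs: list of (canonical utterance, logical form) string pairs.
    loader = DataLoader(pairs, batch_size=4, shuffle=True)
    bart.train()
    for _ in range(epochs):
        for sources, targets in loader:
            batch = tok(list(sources), return_tensors='pt', padding=True)
            labels = tok(list(targets), return_tensors='pt', padding=True).input_ids
            labels[labels == tok.pad_token_id] = -100  # ignore padding in the loss
            loss = bart(**batch, labels=labels).loss
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Model Details", "sec_num": null },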
{ "text": "Our results for proj differ from those presented in Marzoev et al. (2020) because we use the hierarchical projection method, which forces a search through the grammar to find the closest canonical utterances. Marzoev et al. (2020) use a linear projection method that instead compares against all canonical utterances directly; this generally performs better but is not tractable for complex grammars.", "cite_spans": [ { "start": 52, "end": 74, "text": "(Marzoev et al., 2020)", "ref_id": "BIBREF13" }, { "start": 210, "end": 231, "text": "Marzoev et al. (2020)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "A Model Details", "sec_num": null }, { "text": "All semantic parsing systems that we evaluated on OVERNIGHT struggle to parse multi-intent utterances. It is difficult to simultaneously determine the number of intents in an utterance, interpret the natural language in each intent, and predict LFs that exactly match the LFs for every intent. Table 4 presents the exact match logical form accuracies for OVERNIGHT multi-intent.", "cite_spans": [], "ref_spans": [ { "start": 313, "end": 320, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "B Full Logical Form Results", "sec_num": null } ], "back_matter": [ { "text": "This research is based upon work supported by the Naval Air Warfare Center Training Systems Division and the Department of the Navy's Small Business Innovation Research (SBIR) Program, contract N68335-19-C-0052. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Department of the Navy or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Semantic parsing via paraphrasing", "authors": [ { "first": "Jonathan", "middle": [], "last": "Berant", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1415--1425", "other_ids": { "DOI": [ "10.3115/v1/P14-1133" ] }, "num": null, "urls": [], "raw_text": "Jonathan Berant and Percy Liang. 2014. Semantic parsing via paraphrasing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1415-1425, Baltimore, Maryland. 
Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Unsupervised dual paraphrasing for two-stage semantic parsing", "authors": [ { "first": "Ruisheng", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Su", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Chenyu", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Chen", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Rao", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Yanbin", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Lu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "6806--6817", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.608" ] }, "num": null, "urls": [], "raw_text": "Ruisheng Cao, Su Zhu, Chenyu Yang, Chen Liu, Rao Ma, Yanbin Zhao, Lu Chen, and Kai Yu. 2020. Unsupervised dual paraphrasing for two-stage semantic parsing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6806-6817, Online. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. arXiv:1810.04805.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Coarse-to-fine decoding for neural semantic parsing", "authors": [ { "first": "Li", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "731--742", "other_ids": { "DOI": [ "10.18653/v1/P18-1068" ] }, "num": null, "urls": [], "raw_text": "Li Dong and Mirella Lapata. 2018. Coarse-to-fine decoding for neural semantic parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 731-742, Melbourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Joint multiple intent detection and slot labeling for goal-oriented dialog", "authors": [ { "first": "Rashmi", "middle": [], "last": "Gangadharaiah", "suffix": "" }, { "first": "Balakrishnan", "middle": [], "last": "Narayanaswamy", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "564--569", "other_ids": { "DOI": [ "10.18653/v1/N19-1055" ] }, "num": null, "urls": [], "raw_text": "Rashmi Gangadharaiah and Balakrishnan Narayanaswamy. 2019. 
Joint multiple intent detection and slot labeling for goal-oriented dialog. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 564-569, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Neural semantic parsing", "authors": [ { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Pradeep", "middle": [], "last": "Dasigi", "suffix": "" }, { "first": "Srinivasan", "middle": [], "last": "Iyer", "suffix": "" }, { "first": "Alane", "middle": [], "last": "Suhr", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts", "volume": "", "issue": "", "pages": "17--18", "other_ids": { "DOI": [ "10.18653/v1/P18-5006" ] }, "num": null, "urls": [], "raw_text": "Matt Gardner, Pradeep Dasigi, Srinivasan Iyer, Alane Suhr, and Luke Zettlemoyer. 2018. Neural semantic parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts, pages 17-18, Melbourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Semantic parsing for task oriented dialog using hierarchical representations", "authors": [ { "first": "Sonal", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Rushin", "middle": [], "last": "Shah", "suffix": "" }, { "first": "Mrinal", "middle": [], "last": "Mohit", "suffix": "" }, { "first": "Anuj", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2787--2792", "other_ids": { "DOI": [ "10.18653/v1/D18-1300" ] }, "num": null, "urls": [], "raw_text": "Sonal Gupta, Rushin Shah, Mrinal Mohit, Anuj Kumar, and Mike Lewis. 2018. Semantic parsing for task oriented dialog using hierarchical representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2787-2792, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "End-to-end neural pipeline for goal-oriented dialogue systems using GPT-2", "authors": [ { "first": "Donghoon", "middle": [], "last": "Ham", "suffix": "" }, { "first": "Jeong-Gwan", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Youngsoo", "middle": [], "last": "Jang", "suffix": "" }, { "first": "Kee-Eung", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "583--592", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.54" ] }, "num": null, "urls": [], "raw_text": "Donghoon Ham, Jeong-Gwan Lee, Youngsoo Jang, and Kee-Eung Kim. 2020. End-to-end neural pipeline for goal-oriented dialogue systems using GPT-2. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 583-592, Online. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Don't paraphrase, detect! Rapid and effective data collection for semantic parsing", 
"authors": [ { "first": "Jonathan", "middle": [], "last": "Herzig", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Berant", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "3810--3820", "other_ids": { "DOI": [ "10.18653/v1/D19-1394" ] }, "num": null, "urls": [], "raw_text": "Jonathan Herzig and Jonathan Berant. 2019. Don't paraphrase, detect! Rapid and effective data collection for semantic parsing. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3810-3820, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "ParaBank: Monolingual bitext generation and sentential paraphrasing via lexically-constrained neural machine translation", "authors": [ { "first": "Edward", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Rachel", "middle": [], "last": "Rudinger", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Post", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "33", "issue": "", "pages": "6521--6528", "other_ids": {}, "num": null, "urls": [], "raw_text": "J Edward Hu, Rachel Rudinger, Matt Post, and Benjamin Van Durme. 2019. ParaBank: Monolingual bitext generation and sentential paraphrasing via lexically-constrained neural machine translation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6521-6528.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Learning a neural semantic parser from user feedback", "authors": [ { "first": "Srinivasan", "middle": [], "last": "Iyer", "suffix": "" }, { "first": "Ioannis", "middle": [], "last": "Konstas", "suffix": "" }, { "first": "Alvin", "middle": [], "last": "Cheung", "suffix": "" }, { "first": "Jayant", "middle": [], "last": "Krishnamurthy", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "963--973", "other_ids": { "DOI": [ "10.18653/v1/P17-1089" ] }, "num": null, "urls": [], "raw_text": "Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, Jayant Krishnamurthy, and Luke Zettlemoyer. 2017. Learning a neural semantic parser from user feedback. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 963-973, Vancouver, Canada. 
Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "ConvLab: Multi-domain end-to-end dialog system platform", "authors": [ { "first": "Sungjin", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Qi", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Ryuichi", "middle": [], "last": "Takanobu", "suffix": "" }, { "first": "Zheng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yaoqin", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Jinchao", "middle": [], "last": "Li", "suffix": "" }, { "first": "Baolin", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Xiujun", "middle": [], "last": "Li", "suffix": "" }, { "first": "Minlie", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations", "volume": "", "issue": "", "pages": "64--69", "other_ids": { "DOI": [ "10.18653/v1/P19-3011" ] }, "num": null, "urls": [], "raw_text": "Sungjin Lee, Qi Zhu, Ryuichi Takanobu, Zheng Zhang, Yaoqin Zhang, Xiang Li, Jinchao Li, Baolin Peng, Xiujun Li, Minlie Huang, and Jianfeng Gao. 2019. ConvLab: Multi-domain end-to-end dialog system platform. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 64-69, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "authors": [ { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Marjan", "middle": [], "last": "Ghazvininejad", "suffix": "" }, { "first": "Abdelrahman", "middle": [], "last": "Mohamed", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "7871--7880", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.703" ] }, "num": null, "urls": [], "raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Unnatural language processing: Bridging the gap between synthetic and natural language data", "authors": [ { "first": "Alana", "middle": [], "last": "Marzoev", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Madden", "suffix": "" }, { "first": "Frans", "middle": [], "last": "Kaashoek", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Cafarella", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Andreas", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.13645" ] }, "num": null, "urls": [], "raw_text": "Alana Marzoev, Samuel Madden, M Frans Kaashoek, Michael Cafarella, and Jacob Andreas. 2020. Unnatural language processing: Bridging the gap between synthetic and natural language data. arXiv preprint arXiv:2004.13645.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Hierarchical multi-task natural language understanding for cross-domain conversational AI: HERMIT NLU", "authors": [ { "first": "Andrea", "middle": [], "last": "Vanzo", "suffix": "" }, { "first": "Emanuele", "middle": [], "last": "Bastianelli", "suffix": "" }, { "first": "Oliver", "middle": [], "last": "Lemon", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue", "volume": "", "issue": "", "pages": "254--263", "other_ids": { "DOI": [ "10.18653/v1/W19-5931" ] }, "num": null, "urls": [], "raw_text": "Andrea Vanzo, Emanuele Bastianelli, and Oliver Lemon. 2019. Hierarchical multi-task natural language understanding for cross-domain conversational AI: HERMIT NLU. In Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue, pages 254-263, Stockholm, Sweden. Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Building a semantic parser overnight", "authors": [ { "first": "Yushi", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Berant", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "1332--1342", "other_ids": { "DOI": [ "10.3115/v1/P15-1129" ] }, "num": null, "urls": [], "raw_text": "Yushi Wang, Jonathan Berant, and Percy Liang. 2015. Building a semantic parser overnight. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1332-1342, Beijing, China. Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Opportunities for advanced speech processing in military computer-based systems", "authors": [ { "first": "Clifford", "middle": [ "J" ], "last": "Weinstein", "suffix": "" } ], "year": 1990, "venue": "Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Clifford J. Weinstein. 1990. Opportunities for advanced speech processing in military computer-based systems. 
In Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley, Pennsylvania, June 24-27, 1990.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "ParaNMT-50M: Pushing the limits of paraphrastic sentence embeddings with millions of machine translations", "authors": [ { "first": "John", "middle": [], "last": "Wieting", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "451--462", "other_ids": { "DOI": [ "10.18653/v1/P18-1042" ] }, "num": null, "urls": [], "raw_text": "John Wieting and Kevin Gimpel. 2018. ParaNMT-50M: Pushing the limits of paraphrastic sentence embeddings with millions of machine translations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 451-462, Melbourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "StructVAE: Tree-structured latent variable models for semi-supervised semantic parsing", "authors": [ { "first": "Pengcheng", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Chunting", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Junxian", "middle": [], "last": "He", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "754--765", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pengcheng Yin, Chunting Zhou, Junxian He, and Graham Neubig. 2018. StructVAE: Tree-structured latent variable models for semi-supervised semantic parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 754-765.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-SQL task", "authors": [ { "first": "Tao", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Michihiro", "middle": [], "last": "Yasunaga", "suffix": "" }, { "first": "Dongxu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Zifan", "middle": [], "last": "Li", "suffix": "" }, { "first": "James", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Irene", "middle": [], "last": "Li", "suffix": "" }, { "first": "Qingning", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Shanelle", "middle": [], "last": "Roman", "suffix": "" }, { "first": "Zilin", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Dragomir", "middle": [], "last": "Radev", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "3911--3921", "other_ids": { "DOI": [ "10.18653/v1/D18-1425" ] }, "num": null, "urls": [], "raw_text": "Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang, and Dragomir Radev. 2018. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-SQL task. 
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3911-3921, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Photon: A robust cross-domain text-to-SQL system", "authors": [ { "first": "Jichuan", "middle": [], "last": "Zeng", "suffix": "" }, { "first": "Xi", "middle": [], "last": "Victoria Lin", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "R", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Irwin", "middle": [], "last": "Lyu", "suffix": "" }, { "first": "", "middle": [], "last": "King", "suffix": "" }, { "first": "C", "middle": [ "H" ], "last": "Steven", "suffix": "" }, { "first": "", "middle": [], "last": "Hoi", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2007.15280" ] }, "num": null, "urls": [], "raw_text": "Jichuan Zeng, Xi Victoria Lin, Caiming Xiong, Richard Socher, Michael R Lyu, Irwin King, and Steven CH Hoi. 2020. Photon: A robust cross-domain text-to-SQL system. arXiv preprint arXiv:2007.15280.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "BERTScore: Evaluating text generation with BERT", "authors": [ { "first": "Tianyi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Varsha", "middle": [], "last": "Kishore", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Q", "middle": [], "last": "Kilian", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Weinberger", "suffix": "" }, { "first": "", "middle": [], "last": "Artzi", "suffix": "" } ], "year": 2019, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. BERTScore: Evaluating text generation with BERT. In International Conference on Learning Representations.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Seq2SQL: Generating structured queries from natural language using reinforcement learning", "authors": [ { "first": "Victor", "middle": [], "last": "Zhong", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1709.00103" ] }, "num": null, "urls": [], "raw_text": "Victor Zhong, Caiming Xiong, and Richard Socher. 2017. Seq2SQL: Generating structured queries from natural language using reinforcement learning. arXiv preprint arXiv:1709.00103.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "uris": null, "text": "Example generated semantic parse, with an LF of the form [{\"intent\": \"survivor-in-sight\"}, {\"intent\": \"direction\", \"num\": \u2026}]. A parser must (1) understand paraphrases of canonical utterances and (2) parse multiple intents in one utterance." }, "FIGREF1": { "type_str": "figure", "num": null, "uris": null, "text": "(Top) Method to project a natural language utterance to a canonical utterance; the logical form can then be inferred directly. (Bottom) Grammar from our ISR data." }, "FIGREF3": { "type_str": "figure", "num": null, "uris": null, "text": "LF accuracies for the NT-Avg system by varying BERT layer. Lower layers result in higher accuracies, especially in the multi-intent setting." }, "TABREF1": { "html": null, "content": "
Dataset | Single-Intent Can. | Single-Intent NL | Multi-Intent Can. | Multi-Intent NL | Avg Len
ISR | 445 | 790 | 20,000 | 600 | 7.4
HELI | 45 | 170 | 8,000 | 2,000 | 3.0
OVERNIGHT | 302 | 2,416 | 20,000 | 2,000 | 10.8
", "text": "Number of canonical (Can.) and natural language utterances (NL) and average length of utterances. Each Can. and NL utterance is paired with a gold standard LF. Can. pairs are used for training and NL pairs are for evaluation. OVERNIGHT numbers are averaged over its eight subdomains.", "type_str": "table", "num": null }, "TABREF3": { "html": null, "content": "", "text": "Logical form accuracies for internal ISR and HELI datasets", "type_str": "table", "num": null }, "TABREF4": { "html": null, "content": "
", "text": "Logical form accuracies against OVERNIGHT datasets. Partial accuracies are reported for multi-intent data.", "type_str": "table", "num": null } } } }