{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:12:30.071614Z"
},
"title": "COLLOQL: Robust Cross-Domain Text-to-SQL Over Search Queries",
"authors": [
{
"first": "Karthik",
"middle": [],
"last": "Radhakrishnan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": ""
},
{
"first": "Arvind",
"middle": [],
"last": "Srikantan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": "asrikantan@salesforce.com"
},
{
"first": "Victoria",
"middle": [],
"last": "Xi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": ""
},
{
"first": "",
"middle": [],
"last": "Lin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Translating natural language utterances to executable queries is a helpful technique in making the vast amount of data stored in relational databases accessible to a wider range of nontech-savvy end users. Prior work in this area has largely focused on textual input that is linguistically correct and semantically unambiguous. However, real-world user queries are often succinct, colloquial, and noisy, resembling the input of a search engine. In this work, we introduce data augmentation techniques and a sampling-based content-aware BERT model (COLLOQL) to achieve robust text-to-SQL modeling over natural language search (NLS) questions. Due to the lack of evaluation data, we curate a new dataset of NLS questions and demonstrate the efficacy of our approach. COLLOQL's superior performance extends to well-formed text, achieving 84.9% (logical) and 90.7% (execution) accuracy on the WikiSQL dataset, making it, to the best of our knowledge, the highest performing model that does not use execution guided decoding.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Translating natural language utterances to executable queries is a helpful technique in making the vast amount of data stored in relational databases accessible to a wider range of nontech-savvy end users. Prior work in this area has largely focused on textual input that is linguistically correct and semantically unambiguous. However, real-world user queries are often succinct, colloquial, and noisy, resembling the input of a search engine. In this work, we introduce data augmentation techniques and a sampling-based content-aware BERT model (COLLOQL) to achieve robust text-to-SQL modeling over natural language search (NLS) questions. Due to the lack of evaluation data, we curate a new dataset of NLS questions and demonstrate the efficacy of our approach. COLLOQL's superior performance extends to well-formed text, achieving 84.9% (logical) and 90.7% (execution) accuracy on the WikiSQL dataset, making it, to the best of our knowledge, the highest performing model that does not use execution guided decoding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Relational databases store a vast amount of the world's data and are typically accessed via structured query languages like SQL. A natural language interface to these databases (NLIDB) could significantly improve the accessibility of this data by allowing users to retrieve and utilize the information without any programming expertise. With the release of large-scale datasets (Zhong et al., 2017; Finegan-Dollak et al., 2018; Yu et al., 2018b) , this task has gained a lot of attention and has been widely studied in recent years.",
"cite_spans": [
{
"start": 378,
"end": 398,
"text": "(Zhong et al., 2017;",
"ref_id": "BIBREF27"
},
{
"start": 399,
"end": 427,
"text": "Finegan-Dollak et al., 2018;",
"ref_id": "BIBREF4"
},
{
"start": 428,
"end": 445,
"text": "Yu et al., 2018b)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Prior research has primarily focused on translating grammatical, complete sentences to queries. However, an internal user survey on the search service of a major customer relationship management (CRM) platform 1 revealed that users have a tendency to communicate in a colloquial form which could vary from using only keywords (\"player 42\") to very short phrases (\"show player 42\") to complete sentences (\"Who is the player who wears Jersey 42?\"). Apart from variation in style, users dropping content words from their searches in the interest of brevity also has the potential consequence of making their questions ambiguous. This could render the task unsolvable even to models accustomed to the NLS style of text. For example, in Figure 1 , without the word \"Jersey\", it is impossible to identify which column's value (Id or Jersey) must equal 42.",
"cite_spans": [],
"ref_spans": [
{
"start": 732,
"end": 740,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we show that Text2SQL systems trained on only complete sentences struggle to adapt to the noisy keyword/short phrasal style of questions. To combat this, we introduce different data augmentation strategies inspired from our user search patterns and style. To tackle the induced ambiguity, a potential solution is to utilize the table content by allowing the model to scan the table for different terms present in the question and utilize that information to disambiguate (If the token \"42\" was only found in the Jersey column, then Jersey must be the column equal to 42). Though effective, this approach could become prohibitively expensive (in terms of inference time or memory required) on large tables as the model would have to search over the entire of the table content for every question.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We hypothesize that in most cases, the model only needs samples from the table content and not the exact rows that match tokens in the NLS question to disambiguate columns. For example, if the Id column contained alpha-numeric IDs, Player and Nationality contained strings, and Jersey contained two digit numbers, then Jersey must be the column equal to 42. Sampling alleviates the need of a full table scan for every question. The samples for each column could be generated offline and remain unchanged across questions or periodically refreshed (to reflect potential distribution shifts in the table or user queries), allowing for adaptation and personalization without retraining the model. In summary, our contributions are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. We augment the well-formed WikiSQL dataset with synthetic search-style questions to adapt to short, colloquial input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. We propose new models which incorporate table content in a BERT encoder via two sampling strategies to handle ambiguous questions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "3. We perform an in-depth qualitative and quantitative (accuracy, inference time, memory) analysis to show the efficacy of each content sampling strategy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "4. We curate a dataset of 400 questions to benchmark performance of Text-to-SQL models in this setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Apart from adapting to NLS style questions, COLLOQL also achieves state-of-the-art performance on the original WikiSQL (Zhong et al., 2017) dataset, outperforming all baselines that do not use execution guided decoding. We base our work off SQLova (Hwang et al., 2019) but our methods are generalizable to other approaches 2 .",
"cite_spans": [
{
"start": 119,
"end": 139,
"text": "(Zhong et al., 2017)",
"ref_id": "BIBREF27"
},
{
"start": 248,
"end": 268,
"text": "(Hwang et al., 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Text-to-SQL approaches for the WikiSQL benchmark Text-to-SQL falls under a broader class of semantic parsing tasks and has been widely studied in the NLP and database communities. While early works have focused on pattern-matching and rule-based techniques (Androutsopoulos et al., 1995; Li and Jagadish, 2014; Setlur et al., 2016) , with the introduction of large scale datasets such as WikiSQL (Zhong et al., 2017) , recent works have focused on neural methods for generating SQL. They can be broadly categorized into a few themes -sequence to sequence (Seq2Seq), sequence to tree (Seq2Tree), and SQL-Sketch (logical form) methods. Seq2Seq models frame the task as an encoderdecoder problem by trying to generate the SQL query token-by-token from the input question. However, as noted by Xu et al. (2018) these models suffer from the \"order matters\" issue where the model is forced to match the ordering of the where clauses. Zhong et al. (2017) employ reinforcement learning based method to overcome this issue but the gains from this has been limited as noted in Xu et al. (2018) . Seq2Tree models generate the SQL query as an abstract syntax tree (AST) instead of a token sequence Wang et al., 2020) . These approaches define a generation grammar for SQL and learn to output the action sequence for constructing the AST (Yin and Neubig, 2018) . Seq2Tree approaches are widely adopted for benchmarks that contain complex SQL queries (Yu et al., 2018b) as the syntactic constraints they adopt are effective at pruning the output search space and capturing structural dependencies. However, they do not show much advantage on the WikiSQL benchmark where the SQL ASTs are largely flat. SQLNet (Xu et al., 2018) introduces the concept of a SQL-Sketch, where it generates a sketch capturing the salient elements of the query as opposed to directly generating the query itself. SQLNet uses LSTMs to encode the question and headers and employs column attention to predict different components of the SQL-Sketch. As shown in Figure 2, the query is decomposed into different components which are predicted individually. Type-SQL (Yu et al., 2018a) extends upon this approach by augmenting each token in the question with its type (whether it resembles the name of the column, FreeBase entity type, etc). SQLova (Hwang et al., 2019) replaces the LSTMs encoder from SQLNet and uses BERT to encode the question and headers jointly. Unlike SQLNet, SQLova does not share any parameters in the decoders and identifies the where clause values using span detection instead of pointer generators. HydraNet breaks down the problem into column-wise ranking and decoding and assembles the outputs from each column to create the SQL query.",
"cite_spans": [
{
"start": 257,
"end": 287,
"text": "(Androutsopoulos et al., 1995;",
"ref_id": "BIBREF0"
},
{
"start": 288,
"end": 310,
"text": "Li and Jagadish, 2014;",
"ref_id": "BIBREF10"
},
{
"start": 311,
"end": 331,
"text": "Setlur et al., 2016)",
"ref_id": "BIBREF13"
},
{
"start": 396,
"end": 416,
"text": "(Zhong et al., 2017)",
"ref_id": "BIBREF27"
},
{
"start": 790,
"end": 806,
"text": "Xu et al. (2018)",
"ref_id": "BIBREF18"
},
{
"start": 928,
"end": 947,
"text": "Zhong et al. (2017)",
"ref_id": "BIBREF27"
},
{
"start": 1067,
"end": 1083,
"text": "Xu et al. (2018)",
"ref_id": "BIBREF18"
},
{
"start": 1186,
"end": 1204,
"text": "Wang et al., 2020)",
"ref_id": "BIBREF16"
},
{
"start": 1325,
"end": 1347,
"text": "(Yin and Neubig, 2018)",
"ref_id": "BIBREF21"
},
{
"start": 1437,
"end": 1455,
"text": "(Yu et al., 2018b)",
"ref_id": "BIBREF24"
},
{
"start": 1694,
"end": 1711,
"text": "(Xu et al., 2018)",
"ref_id": "BIBREF18"
},
{
"start": 2124,
"end": 2142,
"text": "(Yu et al., 2018a)",
"ref_id": "BIBREF23"
},
{
"start": 2306,
"end": 2326,
"text": "(Hwang et al., 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 2021,
"end": 2027,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Text-to-SQL with table content Recent works like NL2SQL-RULE , RAT-SQL (Wang et al., 2020) and Photon (Zeng et al., 2020) have looked into incorporating table content into the SQL generation. NL2SQL-RULE augments BERT representations with mark vectors for each question and table header token to indicate a match across the two parts. Photon only incorporates the content of a limited set of categorical fields when there is an exact match with a question token. Unlike NL2SQL-RULE, ColloQL includes table content in the BERT encoder allowing it to form content-enhanced question and header representations and unlike Photon, ColloQL incorporates content for all columns and includes samples even when there is not an exact match to disambiguate columns effectively. TaBERT (Yin et al., 2020) lifted the idea further by pre-training joint representation of text and table taking into account row subsampled in a random or relevance-based manner. The pre-trained joint representation has been shown to outperform vanilla language models in several table QA and semantic parsing tasks.",
"cite_spans": [
{
"start": 71,
"end": 90,
"text": "(Wang et al., 2020)",
"ref_id": "BIBREF16"
},
{
"start": 102,
"end": 121,
"text": "(Zeng et al., 2020)",
"ref_id": "BIBREF25"
},
{
"start": 774,
"end": 792,
"text": "(Yin et al., 2020)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Text-to-SQL with execution guided decoding One common theme across all the high performing models on WikiSQL is that they all employ Execution Guided (EG) decoding. First introduced by Wang et al. (2018) , EG is a technique where partial SQL queries are executed and their results are used to guide the decoding process. While EG has been shown to boost accuracy significantly, we do not apply execution guided decoding on our models for two reasons: Firstly, most EG methods modify the predicted query based on whether an empty set is returned. While this works well in the WikiSQL setting, having no results is often not due to an erroneous query. It is not uncommon for users to issue searches like \"my escalated support cases\"(with the expectation of surfacing zero records) or \"John Doe leads\"(to ensure that a record does not already exist before creating one) and we wanted to eliminate the reliance on database outputs to translate a query correctly. Secondly, database tables could have over 1M records and performing multiple database executions for every query could be expensive and is not always feasible whilst keeping up with the latency requirements of clients.",
"cite_spans": [
{
"start": 185,
"end": 203,
"text": "Wang et al. (2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Text-to-SQL with noisy user input While recent text-to-SQL research typically focus on benchmark datasets with complete and grammatical input, noisy user queries are commonly encountered in practical NLIDBs. Previous work have proposed several ways to address this issue. Zettlemoyer and Collins (2007) introduced non-standard combinators to a combinatorial categorical grammar (GGG) based semantic parser to handle flexible word order and telegraphic language. Sajjad et al. (2012) and Yao et al. (2019a,b) developed interactive semantic parsing models that generate clarification questions for user to complete their underspecified queries. Arthur et al. (2015) paraphrases an ambiguous input into a less ambiguous form. Setlur et al. (2019) generates default logical forms for underspecified input. Zeng et al. (2020) synthesized a new dataset and trained question filter to identify noisy user input and prompt user to rephrase. Our work focus on handling short user utterances typically found in the search service of Salesforce CRM, where sampling-based content-aware models are effective at resolving most ambiguities.",
"cite_spans": [
{
"start": 272,
"end": 302,
"text": "Zettlemoyer and Collins (2007)",
"ref_id": "BIBREF26"
},
{
"start": 462,
"end": 482,
"text": "Sajjad et al. (2012)",
"ref_id": "BIBREF12"
},
{
"start": 487,
"end": 507,
"text": "Yao et al. (2019a,b)",
"ref_id": null
},
{
"start": 643,
"end": 663,
"text": "Arthur et al. (2015)",
"ref_id": "BIBREF2"
},
{
"start": 723,
"end": 743,
"text": "Setlur et al. (2019)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The Text2SQL task is to generate a SQL query from a natural language question and the database schema/content. In this work, we use the Wik-iSQL dataset (Zhong et al., 2017) as it most closely matches the queries we expect to serve in a CRM. Our users typically don't issue linguistically complex queries requiring joins or nesting but instead focus on filtering a single table based on certain clauses.",
"cite_spans": [
{
"start": 153,
"end": 173,
"text": "(Zhong et al., 2017)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task and Datasets",
"sec_num": "3"
},
{
"text": "WikiSQL contains over 80K natural language questions distributed across 24K tables and their gold SQL queries. The performance is typically evaluated on two different types of accuracies -Logical Form (LF) and Execution (EX). LF measures if the generated query exactly matches the gold query while EX executes the predicted and gold queries on the database and verifies if the answers returned by both are equal. Note that LF is a stricter metric as many different SQL queries could produce the same output. which deals have an expected revenue of over 10 number of deals closed in 2019 how many deals have closing year as 2019 Table 1 : WikiSQL questions and their NLS-style counterparts.",
"cite_spans": [],
"ref_spans": [
{
"start": 628,
"end": 635,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Task and Datasets",
"sec_num": "3"
},
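To make the two metrics concrete, here is a minimal sketch of LF and EX scoring, assuming an in-memory sqlite3 copy of each table; the string-level LF comparison is a simplification of WikiSQL's component-wise sketch matching:

```python
import sqlite3

def lf_accuracy(pred_queries, gold_queries):
    # Logical Form: the predicted query must exactly match the gold query
    # (here after trivial whitespace/case canonicalization).
    hits = sum(p.strip().lower() == g.strip().lower()
               for p, g in zip(pred_queries, gold_queries))
    return hits / len(gold_queries)

def ex_accuracy(pred_queries, gold_queries, conn):
    # Execution: run both queries and compare the returned answers,
    # order-insensitively; a query that fails to execute counts as wrong.
    def run(query):
        try:
            return sorted(conn.execute(query).fetchall())
        except sqlite3.Error:
            return None
    hits = 0
    for p, g in zip(pred_queries, gold_queries):
        rp, rg = run(p), run(g)
        hits += rp is not None and rp == rg
    return hits / len(gold_queries)
```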
{
"text": "The WikiSQL dataset mostly comprises of verbose questions which differ in style as compared to the NLS questions issued by our users. Table 1 shows NLS questions and their WikiSQL-style equivalents. To account for the differences in style, we augment the WikiSQL dataset with our synthetic data to simulate real-user NLS questions which is generated as follows.",
"cite_spans": [],
"ref_spans": [
{
"start": 134,
"end": 142,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Task and Datasets",
"sec_num": "3"
},
{
"text": "Synthesizing user utterances from gold SQL labels Since WikiSQL contains the gold labels for the SQL sketch, we can use this data to generate NLS-style questions. By analyzing our user search queries (which resemble those shown in Table 1 ) we built question templates which we fill based on the gold SQL-Sketch. Some examples include shuffling the ordering of where conditions (users apply filters in different order), interchange ordering of column names and values (some users type \"US region cases\" while others type \"region US cases\"), and insert the select column name in the beginning or the end of a question (\"John Doe accounts\" vs \"accounts John Doe\"). The synthetic data is used in conjunction with clean well-formed queries from the original dataset, allowing the model to generalize to other queries not present in the templates. An example of synthetic utterances generated this way is shown below.",
"cite_spans": [],
"ref_spans": [
{
"start": 231,
"end": 238,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Task and Datasets",
"sec_num": "3"
},
{
"text": "Original Query -Who is the player of Australian nationality that wears jersey number 42? Generated Queriesplayer jersey 42 australian nationality; 42 jersey australian nationality player; australian nationality jersey 42 player; . . .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task and Datasets",
"sec_num": "3"
},
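A minimal sketch of this template-based generation; the sketch fields (select_col, conds) and the sampling probabilities are illustrative stand-ins, not the exact templates built from our user queries:

```python
import random

def nls_questions(select_col, conds, n=3, seed=0):
    """Generate short NLS-style questions from a gold SQL sketch.
    conds: list of (where_column, value) pairs."""
    rng = random.Random(seed)
    questions = set()
    for _ in range(50 * n):
        # Users apply filters in different orders.
        shuffled = rng.sample(conds, len(conds))
        parts = []
        for col, val in shuffled:
            # Some users type "US region cases", others "region US cases".
            parts.append(f"{col} {val}" if rng.random() < 0.5 else f"{val} {col}")
        body = " ".join(parts)
        # The select column name goes at the beginning or the end.
        questions.add(f"{select_col} {body}" if rng.random() < 0.5
                      else f"{body} {select_col}")
        if len(questions) >= n:
            break
    return sorted(questions)

# nls_questions("player", [("jersey", "42"), ("nationality", "australian")])
# -> ["42 jersey australian nationality player", "player jersey 42 ...", ...]
```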
{
"text": "We identify popular query ngrams when the conditional operator in the SQL-Sketch corresponds to either \">\" or \"<\" and randomly replace these ngrams (\"bigger than\", \"larger than\", etc) with the operator symbols, allowing our model to properly interpret them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supporting relational symbols in user utterance",
"sec_num": null
},
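A sketch of this replacement; the phrase lists here are illustrative stand-ins for the popular n-grams mined from the data:

```python
import random

# Illustrative n-grams observed when the gold operator is ">" or "<".
OP_PHRASES = {
    ">": ["bigger than", "larger than", "greater than", "more than"],
    "<": ["smaller than", "less than", "fewer than"],
}

def inject_operator_symbols(question, gold_op, p=0.5, seed=None):
    """Randomly swap comparative phrases for the operator symbol so the
    model learns to read questions like 'deals with revenue > 10'."""
    rng = random.Random(seed)
    for phrase in OP_PHRASES.get(gold_op, []):
        if phrase in question and rng.random() < p:
            question = question.replace(phrase, gold_op)
    return question
```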
{
"text": "Controlled question simplification Since Wik-iSQL contains no keyword-based questions and only a small portion of questions that are succinct enough to require reasoning over the table content, we employ a sentence simplification model followed by manual verification to create a test dataset to evaluate performance on NLS questions. A common user behavior is to drop unnecessary words from complete sentences to create shorter questions. We simulate this behavior by simplifying/compressing sentences to reduce verbosity. Note that keyword queries can be viewed as an extreme case of sentence simplification where only the required keywords are retained.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supporting relational symbols in user utterance",
"sec_num": null
},
{
"text": "We make use of the controllable sentence simplifier by Handler and O'Connor (2019) to compress sentences to a desired length whilst retaining a specified set of keywords. We specify the list of keywords to be the header name of the select column, the values in the where columns (we ignore the header names for the where columns as users tend to omit them from their queries).",
"cite_spans": [
{
"start": 55,
"end": 82,
"text": "Handler and O'Connor (2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Supporting relational symbols in user utterance",
"sec_num": null
},
{
"text": "In total, we create two datasets: short questions with gold SQL labels and replacement of relation symbols, and simple questions with controlled sentence simplification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supporting relational symbols in user utterance",
"sec_num": null
},
{
"text": "Manually verified test set We create a highquality test set by manually verifying a subset of simple questions 3 . A potential problem with sentence simplification models is ensuring that the shortened version still has enough information to execute the query correctly. This could vary based on the table content and is difficult to identify if the query is impossible to be executed correctly. We had a team of data scientists and engineers proficient in SQL to verify/correct outputs produced by the sentence simplification model and generated 400 queries for testing. We show examples in this dataset and report our manual quality evaluation in \u00a7 A.1. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supporting relational symbols in user utterance",
"sec_num": null
},
{
"text": "Following Xu et al. (2018) and Hwang et al. 2019, we decompose the SQL generation task into 6 different subtasks -one for each component of the SQL-Sketch. These subtasks all share a common encoder but use different decoder layers. The encoder is a BERT model (Devlin et al., 2018) which produces contextualized representations of the question, headers and the decoders largely use a task-specific LSTM with column-attention. Column-attention (Xu et al., 2018 ) is a mechanism where each header attends over all query tokens to produce a single representation over which a dense layer is used to predict probabilities. The select, aggregation, where-num, and whereoperator branches use LSTMs + Column-attention followed by a softmax layer to output probabilities. The where-column branch is similar but uses a sigmoid instead as multiple columns could appear in the where clause and the where-value outputs start-end spans for the values from the question. Figure 3 highlights the architecture of our model. We retain the same encoder-decoder architecture as SQLova as our main contribution lies in the data augmentation and content sampling techniques to handle NLS questions.",
"cite_spans": [
{
"start": 10,
"end": 26,
"text": "Xu et al. (2018)",
"ref_id": "BIBREF18"
},
{
"start": 260,
"end": 281,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF3"
},
{
"start": 443,
"end": 459,
"text": "(Xu et al., 2018",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 957,
"end": 965,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "4"
},
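A minimal PyTorch sketch of column attention as described by Xu et al. (2018); the dimensions and the final classifier are illustrative and vary per subtask:

```python
import torch
import torch.nn as nn

class ColumnAttention(nn.Module):
    """Each header attends over all question tokens; the attended summary
    and the header feed a dense scoring layer."""

    def __init__(self, hidden):
        super().__init__()
        self.att = nn.Linear(hidden, hidden)
        self.out = nn.Linear(2 * hidden, 1)

    def forward(self, question, headers):
        # question: (batch, q_len, hidden); headers: (batch, n_cols, hidden)
        scores = headers @ self.att(question).transpose(1, 2)  # (batch, n_cols, q_len)
        weights = torch.softmax(scores, dim=-1)
        context = weights @ question                           # (batch, n_cols, hidden)
        # Per-column logits; a softmax over columns gives the select-column
        # distribution, while a sigmoid gives where-column probabilities.
        return self.out(torch.cat([headers, context], dim=-1)).squeeze(-1)
```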
{
"text": "As highlighted previously, table content could be a useful feature in helping the model disambiguate between different columns. Consider a table of tennis players as shown below. Now, consider a question \"courts with Rafael Nadal as winner\". A model which isn't informed about the content of the table cannot easily understand that Rafael Nadal needs to be the where clause value for Player and winner for the Result column. Allowing the model to scan the table for entities like \"Rafael Nadal\" or \"winner\" could help the model incorporate table content effectively. Consider another question \"courts with Roger Federer as winner\". It is intuitive that this query follows the same structure as the previous, except that the required value is now \"Roger Federer\". However, \"Roger Federer\" is not present in the table. We hypothesize that while table content is useful to the model, it does not need to be relevant to the query. The model, when given random samples of values for each column can infer the role of a particular column and generalize to unseen values which are similar to the column samples. In this work, we experiment with two sampling techniques -random and relevance sampling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Content Incorporation",
"sec_num": "4.1"
},
{
"text": "Random sampling uses a fixed set of question agnostic column values sampled randomly (without replacement) and does not require access to the table once the samples are created. Since the sampling process can be done entirely offline, it adds negligible memory and time to the query execution. Additionally, the model can now be used in privacy sensitive scenarios as it does not access the table content and the samples could be manually configured. The model, now being content informed, performs better than its non-content counterparts whilst being more efficient than its full table con-tent counterparts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Random Sampling",
"sec_num": "4.1.1"
},
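A sketch of the offline sampling step; the table layout (column name mapped to a list of cell values) is assumed for illustration:

```python
import random

def build_random_samples(table, k=3, seed=13):
    """Draw k question-agnostic values per column, once, offline. The same
    samples are reused for every question and can be refreshed periodically
    or configured manually in privacy-sensitive deployments."""
    rng = random.Random(seed)
    samples = {}
    for col, values in table.items():
        distinct = list(dict.fromkeys(map(str, values)))  # dedupe, keep order
        samples[col] = rng.sample(distinct, min(k, len(distinct)))
    return samples
```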
{
"text": "Relevance sampling is used in cases where access to table is permitted and it includes a combination of samples relevant to question tokens and random samples. We index all cells of a table and perform a keyword search in the question to identify most relevant cells using FlashText (Singh, 2017) and include them as samples. In situations where the number of keyword matches are fewer than intended for a column or there are no matches, we fallback on random sampling to select the remaining samples.",
"cite_spans": [
{
"start": 283,
"end": 296,
"text": "(Singh, 2017)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relevance Sampling",
"sec_num": "4.1.2"
},
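A sketch of relevance sampling with FlashText, under the same assumed table layout; keyword collisions across columns are ignored here for brevity:

```python
import random
from flashtext import KeywordProcessor

def build_relevance_samples(question, table, k=3, seed=13):
    """Prefer cells whose text appears in the question; fall back on
    random sampling when a column has fewer than k matches."""
    rng = random.Random(seed)
    kp = KeywordProcessor()
    for col, values in table.items():
        for v in {str(v) for v in values}:
            kp.add_keyword(v, (col, v))       # return (column, cell) on a match
    matched = {}
    for col, cell in kp.extract_keywords(question):
        matched.setdefault(col, []).append(cell)
    samples = {}
    for col, values in table.items():
        picked = matched.get(col, [])[:k]
        rest = [v for v in {str(v) for v in values} if v not in picked]
        if len(picked) < k and rest:          # fallback on random sampling
            picked += rng.sample(rest, min(k - len(picked), len(rest)))
        samples[col] = picked
    return samples
```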
{
"text": "To illustrate the importance of including random samples in the relevance sampling strategy, consider the following example:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relevance Sampling",
"sec_num": "4.1.2"
},
{
"text": "Question -Which countries hosted the MHL league?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relevance Sampling",
"sec_num": "4.1.2"
},
{
"text": "League values -NHL, MLB, NBA",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relevance Sampling",
"sec_num": "4.1.2"
},
{
"text": "Photon (Zeng et al., 2020) , a model which only includes up to a single matched value, interprets this query incorrectly (Select country where league = MHL league). Its value matching approach retrieves an empty set to augment the table. 4 Our model with relevance sampling tackles cases like this successfully (Select country where league = MHL) as NHL, MLB, and NBA were included as samples because of the fallback on random sampling. Including random samples improves the model's ability to interpret questions that have values not directly found in the table.",
"cite_spans": [
{
"start": 7,
"end": 26,
"text": "(Zeng et al., 2020)",
"ref_id": "BIBREF25"
},
{
"start": 238,
"end": 239,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relevance Sampling",
"sec_num": "4.1.2"
},
{
"text": "The addition of random samples also allows the model to discriminate between columns effectively. Consider Question 4 from Table 2 , the question is ambiguous without table content because it is unclear if the column to be selected is Place or Country. The pattern \"where are. . . from?\" indicates that the user's intent is to find a location and both column names seem like a reasonable choice (Place is a synonym for location and Country is a location). However, when augmented with random column samples, we see that the Place column only contains numeric values and is used as the synonym of \"rank\" in this table. Figure 3 shows our input representation to the BERT model. Our representation bears similarity to Photon where the content values are concatenated along with the headers and the question separated by special tokens. However, Photon only tackles columns with picklists (categorical columns storing small fixed set of values) while we support numeric and free-form text columns as well. Additionally, as mentioned above, since Photon only incorporates a single matched value, it doesn't gracefully interpret all questions.",
"cite_spans": [],
"ref_spans": [
{
"start": 123,
"end": 130,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 618,
"end": 626,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Relevance Sampling",
"sec_num": "4.1.2"
},
{
"text": "We concatenate the column samples to the headers with special delimiters and experiment with 1,3,5 samples for each column. The number of samples is currently limited by the maximum sequence length supported by BERT models and in the future we hope to experiment with operating on each column individually and diversity based sampling to extract the most distinctive samples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relevance Sampling",
"sec_num": "4.1.2"
},
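A sketch of the serialization, assuming a WordPiece tokenizer; the \"|\" delimiter between a header and its samples is an illustrative stand-in for the special delimiter tokens:

```python
def build_input_tokens(question, headers, samples, tokenizer, max_len=512):
    """Flatten the question, headers, and per-column samples into one
    token sequence for the BERT encoder."""
    tokens = ["[CLS]"] + tokenizer.tokenize(question) + ["[SEP]"]
    for col in headers:
        piece = tokenizer.tokenize(col)
        for value in samples.get(col, []):
            piece += ["|"] + tokenizer.tokenize(str(value))  # delimit each sample
        tokens += piece + ["[SEP]"]
    return tokens[:max_len]  # BERT-base caps the input at 512 tokens
```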
{
"text": "We use the base version of BERT in all our experiments and made necessary changes for sampling on the original SQLova codebase. We use Adam (Kingma and Ba, 2019) optimizer with a learning rate of 1e-3 for the decoder layers and 1e-5 for the BERT model. Table 2 shows some qualitative examples from our model when augmented with 3 values included for each column. The first two examples are based on random sampling and the latter two are based on relevance sampling. Our model is able to correctly resolve phrases such as \"Maria Herrera\" and \"BMW\" to the right columns when the corresponding values were not seen during training or inference. Consider the first two examples with different modifiers of \"rider\", leveraging the sampled values, our model correctly matches \"BMW\" to Manufacturer (column storing brand name like values) and \"Maria Herrera\" to Rider (column storing human name like values).",
"cite_spans": [],
"ref_spans": [
{
"start": 253,
"end": 260,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experiment Setup",
"sec_num": "5"
},
{
"text": "We show performance of our model evaluated on the original WikiSQL dev dataset under different sampling settings. Owing to the 512 token limit, we only sample upto 5 values per column in ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of Random Sampling",
"sec_num": "6.2"
},
{
"text": "In addition to random sampling, we also provide results on a model that finds the most relevant samples to the question. In Table 4 , we compare our results with NL2SQL-RULE (Guo and Gao, 2019) (uses entire table content) and EM:1 (including a * Due to unavailability of code, HydraNet numbers are only reported on datasets used in their paper single exactly matched value), the content incorporation strategy adopted by Photon (Zeng et al., 2020) . Since WikiSQL does not distinguish categorical columns, we applied the exact match to all columns. Our model achieves 85.2% logical form and 90.65% execution accuracy on the original WikiSQL dataset outperforming all models without EG. We also studied the memory and time footprint for indexing cells with increasing table sizes by benchmarking the performance of random and relevance sampling on very large tables. To simulate real-world data, we used IMDB movie database -a large-scale database with tables spanning over 7M rows containing movie metadata.",
"cite_spans": [
{
"start": 428,
"end": 447,
"text": "(Zeng et al., 2020)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 124,
"end": 131,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Effect of Relevance Sampling",
"sec_num": "6.3"
},
{
"text": "The random sampling method is agnostic to table size as samples are generated just once while the relevance sampling method scans the table to pick the best samples for each query. The results are shown in Table 5 .",
"cite_spans": [],
"ref_spans": [
{
"start": 206,
"end": 213,
"text": "Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Effect of Relevance Sampling",
"sec_num": "6.3"
},
{
"text": "To measure the efficacy of content augmentation, we compared COLLOQL with other works on our dataset of 400 simplified queries which was generated by the sentence simplification model and verified/corrected by a team of data scientists and engineers. This dataset largely contains queries in which the where columns are not explicitly mentioned in the query and requires the model to infer them. We can see from Table 6 that a model uninformed of the content drops in accuracy (especially in the where column prediction) while COL-LOQL retains its performance. ",
"cite_spans": [],
"ref_spans": [
{
"start": 412,
"end": 419,
"text": "Table 6",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Performance on Simple Questions",
"sec_num": "6.4"
},
{
"text": "Since SQLova was originally trained with complete sentences, it does not adapt well to short questions. Retraining the same model with augmented data from our templates recovers the performance (tested using short questions). Additionally, the augmentation also results in improved generalization resulting in a minor LF accuracy improvement on the original dev data as shown in Table 7 . ",
"cite_spans": [],
"ref_spans": [
{
"start": 379,
"end": 386,
"text": "Table 7",
"ref_id": "TABREF12"
}
],
"eq_spans": [],
"section": "Effect of Augmentation",
"sec_num": "6.5"
},
{
"text": "We classified the errors made by our model on the ColloQL curated dataset into two major categories: Aggregation -Given that WikiSQL contains noisy labels for aggregation component (Hwang et al., 2019) and the model was optimized for accuracy on WikiSQL, there are some errors in predicting this slot.",
"cite_spans": [
{
"start": 181,
"end": 201,
"text": "(Hwang et al., 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "7"
},
{
"text": "Select Columns -The simplified questions are often more ambiguous when predicting whether a column is a target to be selected or is used in a filtering condition (e.g. for the question \"smallest tiesplayed 6 years\", the model interprets it as SELECT MIN(years) WHERE tiesplayed = 6 while the correct query is SELECT MIN(tiesplayed) WHERE years = 6). Additionally, we noticed that our annotators simplified column headers like \"shortstop\" and \"rightfielder\" to \"SS\" and \"RF\", making the question very difficult to solve.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "7"
},
{
"text": "In this work we tackled the task of converting noisy (short, potentially ambiguous) search-like (NLS) questions to SQL queries. We introduced data augmentation strategies to adapt to the NLS style of text and a novel content enhancement to BERT via two sampling strategies -random and relevance sampling. Random sampling overcomes some of the performance / privacy challenges of incorporating table content and relevance sampling achieves state-of-the-art performance when access to table content is permitted. Finally, we also curated a new held-out dataset to evaluate performance against NLS questions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "8"
},
{
"text": "In the future, we hope to explore different sampling techniques (based on user history, sampling to maximize discernment between columns) to enhance performance. Besides, our approach and dataset mainly target telegraphic queries that can be effectively disambiguated with table contents, which frequency occur in our search service. We plan to extend our work to handle other types of input ambiguities and other application domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "8"
},
{
"text": "One of the authors who did not participate in the dataset annotation randomly sampled 16/400 examples and manually checked the quality. 4/16 annotations were found to have issues in the natural language annotation. Table 9 shows examples from the simple question dataset. The first 4 examples are correct, highquality annotations while the bottom 4 are those with issues found during manual check. The highquality simple question annotations are readable and on average have a smaller compression ratio compared to the noisy annotations.",
"cite_spans": [],
"ref_spans": [
{
"start": 215,
"end": 222,
"text": "Table 9",
"ref_id": "TABREF15"
}
],
"eq_spans": [],
"section": "A.1 Test Set Quality",
"sec_num": null
},
{
"text": "We noticed that some errors in the WikiSQL annotation (Hwang et al., 2019) were corrected when the simplified questions were produced, but some perpetuated through. In the second example, the annotator corrected spelling errors in the original WikiSQL annotation. However, in the 7th example, the original question misinterpreted Year acquired as a quantity and our simplified question inherited that error. Similarly, in the 8th example, the original question misinterpreted the field Finalists as \"score\" (it should represent \"number of finalists\") and our simplified question inherited it.",
"cite_spans": [
{
"start": 54,
"end": 74,
"text": "(Hwang et al., 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A.1 Test Set Quality",
"sec_num": null
},
{
"text": "The 5th and 6th examples have unreadable questions as a result of sentence simplification (but our annotators still labeled them as correct). This is an artifact of the dataset as such unreadable, keywordstyle queries may favor models that leverage table content to identify the columns. On the other hand, such queries could be useful as being able to interpret them may give users more flexibility when searching the content of a database.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.1 Test Set Quality",
"sec_num": null
},
{
"text": "What is the amount of trees, that require replacement when the district is motovilikhinsky? ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Original",
"sec_num": null
},
{
"text": "https://www.salesforce.com/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Our code and annotated data can be found at https://github.com/karthikradhakrishnan96/ ColloQL.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Sentence simplification creates a diverse set of examples which contains some of those generated by gold SQL label.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We ran the evaluation on Photon's demo page.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/salesforce/WikiSQL",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Christian Posse and Mario Rodriguez for their support, help and invaluable feedback throughout the development of this work. We also would like to thank our team of expert annotators for their contribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Natural language interfaces to databases -an introduction",
"authors": [
{
"first": "Ion",
"middle": [],
"last": "Androutsopoulos",
"suffix": ""
},
{
"first": "Graeme",
"middle": [
"D"
],
"last": "Ritchie",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Thanisch",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ion Androutsopoulos, Graeme D. Ritchie, and Pe- ter Thanisch. 1995. Natural language interfaces to databases -an introduction.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Semantic parsing of ambiguous input through paraphrasing and verification",
"authors": [
{
"first": "Philip",
"middle": [],
"last": "Arthur",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Sakriani",
"middle": [],
"last": "Sakti",
"suffix": ""
},
{
"first": "Tomoki",
"middle": [],
"last": "Toda",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Nakamura",
"suffix": ""
}
],
"year": 2015,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "3",
"issue": "",
"pages": "571--584",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00159"
]
},
"num": null,
"urls": [],
"raw_text": "Philip Arthur, Graham Neubig, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. 2015. Semantic pars- ing of ambiguous input through paraphrasing and verification. Transactions of the Association for Computational Linguistics, 3:571-584.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Improving text-to-sql evaluation methodology",
"authors": [
{
"first": "Catherine",
"middle": [],
"last": "Finegan-Dollak",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [
"K"
],
"last": "Kummerfeld",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Karthik",
"middle": [],
"last": "Ramanathan",
"suffix": ""
},
{
"first": "Sesh",
"middle": [],
"last": "Sadasivam",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [
"R"
],
"last": "Radev",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018",
"volume": "1",
"issue": "",
"pages": "351--360",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1033"
]
},
"num": null,
"urls": [],
"raw_text": "Catherine Finegan-Dollak, Jonathan K. Kummerfeld, Li Zhang, Karthik Ramanathan, Sesh Sadasivam, Rui Zhang, and Dragomir R. Radev. 2018. Im- proving text-to-sql evaluation methodology. In Pro- ceedings of the 56th Annual Meeting of the Associa- tion for Computational Linguistics, ACL 2018, Mel- bourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 351-360. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Towards complex text-to-sql in cross-domain database with intermediate representation",
"authors": [
{
"first": "Jiaqi",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Zecheng",
"middle": [],
"last": "Zhan",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Jian-Guang",
"middle": [],
"last": "Lou",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Dongmei",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019",
"volume": "1",
"issue": "",
"pages": "4524--4535",
"other_ids": {
"DOI": [
"10.18653/v1/p19-1444"
]
},
"num": null,
"urls": [],
"raw_text": "Jiaqi Guo, Zecheng Zhan, Yan Gao, Yan Xiao, Jian-Guang Lou, Ting Liu, and Dongmei Zhang. 2019. Towards complex text-to-sql in cross-domain database with intermediate representation. In Pro- ceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Pa- pers, pages 4524-4535. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Content enhanced bert-based text-to-sql generation",
"authors": [
{
"first": "Tong",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Huilin",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.07179"
]
},
"num": null,
"urls": [],
"raw_text": "Tong Guo and Huilin Gao. 2019. Content enhanced bert-based text-to-sql generation. arXiv preprint arXiv:1910.07179.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Queryfocused sentence compression in linear time",
"authors": [
{
"first": "Abram",
"middle": [],
"last": "Handler",
"suffix": ""
},
{
"first": "Brendan",
"middle": [],
"last": "O'Connor",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "5969--5975",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1612"
]
},
"num": null,
"urls": [],
"raw_text": "Abram Handler and Brendan O'Connor. 2019. Query- focused sentence compression in linear time. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 5969- 5975, Hong Kong, China. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A comprehensive exploration on wikisql with table-aware word contextualization",
"authors": [
{
"first": "Wonseok",
"middle": [],
"last": "Hwang",
"suffix": ""
},
{
"first": "Jinyeung",
"middle": [],
"last": "Yim",
"suffix": ""
},
{
"first": "Seunghyun",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Minjoon",
"middle": [],
"last": "Seo",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wonseok Hwang, Jinyeung Yim, Seunghyun Park, and Minjoon Seo. 2019. A comprehensive exploration on wikisql with table-aware word contextualization. ArXiv, abs/1902.01069.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and J Adam Ba. 2019. A method for stochastic optimization. arxiv 2014. arXiv preprint arXiv:1412.6980, 434.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Nalir: an interactive natural language interface for querying relational databases",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Hosagrahar",
"middle": [
"V"
],
"last": "Jagadish",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 ACM SIGMOD international conference on Management of data",
"volume": "",
"issue": "",
"pages": "709--712",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fei Li and Hosagrahar V Jagadish. 2014. Nalir: an in- teractive natural language interface for querying re- lational databases. In Proceedings of the 2014 ACM SIGMOD international conference on Management of data, pages 709-712.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Hybrid ranking network for text-to-sql",
"authors": [
{
"first": "Qin",
"middle": [],
"last": "Lyu",
"suffix": ""
},
{
"first": "Kaushik",
"middle": [],
"last": "Chakrabarti",
"suffix": ""
},
{
"first": "Shobhit",
"middle": [],
"last": "Hathi",
"suffix": ""
},
{
"first": "Souvik",
"middle": [],
"last": "Kundu",
"suffix": ""
},
{
"first": "Jianwen",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zheng",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qin Lyu, Kaushik Chakrabarti, Shobhit Hathi, Souvik Kundu, Jianwen Zhang, and Zheng Chen. 2020. Hy- brid ranking network for text-to-sql. Technical Re- port MSR-TR-2020-7, Microsoft Dynamics 365 AI.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Underspecified query refinement via natural language question generation",
"authors": [
{
"first": "Hassan",
"middle": [],
"last": "Sajjad",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Gamon",
"suffix": ""
}
],
"year": 2012,
"venue": "The COLING 2012 Organizing Committee",
"volume": "",
"issue": "",
"pages": "2341--2356",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hassan Sajjad, Patrick Pantel, and Michael Gamon. 2012. Underspecified query refinement via natural language question generation. In Proceedings of COLING 2012, pages 2341-2356, Mumbai, India. The COLING 2012 Organizing Committee.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Eviza: A natural language interface for visual analysis",
"authors": [
{
"first": "Vidya",
"middle": [],
"last": "Setlur",
"suffix": ""
},
{
"first": "Sarah",
"middle": [
"E"
],
"last": "Battersby",
"suffix": ""
},
{
"first": "Melanie",
"middle": [],
"last": "Tory",
"suffix": ""
},
{
"first": "Rich",
"middle": [],
"last": "Gossweiler",
"suffix": ""
},
{
"first": "Angel X",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 29th Annual Symposium on User Interface Software and Technology",
"volume": "",
"issue": "",
"pages": "365--377",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vidya Setlur, Sarah E Battersby, Melanie Tory, Rich Gossweiler, and Angel X Chang. 2016. Eviza: A natural language interface for visual analysis. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology, pages 365-377.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Inferencing underspecified natural language utterances in visual analysis",
"authors": [
{
"first": "Vidya",
"middle": [],
"last": "Setlur",
"suffix": ""
},
{
"first": "Melanie",
"middle": [],
"last": "Tory",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Djalali",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 24th International Conference on Intelligent User Interfaces, IUI '19",
"volume": "",
"issue": "",
"pages": "40--51",
"other_ids": {
"DOI": [
"10.1145/3301275.3302270"
]
},
"num": null,
"urls": [],
"raw_text": "Vidya Setlur, Melanie Tory, and Alex Djalali. 2019. In- ferencing underspecified natural language utterances in visual analysis. In Proceedings of the 24th Inter- national Conference on Intelligent User Interfaces, IUI '19, page 40-51, New York, NY, USA. Associa- tion for Computing Machinery.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Replace or Retrieve Keywords In Documents at Scale. ArXiv e-prints",
"authors": [
{
"first": "V",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V. Singh. 2017. Replace or Retrieve Keywords In Doc- uments at Scale. ArXiv e-prints.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "RAT-SQL: relation-aware schema encoding and linking for textto-sql parsers",
"authors": [
{
"first": "Bailin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Shin",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Oleksandr",
"middle": [],
"last": "Polozov",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Richardson",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020",
"volume": "",
"issue": "",
"pages": "7567--7578",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bailin Wang, Richard Shin, Xiaodong Liu, Oleksandr Polozov, and Matthew Richardson. 2020. RAT-SQL: relation-aware schema encoding and linking for text- to-sql parsers. In Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics, ACL 2020, Online, July 5-10, 2020, pages 7567-7578. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Robust text-to-sql generation with execution-guided decoding",
"authors": [
{
"first": "Chenglong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kedar",
"middle": [],
"last": "Tatwawadi",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Brockschmidt",
"suffix": ""
},
{
"first": "Po-Sen",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Mao",
"suffix": ""
},
{
"first": "Oleksandr",
"middle": [],
"last": "Polozov",
"suffix": ""
},
{
"first": "Rishabh",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1807.03100"
]
},
"num": null,
"urls": [],
"raw_text": "Chenglong Wang, Kedar Tatwawadi, Marc Brockschmidt, Po-Sen Huang, Yi Mao, Olek- sandr Polozov, and Rishabh Singh. 2018. Robust text-to-sql generation with execution-guided decoding. arXiv preprint arXiv:1807.03100.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Sqlnet: Generating structured queries from natural language without reinforcement learning",
"authors": [
{
"first": "Xiaojun",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Chang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Dawn",
"middle": [],
"last": "Song",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaojun Xu, Chang Liu, and Dawn Song. 2018. Sqlnet: Generating structured queries from natural language without reinforcement learning.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Interactive semantic parsing for if-then recipes via hierarchical reinforcement learning",
"authors": [
{
"first": "Ziyu",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Xiujun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Brian",
"middle": [
"M"
],
"last": "Sadler",
"suffix": ""
},
{
"first": "Huan",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2019,
"venue": "The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence",
"volume": "2019",
"issue": "",
"pages": "2547--2554",
"other_ids": {
"DOI": [
"10.1609/aaai.v33i01.33012547"
]
},
"num": null,
"urls": [],
"raw_text": "Ziyu Yao, Xiujun Li, Jianfeng Gao, Brian M. Sadler, and Huan Sun. 2019a. Interactive semantic pars- ing for if-then recipes via hierarchical reinforce- ment learning. In The Thirty-Third AAAI Con- ference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial In- telligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 -February 1, 2019, pages 2547-2554. AAAI Press.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Model-based interactive semantic parsing: A unified framework and A text-to-sql case study",
"authors": [
{
"first": "Ziyu",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Huan",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019",
"volume": "",
"issue": "",
"pages": "5446--5457",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1547"
]
},
"num": null,
"urls": [],
"raw_text": "Ziyu Yao, Yu Su, Huan Sun, and Wen-tau Yih. 2019b. Model-based interactive semantic parsing: A unified framework and A text-to-sql case study. In Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 5446-5457. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "TRANX: A transition-based neural abstract syntax parser for semantic parsing and code generation",
"authors": [
{
"first": "Pengcheng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "2018",
"issue": "",
"pages": "7--12",
"other_ids": {
"DOI": [
"10.18653/v1/d18-2002"
]
},
"num": null,
"urls": [],
"raw_text": "Pengcheng Yin and Graham Neubig. 2018. TRANX: A transition-based neural abstract syntax parser for se- mantic parsing and code generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018: Sys- tem Demonstrations, Brussels, Belgium, October 31 -November 4, 2018, pages 7-12. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Tabert: Pretraining for joint understanding of textual and tabular data",
"authors": [
{
"first": "Pengcheng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Yih",
"middle": [],
"last": "Wen-Tau",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online",
"volume": "",
"issue": "",
"pages": "8413--8426",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pengcheng Yin, Graham Neubig, Wen-tau Yih, and Se- bastian Riedel. 2020. Tabert: Pretraining for joint understanding of textual and tabular data. In Pro- ceedings of the 58th Annual Meeting of the Associ- ation for Computational Linguistics, ACL 2020, On- line, July 5-10, 2020, pages 8413-8426. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Typesql: Knowledgebased type-aware neural text-to-sql generation",
"authors": [
{
"first": "Tao",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Zifan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Zilin",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [],
"last": "Radev",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1804.09769"
]
},
"num": null,
"urls": [],
"raw_text": "Tao Yu, Zifan Li, Zilin Zhang, Rui Zhang, and Dragomir Radev. 2018a. Typesql: Knowledge- based type-aware neural text-to-sql generation. arXiv preprint arXiv:1804.09769.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Spider: A largescale human-labeled dataset for complex and crossdomain semantic parsing and text-to-sql task",
"authors": [
{
"first": "Tao",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Michihiro",
"middle": [],
"last": "Yasunaga",
"suffix": ""
},
{
"first": "Dongxu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zifan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Irene",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Qingning",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Shanelle",
"middle": [],
"last": "Roman",
"suffix": ""
},
{
"first": "Zilin",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [],
"last": "Radev",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang, and Dragomir Radev. 2018b. Spider: A large- scale human-labeled dataset for complex and cross- domain semantic parsing and text-to-sql task. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Photon: A robust cross-domain text-to-SQL system",
"authors": [
{
"first": "Jichuan",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Victoria",
"middle": [],
"last": "Xi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "C",
"middle": [
"H"
],
"last": "Steven",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Hoi",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Irwin",
"middle": [],
"last": "Lyu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "King",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "204--214",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-demos.24"
]
},
"num": null,
"urls": [],
"raw_text": "Jichuan Zeng, Xi Victoria Lin, Steven C.H. Hoi, Richard Socher, Caiming Xiong, Michael Lyu, and Irwin King. 2020. Photon: A robust cross-domain text-to-SQL system. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 204- 214, Online. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Online learning of relaxed CCG grammars for parsing to logical form",
"authors": [
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)",
"volume": "",
"issue": "",
"pages": "678--687",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luke Zettlemoyer and Michael Collins. 2007. Online learning of relaxed CCG grammars for parsing to logical form. In Proceedings of the 2007 Joint Con- ference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 678-687, Prague, Czech Republic. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Seq2sql: Generating structured queries from natural language using reinforcement learning",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Victor Zhong, Caiming Xiong, and Richard Socher. 2017. Seq2sql: Generating structured queries from natural language using reinforcement learning. CoRR, abs/1709.00103.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Examples of search-style user queries.",
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"text": "SQL-Sketch fromXu et al. (2018).",
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"text": "ColloQL uses the same NN architecture as SQLova where six decoding layers (one for each component of the SQL-Sketch) are used over BERT. The SQL query (SELECT Player Name WHERE Jersey = 42) is constructed from outputs of different components. Unlike SQLova, we also contextualize the question with the table samples (underlined in the figure) delimited by special tokens.",
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"num": null,
"content": "<table><tr><td colspan=\"2\">0 (player name)</td><td colspan=\"2\">0 (no aggregation)</td><td/><td>1</td><td colspan=\"2\">[1] (Jersey)</td><td>[=]</td><td colspan=\"3\">[(2,2)] (span indices for \"42\")</td></tr><tr><td colspan=\"2\">Select Column</td><td colspan=\"2\">Aggregation Operator</td><td colspan=\"2\"># Where clauses LSTM</td><td colspan=\"2\">Where Column</td><td colspan=\"2\">Where Operator</td><td colspan=\"2\">Where Value</td></tr><tr><td>Column Attn</td><td/><td>Column Attn</td><td/><td/><td>Self Attn</td><td>Column Attn</td><td/><td>Column Attn</td><td/><td>Column Attn</td><td/></tr><tr><td>LSTM-q</td><td>LSTM-h</td><td>LSTM-q</td><td>LSTM-h</td><td>LSTM-q</td><td>LSTM-h</td><td>LSTM-q</td><td>LSTM-h</td><td>LSTM-q</td><td>LSTM-h</td><td>LSTM-q</td><td>LSTM-h</td></tr></table>",
"text": "",
"html": null,
"type_str": "table"
},
"TABREF2": {
"num": null,
"content": "<table><tr><td>SQL</td><td>SELECT (Grid) FROM 2-14125739-3 WHERE Rider = maria herrera AND Laps &lt;</td></tr><tr><td/><td>200</td></tr><tr><td/><td>fox tv series female</td></tr><tr><td/><td>Animal Name || Jack | SELECT (TV Series) FROM 2-11206371-5 WHERE Species = fox AND Gender =</td></tr><tr><td/><td>female</td></tr><tr><td/><td>Where are Charlie Freedman/Eddie Fletcher from?</td></tr><tr><td/><td>Place || 7 | 9 | 1 [SEP] SELECT (Country) FROM 2-10301911-6 WHERE Rider = charlie freedman/eddie</td></tr><tr><td/><td>fletcher</td></tr></table>",
"text": "Modifying the architecture to operate on one column at a time (HydraNet) would allow us to use more samples.Our model performs significantly grid of bmw rider with > 200 laps Rider || Nicolas Terol | Mike Di Meglio | Stevie Bonsey [SEP] Manufacturer || Derbi | Honda | KTM [SEP] Laps || 1 | 24 | 0 [SEP] Grid || 20 | 29 | 25 . . . SQL SELECT (Grid) FROM 2-14125739-3 WHERE Manufacturer = bmw AND Laps > 200 grid of maria herrera rider with < 200 laps Rider || Nicolas Terol | Mike Di Meglio | Stevie Bonsey [SEP] Manufacturer || Derbi | Honda | KTM [SEP] Laps || 1 | 24 | 0 [SEP] Grid || 20 | 29 | 25 . . . The Big Owl | The Wild Boar [SEP] Species || Fox | Badger | Boar [SEP] Books || No | Yes [SEP] Gender || male | female . . . SQL Rider || Charlie Freedman/Eddie Fletcher | Mick Horsepole/E . . . [SEP] Country || West Germany | Switzerland | United Kingdom [SEP] . . . SQL",
"html": null,
"type_str": "table"
},
"TABREF3": {
"num": null,
"content": "<table><tr><td>Model</td><td/><td/><td colspan=\"2\">LF (dev) EX (dev)</td></tr><tr><td>SQLova BASE</td><td/><td/><td>79.5</td><td>85.3</td></tr><tr><td>SQLova LARGE</td><td/><td/><td>81.6</td><td>87.2</td></tr><tr><td>HydraNet LARGE</td><td colspan=\"2\">*</td><td>83.6</td><td>89.1</td></tr><tr><td colspan=\"2\">COLLOQL rand:1</td><td>\u2020</td><td>82.0</td><td>87.6</td></tr><tr><td colspan=\"2\">COLLOQL rand:3</td><td>\u2020</td><td>83.3</td><td>89.1</td></tr><tr><td colspan=\"2\">COLLOQL rand:5</td><td>\u2020</td><td>83.5</td><td>89.3</td></tr></table>",
"text": "Some qualitative examples from our random (1,2) and relevance (3,4) sampling models. Bold values in headers indicate a match in the question.better than our base SQLova model and performs competitively with other larger models.",
"html": null,
"type_str": "table"
},
"TABREF4": {
"num": null,
"content": "<table><tr><td>: Model performance with different sampling</td></tr><tr><td>settings. Rand:[1,3,5] uses random sampling. \u2020 indi-</td></tr><tr><td>cates that data augmentation is added.</td></tr></table>",
"text": "",
"html": null,
"type_str": "table"
},
"TABREF6": {
"num": null,
"content": "<table><tr><td>: Efficacy of different content incorporation</td></tr><tr><td>strategies. Relevance sampling (with 3 samples) gives</td></tr><tr><td>the best performance. \u2021denotes our implementation of</td></tr><tr><td>Photon.</td></tr></table>",
"text": "",
"html": null,
"type_str": "table"
},
"TABREF8": {
"num": null,
"content": "<table><tr><td>: Benchmarking different content incorporation</td></tr><tr><td>strategies with respect to execution time (CPU), mem-</td></tr><tr><td>ory footprint and setup time (for indexing).</td></tr></table>",
"text": "",
"html": null,
"type_str": "table"
},
"TABREF10": {
"num": null,
"content": "<table/>",
"text": "Performance on the curated test set i.e. 400 simplified queries.",
"html": null,
"type_str": "table"
},
"TABREF12": {
"num": null,
"content": "<table><tr><td colspan=\"4\">: Comparing logical form accuracy of SQLova</td></tr><tr><td colspan=\"4\">with augmentation. LF(short) is the dev accuracy on</td></tr><tr><td colspan=\"4\">the short questions. LF(dev) is the accuracy on the Wik-</td></tr><tr><td>iSQL dev split.</td><td/><td/><td/></tr><tr><td colspan=\"4\">6.6 Performance on WikiSQL test set</td></tr><tr><td colspan=\"4\">Finally, we also show the performance of our model</td></tr><tr><td colspan=\"4\">on the WikiSQL test dataset comparing them to the</td></tr><tr><td colspan=\"4\">top approaches on the WikiSQL leaderboard 5 . As</td></tr><tr><td colspan=\"4\">we can see in Table 8, COLLOQL achieves the high-</td></tr><tr><td colspan=\"4\">est accuracy without execution guided decoding on</td></tr><tr><td colspan=\"2\">the WikiSQL test set.</td><td/><td/></tr><tr><td>Model</td><td/><td colspan=\"2\">LF(test) EX(test)</td></tr><tr><td>HydraNet LARGE</td><td/><td>83.8</td><td>89.2</td></tr><tr><td>NL2SQL BASE</td><td/><td>83.7</td><td>89.2</td></tr><tr><td>COLLOQL rel:3</td><td>\u2020</td><td>84.9</td><td>90.7</td></tr></table>",
"text": "",
"html": null,
"type_str": "table"
},
"TABREF13": {
"num": null,
"content": "<table/>",
"text": "Performance on the WikiSQL test set.",
"html": null,
"type_str": "table"
},
"TABREF14": {
"num": null,
"content": "<table><tr><td/><td colspan=\"2\">SELECT COUNT(Winning driver) from</td><td>WHERE Rnd=5</td></tr><tr><td/><td>SELECT (Title) from</td><td>WHERE US airdate=4 April 2008</td></tr><tr><td/><td>SELECT (Score) from</td><td>WHERE Semi-Finalist 1=Miami</td></tr><tr><td/><td colspan=\"2\">SELECT (School/Club Team/Country) from</td><td>WHERE No.(s)=10 AND</td></tr><tr><td/><td>Position=Forward</td></tr><tr><td>Original</td><td colspan=\"2\">Which visitors have a leading scorer of roy : 25?</td></tr><tr><td>Simple</td><td>visitor of 25-18</td></tr><tr><td/><td># SELECT (Visitor) from</td><td>WHERE Leading scorer=Roy : 25</td></tr><tr><td/><td colspan=\"2\">SELECT COUNT(Year acquired) from</td><td>WHERE Station=CHAN</td></tr><tr><td>Original</td><td colspan=\"2\">What are the names that had a finalist score of 2??</td></tr><tr><td>Simple</td><td colspan=\"2\">names that had finalist score 2?</td></tr><tr><td/><td>SELECT (School) from</td><td>WHERE Finalists=2</td></tr></table>",
"text": "Simple the amount of trees, that require replacement district motovilikhinsky? District || Total amount of trees || Prevailing types, % || Amount of old trees || Amount of trees, that require replacement || ... SQL SELECT (Amount of trees, that require replacement) from WHERE District=Leninsky Original How many winning drivers were the for the rnd equalling 5? Simple how many winning drivers for 5? Rnd || Race Name || Circuit || City/Location || Date || Pole position || Winning driver || ... SQL Original For the episode(s) aired in the U.S. on 4 april 2008, what were the names? Simple for the episode(s) aired in U.S. 4 april 2008, names? No. in season || No. in series || Title || Canadian airdate || US airdate || Production code . . . SQL Original List the scores of all games when Miami were listed as the first Semi finalist? Simple scores with miami listed as first semi finalist? Year || Champion || Score || Runner-Up || Location || Semi-Finalist #1 || Semi-Finalist #2 . . . SQL Original What school did the forward whose number is 10 belong to? Simple what school did forward 10 Player || No.(s) || Height in Ft. || Position || Years for Rockets || School/Club Team/Country . . . SQL || Date || Visitor || Score || Home || Leading scorer || Attendance || Record || Streak . . . SQL Original how any were gained as the chan Simple how many gained chan City || Station || Year acquired || Primary programming source || Other programming sources . . . SQL School || Winners || Finalists || Total Finals || Year of last win SQL",
"html": null,
"type_str": "table"
},
"TABREF15": {
"num": null,
"content": "<table/>",
"text": "Examples in simple questions dev set. We use \" \" as placeholder for table in the SQL queries. Only table headers were shown. The top 4 examples are correct while the bottom 4 have issue in the natural language annotation.",
"html": null,
"type_str": "table"
}
}
}
}