{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:12:30.071614Z"
},
"title": "COLLOQL: Robust Cross-Domain Text-to-SQL Over Search Queries",
"authors": [
{
"first": "Karthik",
"middle": [],
"last": "Radhakrishnan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": ""
},
{
"first": "Arvind",
"middle": [],
"last": "Srikantan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": "asrikantan@salesforce.com"
},
{
"first": "Xi",
"middle": [
"Victoria"
],
"last": "Lin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Translating natural language utterances to executable queries is a helpful technique in making the vast amount of data stored in relational databases accessible to a wider range of nontech-savvy end users. Prior work in this area has largely focused on textual input that is linguistically correct and semantically unambiguous. However, real-world user queries are often succinct, colloquial, and noisy, resembling the input of a search engine. In this work, we introduce data augmentation techniques and a sampling-based content-aware BERT model (COLLOQL) to achieve robust text-to-SQL modeling over natural language search (NLS) questions. Due to the lack of evaluation data, we curate a new dataset of NLS questions and demonstrate the efficacy of our approach. COLLOQL's superior performance extends to well-formed text, achieving 84.9% (logical) and 90.7% (execution) accuracy on the WikiSQL dataset, making it, to the best of our knowledge, the highest performing model that does not use execution guided decoding.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Translating natural language utterances to executable queries is a helpful technique in making the vast amount of data stored in relational databases accessible to a wider range of nontech-savvy end users. Prior work in this area has largely focused on textual input that is linguistically correct and semantically unambiguous. However, real-world user queries are often succinct, colloquial, and noisy, resembling the input of a search engine. In this work, we introduce data augmentation techniques and a sampling-based content-aware BERT model (COLLOQL) to achieve robust text-to-SQL modeling over natural language search (NLS) questions. Due to the lack of evaluation data, we curate a new dataset of NLS questions and demonstrate the efficacy of our approach. COLLOQL's superior performance extends to well-formed text, achieving 84.9% (logical) and 90.7% (execution) accuracy on the WikiSQL dataset, making it, to the best of our knowledge, the highest performing model that does not use execution guided decoding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Relational databases store a vast amount of the world's data and are typically accessed via structured query languages like SQL. A natural language interface to these databases (NLIDB) could significantly improve the accessibility of this data by allowing users to retrieve and utilize the information without any programming expertise. With the release of large-scale datasets (Zhong et al., 2017; Finegan-Dollak et al., 2018; Yu et al., 2018b) , this task has gained a lot of attention and has been widely studied in recent years.",
"cite_spans": [
{
"start": 378,
"end": 398,
"text": "(Zhong et al., 2017;",
"ref_id": "BIBREF27"
},
{
"start": 399,
"end": 427,
"text": "Finegan-Dollak et al., 2018;",
"ref_id": "BIBREF4"
},
{
"start": 428,
"end": 445,
"text": "Yu et al., 2018b)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Prior research has primarily focused on translating grammatical, complete sentences to queries. However, an internal user survey on the search service of a major customer relationship management (CRM) platform 1 revealed that users have a tendency to communicate in a colloquial form which could vary from using only keywords (\"player 42\") to very short phrases (\"show player 42\") to complete sentences (\"Who is the player who wears Jersey 42?\"). Apart from variation in style, users dropping content words from their searches in the interest of brevity also has the potential consequence of making their questions ambiguous. This could render the task unsolvable even to models accustomed to the NLS style of text. For example, in Figure 1 , without the word \"Jersey\", it is impossible to identify which column's value (Id or Jersey) must equal 42.",
"cite_spans": [],
"ref_spans": [
{
"start": 732,
"end": 740,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we show that Text2SQL systems trained on only complete sentences struggle to adapt to the noisy keyword/short phrasal style of questions. To combat this, we introduce different data augmentation strategies inspired from our user search patterns and style. To tackle the induced ambiguity, a potential solution is to utilize the table content by allowing the model to scan the table for different terms present in the question and utilize that information to disambiguate (If the token \"42\" was only found in the Jersey column, then Jersey must be the column equal to 42). Though effective, this approach could become prohibitively expensive (in terms of inference time or memory required) on large tables as the model would have to search over the entire of the table content for every question.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
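To make the table-scan disambiguation above concrete, here is a minimal sketch; the toy table and the helper name are our own illustration, not part of the released code.

```python
def candidate_columns(table: dict[str, list], token: str) -> list[str]:
    """Return the columns whose values contain `token` (a full-table scan).
    This is the expensive step that the sampling strategies of Section 4.1 avoid."""
    return [col for col, values in table.items()
            if any(str(v).lower() == token.lower() for v in values)]

# Toy table mirroring the Figure 1 example.
players = {
    "Id": ["a1", "b7", "c3"],
    "Player": ["John Doe", "Jane Roe", "Max Poe"],
    "Jersey": ["42", "7", "10"],
}
print(candidate_columns(players, "42"))  # ['Jersey'] -> the filter column is unambiguous
```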
{
"text": "We hypothesize that in most cases, the model only needs samples from the table content and not the exact rows that match tokens in the NLS question to disambiguate columns. For example, if the Id column contained alpha-numeric IDs, Player and Nationality contained strings, and Jersey contained two digit numbers, then Jersey must be the column equal to 42. Sampling alleviates the need of a full table scan for every question. The samples for each column could be generated offline and remain unchanged across questions or periodically refreshed (to reflect potential distribution shifts in the table or user queries), allowing for adaptation and personalization without retraining the model. In summary, our contributions are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. We augment the well-formed WikiSQL dataset with synthetic search-style questions to adapt to short, colloquial input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. We propose new models which incorporate table content in a BERT encoder via two sampling strategies to handle ambiguous questions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "3. We perform an in-depth qualitative and quantitative (accuracy, inference time, memory) analysis to show the efficacy of each content sampling strategy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "4. We curate a dataset of 400 questions to benchmark performance of Text-to-SQL models in this setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Apart from adapting to NLS style questions, COLLOQL also achieves state-of-the-art performance on the original WikiSQL (Zhong et al., 2017) dataset, outperforming all baselines that do not use execution guided decoding. We base our work off SQLova (Hwang et al., 2019) but our methods are generalizable to other approaches 2 .",
"cite_spans": [
{
"start": 119,
"end": 139,
"text": "(Zhong et al., 2017)",
"ref_id": "BIBREF27"
},
{
"start": 248,
"end": 268,
"text": "(Hwang et al., 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Text-to-SQL approaches for the WikiSQL benchmark Text-to-SQL falls under a broader class of semantic parsing tasks and has been widely studied in the NLP and database communities. While early works have focused on pattern-matching and rule-based techniques (Androutsopoulos et al., 1995; Li and Jagadish, 2014; Setlur et al., 2016) , with the introduction of large scale datasets such as WikiSQL (Zhong et al., 2017) , recent works have focused on neural methods for generating SQL. They can be broadly categorized into a few themes -sequence to sequence (Seq2Seq), sequence to tree (Seq2Tree), and SQL-Sketch (logical form) methods. Seq2Seq models frame the task as an encoderdecoder problem by trying to generate the SQL query token-by-token from the input question. However, as noted by Xu et al. (2018) these models suffer from the \"order matters\" issue where the model is forced to match the ordering of the where clauses. Zhong et al. (2017) employ reinforcement learning based method to overcome this issue but the gains from this has been limited as noted in Xu et al. (2018) . Seq2Tree models generate the SQL query as an abstract syntax tree (AST) instead of a token sequence Wang et al., 2020) . These approaches define a generation grammar for SQL and learn to output the action sequence for constructing the AST (Yin and Neubig, 2018) . Seq2Tree approaches are widely adopted for benchmarks that contain complex SQL queries (Yu et al., 2018b) as the syntactic constraints they adopt are effective at pruning the output search space and capturing structural dependencies. However, they do not show much advantage on the WikiSQL benchmark where the SQL ASTs are largely flat. SQLNet (Xu et al., 2018) introduces the concept of a SQL-Sketch, where it generates a sketch capturing the salient elements of the query as opposed to directly generating the query itself. SQLNet uses LSTMs to encode the question and headers and employs column attention to predict different components of the SQL-Sketch. As shown in Figure 2, the query is decomposed into different components which are predicted individually. Type-SQL (Yu et al., 2018a) extends upon this approach by augmenting each token in the question with its type (whether it resembles the name of the column, FreeBase entity type, etc). SQLova (Hwang et al., 2019) replaces the LSTMs encoder from SQLNet and uses BERT to encode the question and headers jointly. Unlike SQLNet, SQLova does not share any parameters in the decoders and identifies the where clause values using span detection instead of pointer generators. HydraNet breaks down the problem into column-wise ranking and decoding and assembles the outputs from each column to create the SQL query.",
"cite_spans": [
{
"start": 257,
"end": 287,
"text": "(Androutsopoulos et al., 1995;",
"ref_id": "BIBREF0"
},
{
"start": 288,
"end": 310,
"text": "Li and Jagadish, 2014;",
"ref_id": "BIBREF10"
},
{
"start": 311,
"end": 331,
"text": "Setlur et al., 2016)",
"ref_id": "BIBREF13"
},
{
"start": 396,
"end": 416,
"text": "(Zhong et al., 2017)",
"ref_id": "BIBREF27"
},
{
"start": 790,
"end": 806,
"text": "Xu et al. (2018)",
"ref_id": "BIBREF18"
},
{
"start": 928,
"end": 947,
"text": "Zhong et al. (2017)",
"ref_id": "BIBREF27"
},
{
"start": 1067,
"end": 1083,
"text": "Xu et al. (2018)",
"ref_id": "BIBREF18"
},
{
"start": 1186,
"end": 1204,
"text": "Wang et al., 2020)",
"ref_id": "BIBREF16"
},
{
"start": 1325,
"end": 1347,
"text": "(Yin and Neubig, 2018)",
"ref_id": "BIBREF21"
},
{
"start": 1437,
"end": 1455,
"text": "(Yu et al., 2018b)",
"ref_id": "BIBREF24"
},
{
"start": 1694,
"end": 1711,
"text": "(Xu et al., 2018)",
"ref_id": "BIBREF18"
},
{
"start": 2124,
"end": 2142,
"text": "(Yu et al., 2018a)",
"ref_id": "BIBREF23"
},
{
"start": 2306,
"end": 2326,
"text": "(Hwang et al., 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 2021,
"end": 2027,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Text-to-SQL with table content Recent works like NL2SQL-RULE , RAT-SQL (Wang et al., 2020) and Photon (Zeng et al., 2020) have looked into incorporating table content into the SQL generation. NL2SQL-RULE augments BERT representations with mark vectors for each question and table header token to indicate a match across the two parts. Photon only incorporates the content of a limited set of categorical fields when there is an exact match with a question token. Unlike NL2SQL-RULE, ColloQL includes table content in the BERT encoder allowing it to form content-enhanced question and header representations and unlike Photon, ColloQL incorporates content for all columns and includes samples even when there is not an exact match to disambiguate columns effectively. TaBERT (Yin et al., 2020) lifted the idea further by pre-training joint representation of text and table taking into account row subsampled in a random or relevance-based manner. The pre-trained joint representation has been shown to outperform vanilla language models in several table QA and semantic parsing tasks.",
"cite_spans": [
{
"start": 71,
"end": 90,
"text": "(Wang et al., 2020)",
"ref_id": "BIBREF16"
},
{
"start": 102,
"end": 121,
"text": "(Zeng et al., 2020)",
"ref_id": "BIBREF25"
},
{
"start": 774,
"end": 792,
"text": "(Yin et al., 2020)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Text-to-SQL with execution guided decoding One common theme across all the high performing models on WikiSQL is that they all employ Execution Guided (EG) decoding. First introduced by Wang et al. (2018) , EG is a technique where partial SQL queries are executed and their results are used to guide the decoding process. While EG has been shown to boost accuracy significantly, we do not apply execution guided decoding on our models for two reasons: Firstly, most EG methods modify the predicted query based on whether an empty set is returned. While this works well in the WikiSQL setting, having no results is often not due to an erroneous query. It is not uncommon for users to issue searches like \"my escalated support cases\"(with the expectation of surfacing zero records) or \"John Doe leads\"(to ensure that a record does not already exist before creating one) and we wanted to eliminate the reliance on database outputs to translate a query correctly. Secondly, database tables could have over 1M records and performing multiple database executions for every query could be expensive and is not always feasible whilst keeping up with the latency requirements of clients.",
"cite_spans": [
{
"start": 185,
"end": 203,
"text": "Wang et al. (2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Text-to-SQL with noisy user input While recent text-to-SQL research typically focus on benchmark datasets with complete and grammatical input, noisy user queries are commonly encountered in practical NLIDBs. Previous work have proposed several ways to address this issue. Zettlemoyer and Collins (2007) introduced non-standard combinators to a combinatorial categorical grammar (GGG) based semantic parser to handle flexible word order and telegraphic language. Sajjad et al. (2012) and Yao et al. (2019a,b) developed interactive semantic parsing models that generate clarification questions for user to complete their underspecified queries. Arthur et al. (2015) paraphrases an ambiguous input into a less ambiguous form. Setlur et al. (2019) generates default logical forms for underspecified input. Zeng et al. (2020) synthesized a new dataset and trained question filter to identify noisy user input and prompt user to rephrase. Our work focus on handling short user utterances typically found in the search service of Salesforce CRM, where sampling-based content-aware models are effective at resolving most ambiguities.",
"cite_spans": [
{
"start": 272,
"end": 302,
"text": "Zettlemoyer and Collins (2007)",
"ref_id": "BIBREF26"
},
{
"start": 462,
"end": 482,
"text": "Sajjad et al. (2012)",
"ref_id": "BIBREF12"
},
{
"start": 487,
"end": 507,
"text": "Yao et al. (2019a,b)",
"ref_id": null
},
{
"start": 643,
"end": 663,
"text": "Arthur et al. (2015)",
"ref_id": "BIBREF2"
},
{
"start": 723,
"end": 743,
"text": "Setlur et al. (2019)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The Text2SQL task is to generate a SQL query from a natural language question and the database schema/content. In this work, we use the Wik-iSQL dataset (Zhong et al., 2017) as it most closely matches the queries we expect to serve in a CRM. Our users typically don't issue linguistically complex queries requiring joins or nesting but instead focus on filtering a single table based on certain clauses.",
"cite_spans": [
{
"start": 153,
"end": 173,
"text": "(Zhong et al., 2017)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task and Datasets",
"sec_num": "3"
},
{
"text": "WikiSQL contains over 80K natural language questions distributed across 24K tables and their gold SQL queries. The performance is typically evaluated on two different types of accuracies -Logical Form (LF) and Execution (EX). LF measures if the generated query exactly matches the gold query while EX executes the predicted and gold queries on the database and verifies if the answers returned by both are equal. Note that LF is a stricter metric as many different SQL queries could produce the same output. which deals have an expected revenue of over 10 number of deals closed in 2019 how many deals have closing year as 2019 Table 1 : WikiSQL questions and their NLS-style counterparts.",
"cite_spans": [],
"ref_spans": [
{
"start": 628,
"end": 635,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Task and Datasets",
"sec_num": "3"
},
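To make the two metrics concrete, here is a minimal sketch of both checks; this is our own illustration, not the official WikiSQL evaluator (which canonicalizes the query sketch rather than comparing raw strings).

```python
import sqlite3

def lf_match(pred_sql: str, gold_sql: str) -> bool:
    # Logical Form: the generated query must match the gold query exactly.
    # (String equality is a simplification of sketch-level comparison.)
    return pred_sql.strip().lower() == gold_sql.strip().lower()

def ex_match(pred_sql: str, gold_sql: str, conn: sqlite3.Connection) -> bool:
    # Execution: both queries must return the same answer from the database,
    # even if the SQL strings differ.
    try:
        pred = conn.execute(pred_sql).fetchall()
    except sqlite3.Error:
        return False
    gold = conn.execute(gold_sql).fetchall()
    return sorted(pred) == sorted(gold)
```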
{
"text": "The WikiSQL dataset mostly comprises of verbose questions which differ in style as compared to the NLS questions issued by our users. Table 1 shows NLS questions and their WikiSQL-style equivalents. To account for the differences in style, we augment the WikiSQL dataset with our synthetic data to simulate real-user NLS questions which is generated as follows.",
"cite_spans": [],
"ref_spans": [
{
"start": 134,
"end": 142,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Task and Datasets",
"sec_num": "3"
},
{
"text": "Synthesizing user utterances from gold SQL labels Since WikiSQL contains the gold labels for the SQL sketch, we can use this data to generate NLS-style questions. By analyzing our user search queries (which resemble those shown in Table 1 ) we built question templates which we fill based on the gold SQL-Sketch. Some examples include shuffling the ordering of where conditions (users apply filters in different order), interchange ordering of column names and values (some users type \"US region cases\" while others type \"region US cases\"), and insert the select column name in the beginning or the end of a question (\"John Doe accounts\" vs \"accounts John Doe\"). The synthetic data is used in conjunction with clean well-formed queries from the original dataset, allowing the model to generalize to other queries not present in the templates. An example of synthetic utterances generated this way is shown below.",
"cite_spans": [],
"ref_spans": [
{
"start": 231,
"end": 238,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Task and Datasets",
"sec_num": "3"
},
{
"text": "Original Query -Who is the player of Australian nationality that wears jersey number 42? Generated Queriesplayer jersey 42 australian nationality; 42 jersey australian nationality player; australian nationality jersey 42 player; . . .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task and Datasets",
"sec_num": "3"
},
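A minimal sketch of this template-based generation, assuming a simplified sketch representation; the function, field names, and templates here are our own illustration, not the exact internal templates.

```python
import itertools
import random

def nls_variants(select_col: str, conditions: list[tuple[str, str]], n: int = 3) -> list[str]:
    """Generate short NLS-style utterances from a gold SQL sketch by shuffling
    where-conditions, swapping column/value order, and moving the select
    column to the front or the back of the question."""
    variants = set()
    for conds in itertools.permutations(conditions):
        parts = []
        for col, val in conds:
            # Some users type "US region cases", others "region US cases".
            parts.append(f"{val} {col}" if random.random() < 0.5 else f"{col} {val}")
        body = " ".join(parts)
        variants.add(f"{select_col} {body}")   # "player jersey 42 ..."
        variants.add(f"{body} {select_col}")   # "... jersey 42 player"
    return random.sample(sorted(variants), min(n, len(variants)))

print(nls_variants("player", [("jersey", "42"), ("nationality", "australian")]))
```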
{
"text": "We identify popular query ngrams when the conditional operator in the SQL-Sketch corresponds to either \">\" or \"<\" and randomly replace these ngrams (\"bigger than\", \"larger than\", etc) with the operator symbols, allowing our model to properly interpret them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supporting relational symbols in user utterance",
"sec_num": null
},
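A sketch of this replacement step; the phrase lists below are illustrative stand-ins for the n-grams mined from user queries, and the sketch rewrites every occurrence rather than replacing at random as the augmentation above does.

```python
import re

# Illustrative comparative phrases; the actual list was mined from query n-grams.
GT_PHRASES = ["bigger than", "larger than", "greater than", "more than"]
LT_PHRASES = ["smaller than", "less than", "fewer than"]

def inject_operators(question: str) -> str:
    """Map comparative n-grams onto the operator symbols > and <."""
    for phrase in GT_PHRASES:
        question = re.sub(rf"\b{phrase}\b", ">", question, flags=re.IGNORECASE)
    for phrase in LT_PHRASES:
        question = re.sub(rf"\b{phrase}\b", "<", question, flags=re.IGNORECASE)
    return question

print(inject_operators("deals with expected revenue bigger than 10"))
# -> "deals with expected revenue > 10"
```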
{
"text": "Controlled question simplification Since Wik-iSQL contains no keyword-based questions and only a small portion of questions that are succinct enough to require reasoning over the table content, we employ a sentence simplification model followed by manual verification to create a test dataset to evaluate performance on NLS questions. A common user behavior is to drop unnecessary words from complete sentences to create shorter questions. We simulate this behavior by simplifying/compressing sentences to reduce verbosity. Note that keyword queries can be viewed as an extreme case of sentence simplification where only the required keywords are retained.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supporting relational symbols in user utterance",
"sec_num": null
},
{
"text": "We make use of the controllable sentence simplifier by Handler and O'Connor (2019) to compress sentences to a desired length whilst retaining a specified set of keywords. We specify the list of keywords to be the header name of the select column, the values in the where columns (we ignore the header names for the where columns as users tend to omit them from their queries).",
"cite_spans": [
{
"start": 55,
"end": 82,
"text": "Handler and O'Connor (2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Supporting relational symbols in user utterance",
"sec_num": null
},
{
"text": "In total, we create two datasets: short questions with gold SQL labels and replacement of relation symbols, and simple questions with controlled sentence simplification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supporting relational symbols in user utterance",
"sec_num": null
},
{
"text": "Manually verified test set We create a highquality test set by manually verifying a subset of simple questions 3 . A potential problem with sentence simplification models is ensuring that the shortened version still has enough information to execute the query correctly. This could vary based on the table content and is difficult to identify if the query is impossible to be executed correctly. We had a team of data scientists and engineers proficient in SQL to verify/correct outputs produced by the sentence simplification model and generated 400 queries for testing. We show examples in this dataset and report our manual quality evaluation in \u00a7 A.1. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supporting relational symbols in user utterance",
"sec_num": null
},
{
"text": "Following Xu et al. (2018) and Hwang et al. 2019, we decompose the SQL generation task into 6 different subtasks -one for each component of the SQL-Sketch. These subtasks all share a common encoder but use different decoder layers. The encoder is a BERT model (Devlin et al., 2018) which produces contextualized representations of the question, headers and the decoders largely use a task-specific LSTM with column-attention. Column-attention (Xu et al., 2018 ) is a mechanism where each header attends over all query tokens to produce a single representation over which a dense layer is used to predict probabilities. The select, aggregation, where-num, and whereoperator branches use LSTMs + Column-attention followed by a softmax layer to output probabilities. The where-column branch is similar but uses a sigmoid instead as multiple columns could appear in the where clause and the where-value outputs start-end spans for the values from the question. Figure 3 highlights the architecture of our model. We retain the same encoder-decoder architecture as SQLova as our main contribution lies in the data augmentation and content sampling techniques to handle NLS questions.",
"cite_spans": [
{
"start": 10,
"end": 26,
"text": "Xu et al. (2018)",
"ref_id": "BIBREF18"
},
{
"start": 260,
"end": 281,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF3"
},
{
"start": 443,
"end": 459,
"text": "(Xu et al., 2018",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 957,
"end": 965,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "4"
},
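A minimal PyTorch sketch of column attention as described above; this is our own rendering of the mechanism from Xu et al. (2018), not the SQLova implementation.

```python
import torch
import torch.nn as nn

class ColumnAttention(nn.Module):
    """Each header attends over all question tokens and is reduced to a
    single column-conditioned summary vector."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.W = nn.Linear(hidden_size, hidden_size, bias=False)

    def forward(self, question: torch.Tensor, headers: torch.Tensor) -> torch.Tensor:
        # question: (batch, q_len, hidden); headers: (batch, n_cols, hidden)
        scores = headers @ self.W(question).transpose(1, 2)  # (batch, n_cols, q_len)
        attn = torch.softmax(scores, dim=-1)
        # Weighted sum of question tokens: one summary per column, over which
        # a dense layer can predict per-column probabilities.
        return attn @ question                               # (batch, n_cols, hidden)
```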
{
"text": "As highlighted previously, table content could be a useful feature in helping the model disambiguate between different columns. Consider a table of tennis players as shown below. Now, consider a question \"courts with Rafael Nadal as winner\". A model which isn't informed about the content of the table cannot easily understand that Rafael Nadal needs to be the where clause value for Player and winner for the Result column. Allowing the model to scan the table for entities like \"Rafael Nadal\" or \"winner\" could help the model incorporate table content effectively. Consider another question \"courts with Roger Federer as winner\". It is intuitive that this query follows the same structure as the previous, except that the required value is now \"Roger Federer\". However, \"Roger Federer\" is not present in the table. We hypothesize that while table content is useful to the model, it does not need to be relevant to the query. The model, when given random samples of values for each column can infer the role of a particular column and generalize to unseen values which are similar to the column samples. In this work, we experiment with two sampling techniques -random and relevance sampling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Content Incorporation",
"sec_num": "4.1"
},
{
"text": "Random sampling uses a fixed set of question agnostic column values sampled randomly (without replacement) and does not require access to the table once the samples are created. Since the sampling process can be done entirely offline, it adds negligible memory and time to the query execution. Additionally, the model can now be used in privacy sensitive scenarios as it does not access the table content and the samples could be manually configured. The model, now being content informed, performs better than its non-content counterparts whilst being more efficient than its full table con-tent counterparts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Random Sampling",
"sec_num": "4.1.1"
},
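A sketch of the offline sampling step, under the assumption that a table is available as an in-memory mapping from column names to value lists:

```python
import random

def column_samples(table: dict[str, list], k: int = 3, seed: int = 0) -> dict[str, list]:
    """Question-agnostic sampling: draw up to k values per column without
    replacement. Run once offline (or periodically) and cache the result;
    no table access is needed at query time."""
    rng = random.Random(seed)
    return {col: rng.sample(values, min(k, len(values)))
            for col, values in table.items()}
```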
{
"text": "Relevance sampling is used in cases where access to table is permitted and it includes a combination of samples relevant to question tokens and random samples. We index all cells of a table and perform a keyword search in the question to identify most relevant cells using FlashText (Singh, 2017) and include them as samples. In situations where the number of keyword matches are fewer than intended for a column or there are no matches, we fallback on random sampling to select the remaining samples.",
"cite_spans": [
{
"start": 283,
"end": 296,
"text": "(Singh, 2017)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relevance Sampling",
"sec_num": "4.1.2"
},
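A sketch of relevance sampling with the flashtext package; the per-column indexing and fallback policy below are our reconstruction of the description above, not the released code.

```python
import random
from flashtext import KeywordProcessor

def relevance_samples(table: dict[str, list], question: str, k: int = 3) -> dict[str, list]:
    """Per column: keep cells whose values occur verbatim in the question,
    then fall back on random samples until k values are collected."""
    rng = random.Random(0)
    samples = {}
    for col, values in table.items():
        kp = KeywordProcessor(case_sensitive=False)
        for v in values:
            kp.add_keyword(str(v))
        # Deduplicated keyword matches, capped at k.
        matched = list(dict.fromkeys(kp.extract_keywords(question)))[:k]
        pool = [v for v in values if str(v) not in matched]
        samples[col] = matched + rng.sample(pool, min(k - len(matched), len(pool)))
    return samples
```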
{
"text": "To illustrate the importance of including random samples in the relevance sampling strategy, consider the following example:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relevance Sampling",
"sec_num": "4.1.2"
},
{
"text": "Question -Which countries hosted the MHL league?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relevance Sampling",
"sec_num": "4.1.2"
},
{
"text": "League values -NHL, MLB, NBA",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relevance Sampling",
"sec_num": "4.1.2"
},
{
"text": "Photon (Zeng et al., 2020) , a model which only includes up to a single matched value, interprets this query incorrectly (Select country where league = MHL league). Its value matching approach retrieves an empty set to augment the table. 4 Our model with relevance sampling tackles cases like this successfully (Select country where league = MHL) as NHL, MLB, and NBA were included as samples because of the fallback on random sampling. Including random samples improves the model's ability to interpret questions that have values not directly found in the table.",
"cite_spans": [
{
"start": 7,
"end": 26,
"text": "(Zeng et al., 2020)",
"ref_id": "BIBREF25"
},
{
"start": 238,
"end": 239,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relevance Sampling",
"sec_num": "4.1.2"
},
{
"text": "The addition of random samples also allows the model to discriminate between columns effectively. Consider Question 4 from Table 2 , the question is ambiguous without table content because it is unclear if the column to be selected is Place or Country. The pattern \"where are. . . from?\" indicates that the user's intent is to find a location and both column names seem like a reasonable choice (Place is a synonym for location and Country is a location). However, when augmented with random column samples, we see that the Place column only contains numeric values and is used as the synonym of \"rank\" in this table. Figure 3 shows our input representation to the BERT model. Our representation bears similarity to Photon where the content values are concatenated along with the headers and the question separated by special tokens. However, Photon only tackles columns with picklists (categorical columns storing small fixed set of values) while we support numeric and free-form text columns as well. Additionally, as mentioned above, since Photon only incorporates a single matched value, it doesn't gracefully interpret all questions.",
"cite_spans": [],
"ref_spans": [
{
"start": 123,
"end": 130,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 618,
"end": 626,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Relevance Sampling",
"sec_num": "4.1.2"
},
{
"text": "We concatenate the column samples to the headers with special delimiters and experiment with 1,3,5 samples for each column. The number of samples is currently limited by the maximum sequence length supported by BERT models and in the future we hope to experiment with operating on each column individually and diversity based sampling to extract the most distinctive samples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relevance Sampling",
"sec_num": "4.1.2"
},
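A sketch of how such an input sequence might be assembled; the exact delimiter tokens are an assumption on our part (Figure 3 shows the actual layout).

```python
def build_bert_input(question: str, headers: list[str], samples: dict[str, list]) -> str:
    """Concatenate the question, each header, and that header's sampled
    values into one sequence. '[SEP]' separates segments and '|' separates
    a header from its values (assumed delimiters)."""
    columns = ["{} | {}".format(h, " | ".join(map(str, samples.get(h, []))))
               for h in headers]
    return "[CLS] " + question + " [SEP] " + " [SEP] ".join(columns) + " [SEP]"

print(build_bert_input(
    "courts with Rafael Nadal as winner",
    ["Player", "Result"],
    {"Player": ["Novak Djokovic", "Andy Murray"], "Result": ["winner", "runner-up"]},
))
```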
{
"text": "We use the base version of BERT in all our experiments and made necessary changes for sampling on the original SQLova codebase. We use Adam (Kingma and Ba, 2019) optimizer with a learning rate of 1e-3 for the decoder layers and 1e-5 for the BERT model. Table 2 shows some qualitative examples from our model when augmented with 3 values included for each column. The first two examples are based on random sampling and the latter two are based on relevance sampling. Our model is able to correctly resolve phrases such as \"Maria Herrera\" and \"BMW\" to the right columns when the corresponding values were not seen during training or inference. Consider the first two examples with different modifiers of \"rider\", leveraging the sampled values, our model correctly matches \"BMW\" to Manufacturer (column storing brand name like values) and \"Maria Herrera\" to Rider (column storing human name like values).",
"cite_spans": [],
"ref_spans": [
{
"start": 253,
"end": 260,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experiment Setup",
"sec_num": "5"
},
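A sketch of the two-learning-rate setup described above; the module names are hypothetical placeholders for the actual encoder and decoders.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the BERT encoder and the six sketch decoders.
bert_encoder = nn.Linear(768, 768)
decoders = nn.Linear(768, 6)

# Adam with per-module learning rates: 1e-5 for BERT, 1e-3 for the decoders.
optimizer = torch.optim.Adam([
    {"params": bert_encoder.parameters(), "lr": 1e-5},
    {"params": decoders.parameters(), "lr": 1e-3},
])
```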
{
"text": "We show performance of our model evaluated on the original WikiSQL dev dataset under different sampling settings. Owing to the 512 token limit, we only sample upto 5 values per column in ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of Random Sampling",
"sec_num": "6.2"
},
{
"text": "In addition to random sampling, we also provide results on a model that finds the most relevant samples to the question. In Table 4 , we compare our results with NL2SQL-RULE (Guo and Gao, 2019) (uses entire table content) and EM:1 (including a * Due to unavailability of code, HydraNet numbers are only reported on datasets used in their paper single exactly matched value), the content incorporation strategy adopted by Photon (Zeng et al., 2020) . Since WikiSQL does not distinguish categorical columns, we applied the exact match to all columns. Our model achieves 85.2% logical form and 90.65% execution accuracy on the original WikiSQL dataset outperforming all models without EG. We also studied the memory and time footprint for indexing cells with increasing table sizes by benchmarking the performance of random and relevance sampling on very large tables. To simulate real-world data, we used IMDB movie database -a large-scale database with tables spanning over 7M rows containing movie metadata.",
"cite_spans": [
{
"start": 428,
"end": 447,
"text": "(Zeng et al., 2020)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 124,
"end": 131,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Effect of Relevance Sampling",
"sec_num": "6.3"
},
{
"text": "The random sampling method is agnostic to table size as samples are generated just once while the relevance sampling method scans the table to pick the best samples for each query. The results are shown in Table 5 .",
"cite_spans": [],
"ref_spans": [
{
"start": 206,
"end": 213,
"text": "Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Effect of Relevance Sampling",
"sec_num": "6.3"
},
{
"text": "To measure the efficacy of content augmentation, we compared COLLOQL with other works on our dataset of 400 simplified queries which was generated by the sentence simplification model and verified/corrected by a team of data scientists and engineers. This dataset largely contains queries in which the where columns are not explicitly mentioned in the query and requires the model to infer them. We can see from Table 6 that a model uninformed of the content drops in accuracy (especially in the where column prediction) while COL-LOQL retains its performance. ",
"cite_spans": [],
"ref_spans": [
{
"start": 412,
"end": 419,
"text": "Table 6",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Performance on Simple Questions",
"sec_num": "6.4"
},
{
"text": "Since SQLova was originally trained with complete sentences, it does not adapt well to short questions. Retraining the same model with augmented data from our templates recovers the performance (tested using short questions). Additionally, the augmentation also results in improved generalization resulting in a minor LF accuracy improvement on the original dev data as shown in Table 7 . ",
"cite_spans": [],
"ref_spans": [
{
"start": 379,
"end": 386,
"text": "Table 7",
"ref_id": "TABREF12"
}
],
"eq_spans": [],
"section": "Effect of Augmentation",
"sec_num": "6.5"
},
{
"text": "We classified the errors made by our model on the ColloQL curated dataset into two major categories: Aggregation -Given that WikiSQL contains noisy labels for aggregation component (Hwang et al., 2019) and the model was optimized for accuracy on WikiSQL, there are some errors in predicting this slot.",
"cite_spans": [
{
"start": 181,
"end": 201,
"text": "(Hwang et al., 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "7"
},
{
"text": "Select Columns -The simplified questions are often more ambiguous when predicting whether a column is a target to be selected or is used in a filtering condition (e.g. for the question \"smallest tiesplayed 6 years\", the model interprets it as SELECT MIN(years) WHERE tiesplayed = 6 while the correct query is SELECT MIN(tiesplayed) WHERE years = 6). Additionally, we noticed that our annotators simplified column headers like \"shortstop\" and \"rightfielder\" to \"SS\" and \"RF\", making the question very difficult to solve.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "7"
},
{
"text": "In this work we tackled the task of converting noisy (short, potentially ambiguous) search-like (NLS) questions to SQL queries. We introduced data augmentation strategies to adapt to the NLS style of text and a novel content enhancement to BERT via two sampling strategies -random and relevance sampling. Random sampling overcomes some of the performance / privacy challenges of incorporating table content and relevance sampling achieves state-of-the-art performance when access to table content is permitted. Finally, we also curated a new held-out dataset to evaluate performance against NLS questions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "8"
},
{
"text": "In the future, we hope to explore different sampling techniques (based on user history, sampling to maximize discernment between columns) to enhance performance. Besides, our approach and dataset mainly target telegraphic queries that can be effectively disambiguated with table contents, which frequency occur in our search service. We plan to extend our work to handle other types of input ambiguities and other application domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "8"
},
{
"text": "One of the authors who did not participate in the dataset annotation randomly sampled 16/400 examples and manually checked the quality. 4/16 annotations were found to have issues in the natural language annotation. Table 9 shows examples from the simple question dataset. The first 4 examples are correct, highquality annotations while the bottom 4 are those with issues found during manual check. The highquality simple question annotations are readable and on average have a smaller compression ratio compared to the noisy annotations.",
"cite_spans": [],
"ref_spans": [
{
"start": 215,
"end": 222,
"text": "Table 9",
"ref_id": "TABREF15"
}
],
"eq_spans": [],
"section": "A.1 Test Set Quality",
"sec_num": null
},
{
"text": "We noticed that some errors in the WikiSQL annotation (Hwang et al., 2019) were corrected when the simplified questions were produced, but some perpetuated through. In the second example, the annotator corrected spelling errors in the original WikiSQL annotation. However, in the 7th example, the original question misinterpreted Year acquired as a quantity and our simplified question inherited that error. Similarly, in the 8th example, the original question misinterpreted the field Finalists as \"score\" (it should represent \"number of finalists\") and our simplified question inherited it.",
"cite_spans": [
{
"start": 54,
"end": 74,
"text": "(Hwang et al., 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A.1 Test Set Quality",
"sec_num": null
},
{
"text": "The 5th and 6th examples have unreadable questions as a result of sentence simplification (but our annotators still labeled them as correct). This is an artifact of the dataset as such unreadable, keywordstyle queries may favor models that leverage table content to identify the columns. On the other hand, such queries could be useful as being able to interpret them may give users more flexibility when searching the content of a database.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.1 Test Set Quality",
"sec_num": null
},
{
"text": "https://www.salesforce.com/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Our code and annotated data can be found at https://github.com/karthikradhakrishnan96/ ColloQL.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Sentence simplification creates a diverse set of examples which contains some of those generated by gold SQL label.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We ran the evaluation on Photon's demo page.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/salesforce/WikiSQL",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Christian Posse and Mario Rodriguez for their support, help and invaluable feedback throughout the development of this work. We also would like to thank our team of expert annotators for their contribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Natural language interfaces to databases -an introduction",
"authors": [
{
"first": "Ion",
"middle": [],
"last": "Androutsopoulos",
"suffix": ""
},
{
"first": "Graeme",
"middle": [
"D"
],
"last": "Ritchie",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Thanisch",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ion Androutsopoulos, Graeme D. Ritchie, and Pe- ter Thanisch. 1995. Natural language interfaces to databases -an introduction.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Semantic parsing of ambiguous input through paraphrasing and verification",
"authors": [
{
"first": "Philip",
"middle": [],
"last": "Arthur",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Sakriani",
"middle": [],
"last": "Sakti",
"suffix": ""
},
{
"first": "Tomoki",
"middle": [],
"last": "Toda",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Nakamura",
"suffix": ""
}
],
"year": 2015,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "3",
"issue": "",
"pages": "571--584",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00159"
]
},
"num": null,
"urls": [],
"raw_text": "Philip Arthur, Graham Neubig, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. 2015. Semantic pars- ing of ambiguous input through paraphrasing and verification. Transactions of the Association for Computational Linguistics, 3:571-584.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Improving text-to-sql evaluation methodology",
"authors": [
{
"first": "Catherine",
"middle": [],
"last": "Finegan-Dollak",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [
"K"
],
"last": "Kummerfeld",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Karthik",
"middle": [],
"last": "Ramanathan",
"suffix": ""
},
{
"first": "Sesh",
"middle": [],
"last": "Sadasivam",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [
"R"
],
"last": "Radev",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018",
"volume": "1",
"issue": "",
"pages": "351--360",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1033"
]
},
"num": null,
"urls": [],
"raw_text": "Catherine Finegan-Dollak, Jonathan K. Kummerfeld, Li Zhang, Karthik Ramanathan, Sesh Sadasivam, Rui Zhang, and Dragomir R. Radev. 2018. Im- proving text-to-sql evaluation methodology. In Pro- ceedings of the 56th Annual Meeting of the Associa- tion for Computational Linguistics, ACL 2018, Mel- bourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 351-360. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Towards complex text-to-sql in cross-domain database with intermediate representation",
"authors": [
{
"first": "Jiaqi",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Zecheng",
"middle": [],
"last": "Zhan",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Jian-Guang",
"middle": [],
"last": "Lou",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Dongmei",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019",
"volume": "1",
"issue": "",
"pages": "4524--4535",
"other_ids": {
"DOI": [
"10.18653/v1/p19-1444"
]
},
"num": null,
"urls": [],
"raw_text": "Jiaqi Guo, Zecheng Zhan, Yan Gao, Yan Xiao, Jian-Guang Lou, Ting Liu, and Dongmei Zhang. 2019. Towards complex text-to-sql in cross-domain database with intermediate representation. In Pro- ceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Pa- pers, pages 4524-4535. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Content enhanced bert-based text-to-sql generation",
"authors": [
{
"first": "Tong",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Huilin",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.07179"
]
},
"num": null,
"urls": [],
"raw_text": "Tong Guo and Huilin Gao. 2019. Content enhanced bert-based text-to-sql generation. arXiv preprint arXiv:1910.07179.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Queryfocused sentence compression in linear time",
"authors": [
{
"first": "Abram",
"middle": [],
"last": "Handler",
"suffix": ""
},
{
"first": "Brendan",
"middle": [],
"last": "O'Connor",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "5969--5975",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1612"
]
},
"num": null,
"urls": [],
"raw_text": "Abram Handler and Brendan O'Connor. 2019. Query- focused sentence compression in linear time. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 5969- 5975, Hong Kong, China. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A comprehensive exploration on wikisql with table-aware word contextualization",
"authors": [
{
"first": "Wonseok",
"middle": [],
"last": "Hwang",
"suffix": ""
},
{
"first": "Jinyeung",
"middle": [],
"last": "Yim",
"suffix": ""
},
{
"first": "Seunghyun",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Minjoon",
"middle": [],
"last": "Seo",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wonseok Hwang, Jinyeung Yim, Seunghyun Park, and Minjoon Seo. 2019. A comprehensive exploration on wikisql with table-aware word contextualization. ArXiv, abs/1902.01069.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and J Adam Ba. 2019. A method for stochastic optimization. arxiv 2014. arXiv preprint arXiv:1412.6980, 434.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Nalir: an interactive natural language interface for querying relational databases",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Hosagrahar",
"middle": [
"V"
],
"last": "Jagadish",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 ACM SIGMOD international conference on Management of data",
"volume": "",
"issue": "",
"pages": "709--712",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fei Li and Hosagrahar V Jagadish. 2014. Nalir: an in- teractive natural language interface for querying re- lational databases. In Proceedings of the 2014 ACM SIGMOD international conference on Management of data, pages 709-712.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Hybrid ranking network for text-to-sql",
"authors": [
{
"first": "Qin",
"middle": [],
"last": "Lyu",
"suffix": ""
},
{
"first": "Kaushik",
"middle": [],
"last": "Chakrabarti",
"suffix": ""
},
{
"first": "Shobhit",
"middle": [],
"last": "Hathi",
"suffix": ""
},
{
"first": "Souvik",
"middle": [],
"last": "Kundu",
"suffix": ""
},
{
"first": "Jianwen",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zheng",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qin Lyu, Kaushik Chakrabarti, Shobhit Hathi, Souvik Kundu, Jianwen Zhang, and Zheng Chen. 2020. Hy- brid ranking network for text-to-sql. Technical Re- port MSR-TR-2020-7, Microsoft Dynamics 365 AI.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Underspecified query refinement via natural language question generation",
"authors": [
{
"first": "Hassan",
"middle": [],
"last": "Sajjad",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Gamon",
"suffix": ""
}
],
"year": 2012,
"venue": "The COLING 2012 Organizing Committee",
"volume": "",
"issue": "",
"pages": "2341--2356",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hassan Sajjad, Patrick Pantel, and Michael Gamon. 2012. Underspecified query refinement via natural language question generation. In Proceedings of COLING 2012, pages 2341-2356, Mumbai, India. The COLING 2012 Organizing Committee.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Eviza: A natural language interface for visual analysis",
"authors": [
{
"first": "Vidya",
"middle": [],
"last": "Setlur",
"suffix": ""
},
{
"first": "Sarah",
"middle": [
"E"
],
"last": "Battersby",
"suffix": ""
},
{
"first": "Melanie",
"middle": [],
"last": "Tory",
"suffix": ""
},
{
"first": "Rich",
"middle": [],
"last": "Gossweiler",
"suffix": ""
},
{
"first": "Angel X",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 29th Annual Symposium on User Interface Software and Technology",
"volume": "",
"issue": "",
"pages": "365--377",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vidya Setlur, Sarah E Battersby, Melanie Tory, Rich Gossweiler, and Angel X Chang. 2016. Eviza: A natural language interface for visual analysis. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology, pages 365-377.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Inferencing underspecified natural language utterances in visual analysis",
"authors": [
{
"first": "Vidya",
"middle": [],
"last": "Setlur",
"suffix": ""
},
{
"first": "Melanie",
"middle": [],
"last": "Tory",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Djalali",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 24th International Conference on Intelligent User Interfaces, IUI '19",
"volume": "",
"issue": "",
"pages": "40--51",
"other_ids": {
"DOI": [
"10.1145/3301275.3302270"
]
},
"num": null,
"urls": [],
"raw_text": "Vidya Setlur, Melanie Tory, and Alex Djalali. 2019. In- ferencing underspecified natural language utterances in visual analysis. In Proceedings of the 24th Inter- national Conference on Intelligent User Interfaces, IUI '19, page 40-51, New York, NY, USA. Associa- tion for Computing Machinery.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Replace or Retrieve Keywords In Documents at Scale. ArXiv e-prints",
"authors": [
{
"first": "V",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V. Singh. 2017. Replace or Retrieve Keywords In Doc- uments at Scale. ArXiv e-prints.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "RAT-SQL: relation-aware schema encoding and linking for textto-sql parsers",
"authors": [
{
"first": "Bailin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Shin",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Oleksandr",
"middle": [],
"last": "Polozov",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Richardson",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020",
"volume": "",
"issue": "",
"pages": "7567--7578",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bailin Wang, Richard Shin, Xiaodong Liu, Oleksandr Polozov, and Matthew Richardson. 2020. RAT-SQL: relation-aware schema encoding and linking for text- to-sql parsers. In Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics, ACL 2020, Online, July 5-10, 2020, pages 7567-7578. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Robust text-to-sql generation with execution-guided decoding",
"authors": [
{
"first": "Chenglong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kedar",
"middle": [],
"last": "Tatwawadi",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Brockschmidt",
"suffix": ""
},
{
"first": "Po-Sen",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Mao",
"suffix": ""
},
{
"first": "Oleksandr",
"middle": [],
"last": "Polozov",
"suffix": ""
},
{
"first": "Rishabh",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1807.03100"
]
},
"num": null,
"urls": [],
"raw_text": "Chenglong Wang, Kedar Tatwawadi, Marc Brockschmidt, Po-Sen Huang, Yi Mao, Olek- sandr Polozov, and Rishabh Singh. 2018. Robust text-to-sql generation with execution-guided decoding. arXiv preprint arXiv:1807.03100.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Sqlnet: Generating structured queries from natural language without reinforcement learning",
"authors": [
{
"first": "Xiaojun",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Chang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Dawn",
"middle": [],
"last": "Song",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaojun Xu, Chang Liu, and Dawn Song. 2018. Sqlnet: Generating structured queries from natural language without reinforcement learning.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Interactive semantic parsing for if-then recipes via hierarchical reinforcement learning",
"authors": [
{
"first": "Ziyu",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Xiujun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Brian",
"middle": [
"M"
],
"last": "Sadler",
"suffix": ""
},
{
"first": "Huan",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2019,
"venue": "The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence",
"volume": "2019",
"issue": "",
"pages": "2547--2554",
"other_ids": {
"DOI": [
"10.1609/aaai.v33i01.33012547"
]
},
"num": null,
"urls": [],
"raw_text": "Ziyu Yao, Xiujun Li, Jianfeng Gao, Brian M. Sadler, and Huan Sun. 2019a. Interactive semantic pars- ing for if-then recipes via hierarchical reinforce- ment learning. In The Thirty-Third AAAI Con- ference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial In- telligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 -February 1, 2019, pages 2547-2554. AAAI Press.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Model-based interactive semantic parsing: A unified framework and A text-to-sql case study",
"authors": [
{
"first": "Ziyu",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Huan",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019",
"volume": "",
"issue": "",
"pages": "5446--5457",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1547"
]
},
"num": null,
"urls": [],
"raw_text": "Ziyu Yao, Yu Su, Huan Sun, and Wen-tau Yih. 2019b. Model-based interactive semantic parsing: A unified framework and A text-to-sql case study. In Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 5446-5457. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "TRANX: A transition-based neural abstract syntax parser for semantic parsing and code generation",
"authors": [
{
"first": "Pengcheng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "2018",
"issue": "",
"pages": "7--12",
"other_ids": {
"DOI": [
"10.18653/v1/d18-2002"
]
},
"num": null,
"urls": [],
"raw_text": "Pengcheng Yin and Graham Neubig. 2018. TRANX: A transition-based neural abstract syntax parser for se- mantic parsing and code generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018: Sys- tem Demonstrations, Brussels, Belgium, October 31 -November 4, 2018, pages 7-12. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Tabert: Pretraining for joint understanding of textual and tabular data",
"authors": [
{
"first": "Pengcheng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Yih",
"middle": [],
"last": "Wen-Tau",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online",
"volume": "",
"issue": "",
"pages": "8413--8426",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pengcheng Yin, Graham Neubig, Wen-tau Yih, and Se- bastian Riedel. 2020. Tabert: Pretraining for joint understanding of textual and tabular data. In Pro- ceedings of the 58th Annual Meeting of the Associ- ation for Computational Linguistics, ACL 2020, On- line, July 5-10, 2020, pages 8413-8426. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Typesql: Knowledgebased type-aware neural text-to-sql generation",
"authors": [
{
"first": "Tao",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Zifan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Zilin",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [],
"last": "Radev",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1804.09769"
]
},
"num": null,
"urls": [],
"raw_text": "Tao Yu, Zifan Li, Zilin Zhang, Rui Zhang, and Dragomir Radev. 2018a. Typesql: Knowledge- based type-aware neural text-to-sql generation. arXiv preprint arXiv:1804.09769.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Spider: A largescale human-labeled dataset for complex and crossdomain semantic parsing and text-to-sql task",
"authors": [
{
"first": "Tao",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Michihiro",
"middle": [],
"last": "Yasunaga",
"suffix": ""
},
{
"first": "Dongxu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zifan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Irene",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Qingning",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Shanelle",
"middle": [],
"last": "Roman",
"suffix": ""
},
{
"first": "Zilin",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [],
"last": "Radev",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang, and Dragomir Radev. 2018b. Spider: A large- scale human-labeled dataset for complex and cross- domain semantic parsing and text-to-sql task. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Photon: A robust cross-domain text-to-SQL system",
"authors": [
{
"first": "Jichuan",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Xi",
"middle": [
"Victoria"
],
"last": "Lin",
"suffix": ""
},
{
"first": "Steven",
"middle": [
"C",
"H"
],
"last": "Hoi",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Lyu",
"suffix": ""
},
{
"first": "Irwin",
"middle": [],
"last": "King",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "204--214",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-demos.24"
]
},
"num": null,
"urls": [],
"raw_text": "Jichuan Zeng, Xi Victoria Lin, Steven C.H. Hoi, Richard Socher, Caiming Xiong, Michael Lyu, and Irwin King. 2020. Photon: A robust cross-domain text-to-SQL system. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 204- 214, Online. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Online learning of relaxed CCG grammars for parsing to logical form",
"authors": [
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)",
"volume": "",
"issue": "",
"pages": "678--687",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luke Zettlemoyer and Michael Collins. 2007. Online learning of relaxed CCG grammars for parsing to logical form. In Proceedings of the 2007 Joint Con- ference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 678-687, Prague, Czech Republic. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Seq2sql: Generating structured queries from natural language using reinforcement learning",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Victor Zhong, Caiming Xiong, and Richard Socher. 2017. Seq2sql: Generating structured queries from natural language using reinforcement learning. CoRR, abs/1709.00103.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Examples of search-style user queries.",
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"text": "SQL-Sketch fromXu et al. (2018).",
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"text": "ColloQL uses the same NN architecture as SQLova where six decoding layers (one for each component of the SQL-Sketch) are used over BERT. The SQL query (SELECT Player Name WHERE Jersey = 42) is constructed from outputs of different components. Unlike SQLova, we also contextualize the question with the table samples (underlined in the figure) delimited by special tokens.",
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"num": null,
"content": "<table><tr><td colspan=\"2\">0 (player name)</td><td colspan=\"2\">0 (no aggregation)</td><td/><td>1</td><td colspan=\"2\">[1] (Jersey)</td><td>[=]</td><td colspan=\"3\">[(2,2)] (span indices for \"42\")</td></tr><tr><td colspan=\"2\">Select Column</td><td colspan=\"2\">Aggregation Operator</td><td colspan=\"2\"># Where clauses LSTM</td><td colspan=\"2\">Where Column</td><td colspan=\"2\">Where Operator</td><td colspan=\"2\">Where Value</td></tr><tr><td>Column Attn</td><td/><td>Column Attn</td><td/><td/><td>Self Attn</td><td>Column Attn</td><td/><td>Column Attn</td><td/><td>Column Attn</td><td/></tr><tr><td>LSTM-q</td><td>LSTM-h</td><td>LSTM-q</td><td>LSTM-h</td><td>LSTM-q</td><td>LSTM-h</td><td>LSTM-q</td><td>LSTM-h</td><td>LSTM-q</td><td>LSTM-h</td><td>LSTM-q</td><td>LSTM-h</td></tr></table>",
"text": "",
"html": null,
"type_str": "table"
},
"TABREF2": {
"num": null,
"content": "<table><tr><td>SQL</td><td>SELECT (Grid) FROM 2-14125739-3 WHERE Rider = maria herrera AND Laps <</td></tr><tr><td/><td>200</td></tr><tr><td/><td>fox tv series female</td></tr><tr><td/><td>Animal Name || Jack | SELECT (TV Series) FROM 2-11206371-5 WHERE Species = fox AND Gender =</td></tr><tr><td/><td>female</td></tr><tr><td/><td>Where are Charlie Freedman/Eddie Fletcher from?</td></tr><tr><td/><td>Place || 7 | 9 | 1 [SEP] SELECT (Country) FROM 2-10301911-6 WHERE Rider = charlie freedman/eddie</td></tr><tr><td/><td>fletcher</td></tr></table>",
"text": "Modifying the architecture to operate on one column at a time (HydraNet) would allow us to use more samples.Our model performs significantly grid of bmw rider with > 200 laps Rider || Nicolas Terol | Mike Di Meglio | Stevie Bonsey [SEP] Manufacturer || Derbi | Honda | KTM [SEP] Laps || 1 | 24 | 0 [SEP] Grid || 20 | 29 | 25 . . . SQL SELECT (Grid) FROM 2-14125739-3 WHERE Manufacturer = bmw AND Laps > 200 grid of maria herrera rider with < 200 laps Rider || Nicolas Terol | Mike Di Meglio | Stevie Bonsey [SEP] Manufacturer || Derbi | Honda | KTM [SEP] Laps || 1 | 24 | 0 [SEP] Grid || 20 | 29 | 25 . . . The Big Owl | The Wild Boar [SEP] Species || Fox | Badger | Boar [SEP] Books || No | Yes [SEP] Gender || male | female . . . SQL Rider || Charlie Freedman/Eddie Fletcher | Mick Horsepole/E . . . [SEP] Country || West Germany | Switzerland | United Kingdom [SEP] . . . SQL",
"html": null,
"type_str": "table"
},
"TABREF3": {
"num": null,
"content": "<table><tr><td>Model</td><td/><td/><td colspan=\"2\">LF (dev) EX (dev)</td></tr><tr><td>SQLova BASE</td><td/><td/><td>79.5</td><td>85.3</td></tr><tr><td>SQLova LARGE</td><td/><td/><td>81.6</td><td>87.2</td></tr><tr><td>HydraNet LARGE</td><td colspan=\"2\">*</td><td>83.6</td><td>89.1</td></tr><tr><td colspan=\"2\">COLLOQL rand:1</td><td>\u2020</td><td>82.0</td><td>87.6</td></tr><tr><td colspan=\"2\">COLLOQL rand:3</td><td>\u2020</td><td>83.3</td><td>89.1</td></tr><tr><td colspan=\"2\">COLLOQL rand:5</td><td>\u2020</td><td>83.5</td><td>89.3</td></tr></table>",
"text": "Some qualitative examples from our random (1,2) and relevance (3,4) sampling models. Bold values in headers indicate a match in the question.better than our base SQLova model and performs competitively with other larger models.",
"html": null,
"type_str": "table"
},
"TABREF4": {
"num": null,
"content": "<table><tr><td>: Model performance with different sampling</td></tr><tr><td>settings. Rand:[1,3,5] uses random sampling. \u2020 indi-</td></tr><tr><td>cates that data augmentation is added.</td></tr></table>",
"text": "",
"html": null,
"type_str": "table"
},
"TABREF6": {
"num": null,
"content": "<table><tr><td>: Efficacy of different content incorporation</td></tr><tr><td>strategies. Relevance sampling (with 3 samples) gives</td></tr><tr><td>the best performance. \u2021denotes our implementation of</td></tr><tr><td>Photon.</td></tr></table>",
"text": "",
"html": null,
"type_str": "table"
},
"TABREF8": {
"num": null,
"content": "<table><tr><td>: Benchmarking different content incorporation</td></tr><tr><td>strategies with respect to execution time (CPU), mem-</td></tr><tr><td>ory footprint and setup time (for indexing).</td></tr></table>",
"text": "",
"html": null,
"type_str": "table"
},
"TABREF10": {
"num": null,
"content": "<table/>",
"text": "Performance on the curated test set i.e. 400 simplified queries.",
"html": null,
"type_str": "table"
},
"TABREF12": {
"num": null,
"content": "<table><tr><td colspan=\"4\">: Comparing logical form accuracy of SQLova</td></tr><tr><td colspan=\"4\">with augmentation. LF(short) is the dev accuracy on</td></tr><tr><td colspan=\"4\">the short questions. LF(dev) is the accuracy on the Wik-</td></tr><tr><td>iSQL dev split.</td><td/><td/><td/></tr><tr><td colspan=\"4\">6.6 Performance on WikiSQL test set</td></tr><tr><td colspan=\"4\">Finally, we also show the performance of our model</td></tr><tr><td colspan=\"4\">on the WikiSQL test dataset comparing them to the</td></tr><tr><td colspan=\"4\">top approaches on the WikiSQL leaderboard 5 . As</td></tr><tr><td colspan=\"4\">we can see in Table 8, COLLOQL achieves the high-</td></tr><tr><td colspan=\"4\">est accuracy without execution guided decoding on</td></tr><tr><td colspan=\"2\">the WikiSQL test set.</td><td/><td/></tr><tr><td>Model</td><td/><td colspan=\"2\">LF(test) EX(test)</td></tr><tr><td>HydraNet LARGE</td><td/><td>83.8</td><td>89.2</td></tr><tr><td>NL2SQL BASE</td><td/><td>83.7</td><td>89.2</td></tr><tr><td>COLLOQL rel:3</td><td>\u2020</td><td>84.9</td><td>90.7</td></tr></table>",
"text": "",
"html": null,
"type_str": "table"
},
"TABREF13": {
"num": null,
"content": "<table/>",
"text": "Performance on the WikiSQL test set.",
"html": null,
"type_str": "table"
},
"TABREF14": {
"num": null,
"content": "<table><tr><td/><td colspan=\"2\">SELECT COUNT(Winning driver) from</td><td>WHERE Rnd=5</td></tr><tr><td/><td>SELECT (Title) from</td><td>WHERE US airdate=4 April 2008</td></tr><tr><td/><td>SELECT (Score) from</td><td>WHERE Semi-Finalist 1=Miami</td></tr><tr><td/><td colspan=\"2\">SELECT (School/Club Team/Country) from</td><td>WHERE No.(s)=10 AND</td></tr><tr><td/><td>Position=Forward</td></tr><tr><td>Original</td><td colspan=\"2\">Which visitors have a leading scorer of roy : 25?</td></tr><tr><td>Simple</td><td>visitor of 25-18</td></tr><tr><td/><td># SELECT (Visitor) from</td><td>WHERE Leading scorer=Roy : 25</td></tr><tr><td/><td colspan=\"2\">SELECT COUNT(Year acquired) from</td><td>WHERE Station=CHAN</td></tr><tr><td>Original</td><td colspan=\"2\">What are the names that had a finalist score of 2??</td></tr><tr><td>Simple</td><td colspan=\"2\">names that had finalist score 2?</td></tr><tr><td/><td>SELECT (School) from</td><td>WHERE Finalists=2</td></tr></table>",
"text": "Simple the amount of trees, that require replacement district motovilikhinsky? District || Total amount of trees || Prevailing types, % || Amount of old trees || Amount of trees, that require replacement || ... SQL SELECT (Amount of trees, that require replacement) from WHERE District=Leninsky Original How many winning drivers were the for the rnd equalling 5? Simple how many winning drivers for 5? Rnd || Race Name || Circuit || City/Location || Date || Pole position || Winning driver || ... SQL Original For the episode(s) aired in the U.S. on 4 april 2008, what were the names? Simple for the episode(s) aired in U.S. 4 april 2008, names? No. in season || No. in series || Title || Canadian airdate || US airdate || Production code . . . SQL Original List the scores of all games when Miami were listed as the first Semi finalist? Simple scores with miami listed as first semi finalist? Year || Champion || Score || Runner-Up || Location || Semi-Finalist #1 || Semi-Finalist #2 . . . SQL Original What school did the forward whose number is 10 belong to? Simple what school did forward 10 Player || No.(s) || Height in Ft. || Position || Years for Rockets || School/Club Team/Country . . . SQL || Date || Visitor || Score || Home || Leading scorer || Attendance || Record || Streak . . . SQL Original how any were gained as the chan Simple how many gained chan City || Station || Year acquired || Primary programming source || Other programming sources . . . SQL School || Winners || Finalists || Total Finals || Year of last win SQL",
"html": null,
"type_str": "table"
},
"TABREF15": {
"num": null,
"content": "<table/>",
"text": "Examples in simple questions dev set. We use \" \" as placeholder for table in the SQL queries. Only table headers were shown. The top 4 examples are correct while the bottom 4 have issue in the natural language annotation.",
"html": null,
"type_str": "table"
}
}
}
} |