---
license: mit
task_categories:
- table-question-answering
language:
- en
pretty_name: SQUALL
size_categories:
- 10K<n<100K
---


## SQUALL Dataset
To explore the utility of fine-grained, lexical-level supervision, the authors introduce SQUALL, a dataset that enriches 11,276 English-language questions from WikiTableQuestions with manually created SQL equivalents, plus alignments between SQL fragments and question fragments. The full dataset is split into 5 folds, with one fold held out as the development set at a time. The dataset configuration (subset) name selects which fold is used as the validation set.

Warning: the labels of the test set are not available.

## Source
Please refer to the [GitHub repository](https://github.com/tzshi/squall/) for the source data.

## Use
```python
from datasets import load_dataset

# The configuration name ("0" here) selects which of the 5 folds is used as the validation set.
dataset = load_dataset("siyue/squall", "0")
```
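
Once loaded, the folds can be inspected like any other `datasets` object. The snippet below is a small sketch assuming the loader exposes the usual `train`/`validation`/`test` split names:

```python
from datasets import load_dataset

# Load fold "0"; split names are assumed to be train/validation/test.
dataset = load_dataset("siyue/squall", "0")

print(dataset)                    # available splits and their sizes
sample = dataset["train"][0]      # one enriched WikiTableQuestions example
print(sample["nl"])               # tokenized natural-language question
print(sample["sql"]["value"])     # token-level SQL annotation
```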
Example:
```python
{
	'nt': 'nt-10922', 
	'tbl': '204_879', 
	'columns': 
		{
			'raw_header': ['year', 'host / location', 'division i overall', 'division i undergraduate', 'division ii overall', 'division ii community college'], 
			'tokenized_header': [['year'], ['host', '\\\\/', 'location'], ['division', 'i', 'overall'], ['division', 'i', 'undergraduate'], ['division', 'ii', 'overall'], ['division', 'ii', 'community', 'college']], 
			'column_suffixes': [['number'], ['address'], [], [], [], []], 
			'column_dtype': ['number', 'address', 'text', 'text', 'text', 'text'], 
			'example': ['1997', 'penn', 'chicago', 'swarthmore', 'harvard', 'valencia cc']
		}, 
	'nl': ['when', 'was', 'the', 'last', 'time', 'the', 'event', 'was', 'held', 'in', 'minnesota', '?'], 
	'nl_pos': ['WRB', 'VBD-AUX', 'DT', 'JJ', 'NN', 'DT', 'NN', 'VBD-AUX', 'VBN', 'IN', 'NNP', '.'], 
	'nl_ner': ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'LOCATION', 'O'], 
	'nl_incolumns': [False, False, False, False, False, False, False, False, False, False, False, False], 
	'nl_incells': [False, False, False, False, False, False, False, False, False, False, True, False], 
	'columns_innl': [False, False, False, False, False, False], 
	'tgt': '2007', 
	'sql': 
		{
			'sql_type': ['Keyword', 'Column', 'Keyword', 'Keyword', 'Keyword', 'Column', 'Keyword', 'Literal.String', 'Keyword', 'Keyword', 'Column', 'Keyword', 'Keyword', 'Keyword'], 
			'value': ['select', 'c1', 'from', 'w', 'where', 'c2', '=', "'minnesota'", 'order', 'by', 'c1_number', 'desc', 'limit', '1'], 
			'span_indices': [[], [], [], [], [], [], [], [10, 10], [], [], [], [], [], []]
		},
	'nl_ralign': 
		{
			'aligned_sql_token_type': ['None', 'None', 'Column', 'Column', 'Column', 'None', 'None', 'None', 'Column', 'Column', 'Literal', 'None'], 
			'aligned_sql_token_info': [None, None, 'c1_number', 'c1_number', 'c1', None, None, None, 'c2', 'c2', None, None], 
			'align': 
				{
					'nl_indices': [[10], [9, 8], [4], [3, 2]], 
					'sql_indices': [[7], [5], [1], [8, 9, 10, 11, 12, 13]]
				}
		},
	'align': 
		{
			'nl_indices': [[10], [9, 8], [4], [3, 2]], 
			'sql_indices': [[7], [5], [1], [8, 9, 10, 11, 12, 13]]
		}
}
```
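
The token-level `sql` and `align` fields can be recombined into a readable query and inspected for question/SQL correspondences. The following is a minimal sketch (not an official utility); the mapping from column tokens such as `c1` back to `raw_header` assumes columns are numbered in header order, as in the example above:

```python
def sql_string(example):
    # Join the token-level SQL annotation into a single query string, e.g.
    # "select c1 from w where c2 = 'minnesota' order by c1_number desc limit 1".
    return " ".join(example["sql"]["value"])

def column_header(example, col_token):
    # Map a column token like "c2" or "c1_number" to its raw header,
    # assuming cN refers to the N-th table column (1-indexed).
    idx = int(col_token.lstrip("c").split("_")[0]) - 1
    return example["columns"]["raw_header"][idx]

def aligned_pairs(example):
    # Pair up aligned question tokens and SQL tokens using the 'align' field.
    nl, sql = example["nl"], example["sql"]["value"]
    for nl_idx, sql_idx in zip(example["align"]["nl_indices"],
                               example["align"]["sql_indices"]):
        yield ([nl[i] for i in nl_idx], [sql[i] for i in sql_idx])
```

For the record above, `aligned_pairs` would yield pairs such as `(['minnesota'], ["'minnesota'"])` and `(['in', 'held'], ['c2'])`.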

## Contact
For any issues or questions, please email Siyue Zhang (siyue001@e.ntu.edu.sg).

## Citation
```
@inproceedings{Shi:Zhao:Boyd-Graber:Daume-III:Lee-2020,
	Title = {On the Potential of Lexico-logical Alignments for Semantic Parsing to {SQL} Queries},
	Author = {Tianze Shi and Chen Zhao and Jordan Boyd-Graber and Hal {Daum\'{e} III} and Lillian Lee},
	Booktitle = {Findings of EMNLP},
	Year = {2020},
}
```