richardr1126 committed on
Commit 1abc35b
1 Parent(s): f27963f

Upload 2 files

Files changed (2):
  1. README.md +191 -0
  2. spider_schma.json +0 -0
README.md ADDED
@@ -0,0 +1,191 @@
---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
- machine-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
paperswithcode_id: spider-1
pretty_name: Spider
tags:
- text-to-sql
dataset_info:
  features:
  - name: db_id
    dtype: string
  - name: query
    dtype: string
  - name: question
    dtype: string
  - name: query_toks
    sequence: string
  - name: query_toks_no_value
    sequence: string
  - name: question_toks
    sequence: string
  config_name: spider
  splits:
  - name: train
    num_bytes: 4743786
    num_examples: 7000
  - name: validation
    num_bytes: 682090
    num_examples: 1034
  download_size: 99736136
  dataset_size: 5425876
---

# Dataset Card for Spider

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://yale-lily.github.io/spider
- **Repository:** https://github.com/taoyds/spider
- **Paper:** https://www.aclweb.org/anthology/D18-1425/
- **Point of Contact:** [Yale LILY](https://yale-lily.github.io/)

### Dataset Summary

Spider is a large-scale, complex, cross-domain semantic parsing and text-to-SQL dataset annotated by 11 Yale students. The goal of the Spider challenge is to develop natural language interfaces to cross-domain databases.

### Supported Tasks and Leaderboards

The leaderboard can be seen at https://yale-lily.github.io/spider.

### Languages

The text in the dataset is in English.

## Dataset Structure

### Data Instances

**What do the instances that comprise the dataset represent?**

Each instance is a natural language question paired with the equivalent SQL query.

**How many instances are there in total?**

There are 8,034 instances in total: 7,000 in the train split and 1,034 in the validation split.

**What data does each instance consist of?**

[More Information Needed]

### Data Fields

* **db_id**: Name of the database the query runs against
* **question**: Natural language question to interpret into SQL
* **query**: Target SQL query
* **query_toks**: List of tokens for the query
* **query_toks_no_value**: List of tokens for the query, with literal values replaced by a placeholder
* **question_toks**: List of tokens for the question
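To illustrate the schema, the sketch below builds one hypothetical instance with these fields and derives `query_toks_no_value` from `query_toks` by masking literal values. This is not the dataset's own pipeline: the whitespace tokenizer, the `mask_values` helper, and the example database `concert_singer` are illustrative assumptions, and Spider's actual tokenization is more involved.

```python
# Hypothetical Spider-style instance; field names follow the schema above.
# Tokenization is naive whitespace splitting, for illustration only.

def mask_values(query_toks):
    """Replace literal values (quoted strings and numbers) with 'value'."""
    masked = []
    for tok in query_toks:
        if tok.startswith(("'", '"')) or tok.replace(".", "", 1).isdigit():
            masked.append("value")
        else:
            masked.append(tok)
    return masked

question = "How many singers are older than 30?"
query = "SELECT count(*) FROM singer WHERE age > 30"

instance = {
    "db_id": "concert_singer",
    "question": question,
    "query": query,
    "question_toks": question.split(),
    "query_toks": query.split(),
}
instance["query_toks_no_value"] = mask_values(instance["query_toks"])

print(instance["query_toks_no_value"])
# ['SELECT', 'count(*)', 'FROM', 'singer', 'WHERE', 'age', '>', 'value']
```

The value-masked token list is what text-to-SQL systems typically train against when they predict query structure separately from literal values.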

### Data Splits

- **train**: 7,000 question and SQL query pairs
- **dev**: 1,034 question and SQL query pairs

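These counts match the `dataset_info` block in the card header; a quick arithmetic sketch of the totals and proportions:

```python
# Split sizes as stated in this card's dataset_info block.
splits = {"train": 7000, "validation": 1034}

total = sum(splits.values())              # 8034 instances overall
train_share = splits["train"] / total     # fraction of data used for training

print(total)                  # 8034
print(round(train_share, 3))  # 0.871
```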
## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

#### Who are the source language producers?

[More Information Needed]

### Annotations

The dataset was annotated by 11 college students at Yale University.

#### Annotation process

#### Who are the annotators?

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

## Additional Information

The authors listed on the homepage maintain and support the dataset.

### Dataset Curators

[More Information Needed]

### Licensing Information

The Spider dataset is licensed under the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode) license.

### Citation Information

```
@article{yu2018spider,
  title={Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-sql task},
  author={Yu, Tao and Zhang, Rui and Yang, Kai and Yasunaga, Michihiro and Wang, Dongxu and Li, Zifan and Ma, James and Li, Irene and Yao, Qingning and Roman, Shanelle and others},
  journal={arXiv preprint arXiv:1809.08887},
  year={2018}
}
```

### Contributions

Thanks to [@olinguyen](https://github.com/olinguyen) for adding this dataset.
spider_schma.json ADDED
The diff for this file is too large to render.