Commit d5183dd committed by system (HF staff), 0 parent(s)

Update files from the datasets library (from 1.13.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.13.0

Files changed (5):
  1. .gitattributes +27 -0
  2. README.md +171 -0
  3. dataset_infos.json +1 -0
  4. dummy/sede/1.0.0/dummy_data.zip +3 -0
  5. sede.py +96 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,171 @@
---
pretty_name: SEDE (Stack Exchange Data Explorer)
annotations_creators:
- no-annotation
language_creators:
- found
languages:
- en
licenses:
- apache-2-0
multilinguality:
- monolingual
paperswithcode_id: sede
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- structure-prediction
task_ids:
- parsing
---

# Dataset Card for SEDE

## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)

+ ## Dataset Description
50
+
51
+ - **Repository:** https://github.com/hirupert/sede
52
+ - **Paper:** https://arxiv.org/abs/2106.05006
53
+ - **Leaderboard:** https://paperswithcode.com/sota/text-to-sql-on-sede
54
+ - **Point of Contact:** moshe@hirupert.com
55
+
56
### Dataset Summary

SEDE (Stack Exchange Data Explorer) is a dataset for the Text-to-SQL task, containing more than 12,000 SQL queries together with their natural language descriptions. It is based on real usage of the Stack Exchange Data Explorer platform, which introduces complexities and challenges rarely reflected in other semantic parsing datasets, including complex nesting, date manipulation, numeric and text manipulation, parameters, and, most importantly, under-specification and hidden assumptions.

### Supported Tasks and Leaderboards

- `parsing`: The dataset can be used to train a model for the Text-to-SQL task. A Seq2Seq model (e.g. T5) can be used to solve the task; a model with more inductive bias (e.g. one with a grammar-based decoder) or an interactive setting for Text-to-SQL (https://arxiv.org/abs/2005.02539) can improve the results further. Model performance is measured by the [PCM-F1](https://arxiv.org/abs/2106.05006) metric. A [t5-large](https://huggingface.co/t5-large) model achieves a [PCM-F1 of 50.6](https://arxiv.org/abs/2106.05006).

### Languages

The text in the dataset is in English. The associated BCP-47 code is `en`.

## Dataset Structure

### Data Instances

A typical data point comprises a question title, (optionally) a description, and the underlying SQL query. In addition, each sample has a unique ID (from the Stack Exchange Data Explorer), its creation date, and a boolean flag named `validated` that indicates whether the sample was validated by humans to be of gold quality (see the paper for full details regarding the `validated` flag).

An example instance:

```
{
  "QuerySetId": 1233,
  "Title": "Top 500 Askers on the site",
  "Description": "A list of the top 500 askers of questions ordered by average answer score excluding community wiki closed posts.",
  "QueryBody": "SELECT * FROM (\nSELECT \n TOP 500\n OwnerUserId as [User Link],\n Count(Posts.Id) AS Questions,\n CAST(AVG(CAST(Score AS float)) as numeric(6,2)) AS [Average Question Score]\nFROM\n Posts\nWHERE \n PostTypeId = 1 and CommunityOwnedDate is null and ClosedDate is null\nGROUP BY\n OwnerUserId\nORDER BY\n Count(Posts.Id) DESC\n)ORDER BY\n [Average Question Score] DESC",
  "CreationDate": "2010-05-27 20:08:16",
  "validated": true
}
```
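
The splits are distributed as JSON Lines files (one record per line, read with `json.loads` in the loading script below), so a record like the one above can be parsed with Python's standard library alone. The literal here is an abridged version of the example record:

```python
import json

# One line of a SEDE JSONL split (QueryBody abridged here for readability).
line = (
    '{"QuerySetId": 1233,'
    ' "Title": "Top 500 Askers on the site",'
    ' "QueryBody": "SELECT * FROM ( SELECT TOP 500 ... ) ORDER BY ...",'
    ' "CreationDate": "2010-05-27 20:08:16",'
    ' "validated": true}'
)

record = json.loads(line)
print(record["Title"])      # Top 500 Askers on the site
print(record["validated"])  # True (JSON true -> Python bool)
```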

### Data Fields

- QuerySetId: a unique ID coming from the Stack Exchange Data Explorer.
- Title: the utterance title.
- Description: the utterance description (may be empty).
- QueryBody: the underlying SQL query.
- CreationDate: when the sample was created.
- validated: `true` if the sample was validated by humans to be of gold quality.

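For illustration (plain Python over already-parsed records; the records below are made up), the `validated` flag can be used to keep only the gold-quality samples:

```python
# Hypothetical parsed records; only the `validated` field matters here.
records = [
    {"QuerySetId": 1, "Title": "Top askers", "validated": True},
    {"QuerySetId": 2, "Title": "All posts by tag", "validated": False},
    {"QuerySetId": 3, "Title": "Answer ratio", "validated": True},
]

# Keep only samples validated by humans to be of gold quality.
gold = [r for r in records if r["validated"]]
print([r["QuerySetId"] for r in gold])  # [1, 3]
```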
### Data Splits

The data is split into a training, validation and test set. The validation and test sets contain only samples that were validated by humans to be of gold quality.

| Train  | Valid | Test |
|:------:|:-----:|:----:|
| 10,309 |  857  | 857  |

## Dataset Creation

### Curation Rationale

Most available semantic parsing datasets, comprising pairs of natural utterances and logical forms, were collected solely for the purpose of training and evaluating natural language understanding systems. As a result, they do not contain any of the richness and variety of naturally-occurring utterances, where humans ask about data they need or are curious about. SEDE contains a variety of real-world challenges that have rarely been reflected in other semantic parsing datasets. There is a large gap between performance on SEDE and on other common datasets, which leaves room for future research on the generalisation of Text-to-SQL models.

### Source Data

#### Initial Data Collection and Normalization

To introduce a realistic Text-to-SQL benchmark, we gather SQL queries together with their titles and descriptions from a naturally occurring dataset: the Stack Exchange Data Explorer. Stack Exchange is an online question-and-answer community with over 3 million questions asked. In its raw form, however, many of the rows are duplicated or contain unusable queries or titles; the reason for this large difference between the original data size and the cleaned version is that every time the author of a query executes it, an entry is saved to the log.

To alleviate these issues, we write rule-based filters that remove bad query/description pairs with high precision. For example, we filter out examples with numbers in the description if these numbers do not appear in the query (refer to the preprocessing script in the repository for the complete list of filters and the number of examples each of them filters). Whenever a query has multiple versions due to multiple executions, we take the last executed query that passed all filters. After this filtering step, we are left with 12,309 examples.

Using these filters cleans most of the noise, but not all of it. To complete the cleaning process, we manually go over the examples in the validation and test sets, and either filter out wrong examples or perform minimal changes to the utterances or the queries (for example, fixing a wrong textual value) to ensure that models are evaluated with correct data. The final number of training, validation and test examples is 12,023.

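As an illustrative sketch (not the authors' actual preprocessing code; see the repository's preprocessing script for the real filters), a number-consistency filter of the kind described above might look like:

```python
import re

def numbers_consistent(description: str, query: str) -> bool:
    """Return False if the description mentions a number that never
    appears in the SQL query (one of the rule-based filter ideas
    described above; illustrative only)."""
    description_numbers = set(re.findall(r"\d+", description))
    query_numbers = set(re.findall(r"\d+", query))
    return description_numbers <= query_numbers

# A pair whose description number (500) appears in the query passes:
print(numbers_consistent("Top 500 askers", "SELECT TOP 500 ..."))   # True
# A pair mentioning a number (1000) absent from the query is filtered:
print(numbers_consistent("Top 1000 askers", "SELECT TOP 500 ..."))  # False
```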
#### Who are the source language producers?

The language producers are Stack Exchange Data Explorer (https://data.stackexchange.com/) users.

### Annotations

#### Annotation process

[N/A]

#### Who are the annotators?

[N/A]

### Personal and Sensitive Information

All the data in the dataset is for public use.

## Considerations for Using the Data

### Social Impact of Dataset

We hope that the release of this challenging dataset will encourage research on improving generalisation for real-world SQL prediction, helping non-technical business users acquire the data they need from their company's database.

### Discussion of Biases

[N/A]

### Other Known Limitations

[Needs More Information]

## Additional Information

### Dataset Curators

The dataset was initially created by Moshe Hazoom, Vibhor Malik and Ben Bogin, during work done at Rupert.

### Licensing Information

Apache-2.0 License

### Citation Information

```
@misc{hazoom2021texttosql,
      title={Text-to-SQL in the Wild: A Naturally-Occurring Dataset Based on Stack Exchange Data},
      author={Moshe Hazoom and Vibhor Malik and Ben Bogin},
      year={2021},
      eprint={2106.05006},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

### Contributions

Thanks to [@Hazoom](https://github.com/Hazoom) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1 @@
 
{"sede": {"description": "SEDE (Stack Exchange Data Explorer) is a new dataset for Text-to-SQL tasks with more than 12,000 SQL queries and their\nnatural language description. It's based on a real usage of users from the Stack Exchange Data Explorer platform,\nwhich brings complexities and challenges never seen before in any other semantic parsing dataset like\nincluding complex nesting, dates manipulation, numeric and text manipulation, parameters, and most\nimportantly: under-specification and hidden-assumptions.\n\nPaper (NLP4Prog workshop at ACL2021): https://arxiv.org/abs/2106.05006\n", "citation": "@misc{hazoom2021texttosql,\n title={Text-to-SQL in the Wild: A Naturally-Occurring Dataset Based on Stack Exchange Data},\n author={Moshe Hazoom and Vibhor Malik and Ben Bogin},\n year={2021},\n eprint={2106.05006},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n", "homepage": "https://github.com/hirupert/sede", "license": "Apache-2.0 License", "features": {"QuerySetId": {"dtype": "uint32", "id": null, "_type": "Value"}, "Title": {"dtype": "string", "id": null, "_type": "Value"}, "Description": {"dtype": "string", "id": null, "_type": "Value"}, "QueryBody": {"dtype": "string", "id": null, "_type": "Value"}, "CreationDate": {"dtype": "string", "id": null, "_type": "Value"}, "validated": {"dtype": "bool", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "sede", "config_name": "sede", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 4410584, "num_examples": 10309, "dataset_name": "sede"}, "validation": {"name": "validation", "num_bytes": 380942, "num_examples": 857, "dataset_name": "sede"}, "test": {"name": "test", "num_bytes": 386599, "num_examples": 857, "dataset_name": "sede"}}, "download_checksums": {"https://raw.githubusercontent.com/hirupert/sede/main/data/sede/train.jsonl": {"num_bytes": 5390659, "checksum": "a03a0cfc0dd04158cc98fd5467ad5b45e001455459f18a4b9e30caeedf333871"}, "https://raw.githubusercontent.com/hirupert/sede/main/data/sede/val.jsonl": {"num_bytes": 460550, "checksum": "4495a0f12266c9c79682217a5a240f38c79ba7f52a4662721b27d9f547b4b090"}, "https://raw.githubusercontent.com/hirupert/sede/main/data/sede/test.jsonl": {"num_bytes": 467750, "checksum": "e7c0737408f8b22259b70805fcb13c6253b41178469156bbe458d39173ec9c49"}}, "download_size": 6318959, "post_processing_size": null, "dataset_size": 5178125, "size_in_bytes": 11497084}}
dummy/sede/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6e8c07400caefb248becc7d54cecba35214b0b1fb947a498084d4b68afe8ecaf
size 6714
sede.py ADDED
@@ -0,0 +1,96 @@
"""SEDE: Text-to-SQL in the Wild: A Naturally-Occurring Dataset Based on Stack Exchange Data."""


import json

import datasets


logger = datasets.logging.get_logger(__name__)

_CITATION = """\
@misc{hazoom2021texttosql,
    title={Text-to-SQL in the Wild: A Naturally-Occurring Dataset Based on Stack Exchange Data},
    author={Moshe Hazoom and Vibhor Malik and Ben Bogin},
    year={2021},
    eprint={2106.05006},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
"""

_DESCRIPTION = """\
SEDE (Stack Exchange Data Explorer) is a new dataset for Text-to-SQL tasks with more than 12,000 SQL queries and their
natural language description. It's based on a real usage of users from the Stack Exchange Data Explorer platform,
which brings complexities and challenges never seen before in any other semantic parsing dataset like
including complex nesting, dates manipulation, numeric and text manipulation, parameters, and most
importantly: under-specification and hidden-assumptions.

Paper (NLP4Prog workshop at ACL2021): https://arxiv.org/abs/2106.05006
"""


class SEDEConfig(datasets.BuilderConfig):
    """BuilderConfig for SEDE."""

    def __init__(self, **kwargs):
        """BuilderConfig for SEDE.

        Args:
            **kwargs: keyword arguments forwarded to super.
        """
        super(SEDEConfig, self).__init__(**kwargs)


class SEDE(datasets.GeneratorBasedBuilder):
    """SEDE Dataset: A Naturally-Occurring Dataset Based on Stack Exchange Data."""

    _DOWNLOAD_URL = "https://raw.githubusercontent.com/hirupert/sede/main/data/sede"
    _TRAIN_FILE = "train.jsonl"
    _VAL_FILE = "val.jsonl"
    _TEST_FILE = "test.jsonl"

    BUILDER_CONFIGS = [
        SEDEConfig(
            name="sede",
            version=datasets.Version("1.0.0"),
            description="SEDE Dataset: A Naturally-Occurring Dataset Based on Stack Exchange Data.",
        ),
    ]

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    "QuerySetId": datasets.Value("uint32"),
                    "Title": datasets.Value("string"),
                    "Description": datasets.Value("string"),
                    "QueryBody": datasets.Value("string"),
                    "CreationDate": datasets.Value("string"),
                    "validated": datasets.Value("bool"),
                }
            ),
            license="Apache-2.0 License",
            supervised_keys=None,
            homepage="https://github.com/hirupert/sede",
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        # Download the three JSONL splits from the SEDE repository.
        train_path = dl_manager.download_and_extract(self._DOWNLOAD_URL + "/" + self._TRAIN_FILE)
        val_path = dl_manager.download_and_extract(self._DOWNLOAD_URL + "/" + self._VAL_FILE)
        test_path = dl_manager.download_and_extract(self._DOWNLOAD_URL + "/" + self._TEST_FILE)
        return [
            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"data_filepath": train_path}),
            datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"data_filepath": val_path}),
            datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"data_filepath": test_path}),
        ]

    def _generate_examples(self, data_filepath):
        """Generate SEDE examples, one per JSONL line."""
        logger.info("generating examples from = %s", data_filepath)
        with open(data_filepath, encoding="utf-8") as f:
            for idx, sample_str in enumerate(f):
                sample_json = json.loads(sample_str)
                yield idx, sample_json