parquet-converter committed on
Commit 0a91462
Parent(s): 404aa8c

Update parquet files

.gitattributes DELETED
@@ -1,27 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text

README.md DELETED
@@ -1,218 +0,0 @@
- ---
- annotations_creators:
- - crowdsourced
- language:
- - en
- language_creators:
- - found
- license:
- - cc-by-4.0
- multilinguality:
- - monolingual
- pretty_name: Quoref
- size_categories:
- - 10K<n<100K
- source_datasets:
- - original
- task_categories:
- - question-answering
- task_ids: []
- paperswithcode_id: quoref
- tags:
- - coreference-resolution
- dataset_info:
-   features:
-   - name: id
-     dtype: string
-   - name: question
-     dtype: string
-   - name: context
-     dtype: string
-   - name: title
-     dtype: string
-   - name: url
-     dtype: string
-   - name: answers
-     sequence:
-     - name: answer_start
-       dtype: int32
-     - name: text
-       dtype: string
-   splits:
-   - name: train
-     num_bytes: 44377729
-     num_examples: 19399
-   - name: validation
-     num_bytes: 5442031
-     num_examples: 2418
-   download_size: 5078438
-   dataset_size: 49819760
- ---
-
- # Dataset Card for "quoref"
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** https://allenai.org/data/quoref
- - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Paper:** [Quoref: A Reading Comprehension Dataset with Questions Requiring Coreferential Reasoning](https://aclanthology.org/D19-1606/)
- - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Size of downloaded dataset files:** 4.84 MB
- - **Size of the generated dataset:** 47.51 MB
- - **Total amount of disk used:** 52.36 MB
-
- ### Dataset Summary
-
- Quoref is a QA dataset which tests the coreferential reasoning capability of reading comprehension systems. In this
- span-selection benchmark containing 24K questions over 4.7K paragraphs from Wikipedia, a system must resolve hard
- coreferences before selecting the appropriate span(s) in the paragraphs for answering questions.
-
- ### Supported Tasks and Leaderboards
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Languages
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Dataset Structure
-
- ### Data Instances
-
- #### default
-
- - **Size of downloaded dataset files:** 4.84 MB
- - **Size of the generated dataset:** 47.51 MB
- - **Total amount of disk used:** 52.36 MB
-
- An example of 'validation' looks as follows.
- ```
- This example was too long and was cropped:
-
- {
-     "answers": {
-         "answer_start": [1633],
-         "text": ["Frankie"]
-     },
-     "context": "\"Frankie Bono, a mentally disturbed hitman from Cleveland, comes back to his hometown in New York City during Christmas week to ...",
-     "id": "bfc3b34d6b7e73c0bd82a009db12e9ce196b53e6",
-     "question": "What is the first name of the person who has until New Year's Eve to perform a hit?",
-     "title": "Blast of Silence",
-     "url": "https://en.wikipedia.org/wiki/Blast_of_Silence"
- }
- ```
-
- ### Data Fields
-
- The data fields are the same among all splits.
-
- #### default
- - `id`: a `string` feature.
- - `question`: a `string` feature.
- - `context`: a `string` feature.
- - `title`: a `string` feature.
- - `url`: a `string` feature.
- - `answers`: a dictionary feature containing:
-   - `answer_start`: an `int32` feature.
-   - `text`: a `string` feature.
-
- ### Data Splits
-
- | name  |train|validation|
- |-------|----:|---------:|
- |default|19399|      2418|
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the source language producers?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the annotators?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Personal and Sensitive Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Discussion of Biases
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Other Known Limitations
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Licensing Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Citation Information
-
- ```
- @article{allenai:quoref,
-   author    = {Pradeep Dasigi and Nelson F. Liu and Ana Marasovic and Noah A. Smith and Matt Gardner},
-   title     = {Quoref: A Reading Comprehension Dataset with Questions Requiring Coreferential Reasoning},
-   journal   = {arXiv:1908.05803v2},
-   year      = {2019},
- }
-
- ```
-
-
- ### Contributions
-
- Thanks to [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
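One structural detail worth noting from the deleted card: each entry in `answers` pairs an `answer_start` character offset with the `text` it locates in `context` (the offsets in the cropped example above do not resolve only because the context was truncated). A minimal sanity-check sketch of that SQuAD-style convention, assuming `example` is a full, uncropped record in the card's format:

```python
# Sketch: verify the answers convention described in the card above.
# `example` is assumed to be a full (uncropped) record with the card's fields.
def answers_align(example: dict) -> bool:
    context = example["context"]
    starts = example["answers"]["answer_start"]
    texts = example["answers"]["text"]
    # Each answer text should appear in context at its recorded offset.
    return all(context[s : s + len(t)] == t for s, t in zip(starts, texts))
```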
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"default": {"description": "Quoref is a QA dataset which tests the coreferential reasoning capability of reading comprehension systems. In this \nspan-selection benchmark containing 24K questions over 4.7K paragraphs from Wikipedia, a system must resolve hard \ncoreferences before selecting the appropriate span(s) in the paragraphs for answering questions.\n", "citation": "@article{allenai:quoref,\n author = {Pradeep Dasigi and Nelson F. Liu and Ana Marasovic and Noah A. Smith and Matt Gardner},\n title = {Quoref: A Reading Comprehension Dataset with Questions Requiring Coreferential Reasoning},\n journal = {arXiv:1908.05803v2 },\n year = {2019},\n}\n", "homepage": "https://leaderboard.allenai.org/quoref/submissions/get-started", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "context": {"dtype": "string", "id": null, "_type": "Value"}, "title": {"dtype": "string", "id": null, "_type": "Value"}, "url": {"dtype": "string", "id": null, "_type": "Value"}, "answers": {"feature": {"answer_start": {"dtype": "int32", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "supervised_keys": null, "builder_name": "quoref", "config_name": "default", "version": {"version_str": "0.1.0", "description": null, "datasets_version_to_prepare": null, "major": 0, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 44377729, "num_examples": 19399, "dataset_name": "quoref"}, "validation": {"name": "validation", "num_bytes": 5442031, "num_examples": 2418, "dataset_name": "quoref"}}, "download_checksums": {"https://quoref-dataset.s3-us-west-2.amazonaws.com/train_and_dev/quoref-train-dev-v0.1.zip": {"num_bytes": 5078438, "checksum": "aacde0863c04ba6e9ab46995ea844a5b0c6cea58a77ab6fd86a128e33a3ad8fb"}}, "download_size": 5078438, "dataset_size": 49819760, "size_in_bytes": 54898198}}
 
 
default/quoref-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5f4950e5d11fd56be19912297ce16a72b3f69434d9e3f73b1f5c9bbcbc996eac
+ size 7062528
default/quoref-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bfe5fccde7c9bd54d9c813f9c0e19f61d3b04b8a5779ee7d6d0b7988cc65ea6a
+ size 867674
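The two ADDED entries above are Git LFS pointer files for the converted Parquet splits. As a quick way to inspect the conversion output, the splits can be read directly once the actual Parquet payloads are pulled; a minimal sketch, assuming local copies at the repository paths shown and pandas with pyarrow installed:

```python
# Sketch: inspect the converted Parquet splits locally.
# Assumes the real Parquet payloads (not the LFS pointer files) are present.
import pandas as pd

train = pd.read_parquet("default/quoref-train.parquet")
validation = pd.read_parquet("default/quoref-validation.parquet")

# Per the dataset card above: 19399 train / 2418 validation examples,
# with columns id, question, context, title, url, answers.
print(train.shape, validation.shape)
print(list(train.columns))
```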
quoref.py DELETED
@@ -1,116 +0,0 @@
- """TODO(quoref): Add a description here."""
-
-
- import json
- import os
-
- import datasets
-
-
- # TODO(quoref): BibTeX citation
- _CITATION = """\
- @article{allenai:quoref,
-   author    = {Pradeep Dasigi and Nelson F. Liu and Ana Marasovic and Noah A. Smith and Matt Gardner},
-   title     = {Quoref: A Reading Comprehension Dataset with Questions Requiring Coreferential Reasoning},
-   journal   = {arXiv:1908.05803v2},
-   year      = {2019},
- }
- """
-
- # TODO(quoref):
- _DESCRIPTION = """\
- Quoref is a QA dataset which tests the coreferential reasoning capability of reading comprehension systems. In this
- span-selection benchmark containing 24K questions over 4.7K paragraphs from Wikipedia, a system must resolve hard
- coreferences before selecting the appropriate span(s) in the paragraphs for answering questions.
- """
-
- _URL = "https://quoref-dataset.s3-us-west-2.amazonaws.com/train_and_dev/quoref-train-dev-v0.1.zip"
-
-
- class Quoref(datasets.GeneratorBasedBuilder):
-     """TODO(quoref): Short description of my dataset."""
-
-     # TODO(quoref): Set up version.
-     VERSION = datasets.Version("0.1.0")
-
-     def _info(self):
-         # TODO(quoref): Specifies the datasets.DatasetInfo object
-         return datasets.DatasetInfo(
-             # This is the description that will appear on the datasets page.
-             description=_DESCRIPTION,
-             # datasets.features.FeatureConnectors
-             features=datasets.Features(
-                 {
-                     "id": datasets.Value("string"),
-                     "question": datasets.Value("string"),
-                     "context": datasets.Value("string"),
-                     "title": datasets.Value("string"),
-                     "url": datasets.Value("string"),
-                     "answers": datasets.features.Sequence(
-                         {
-                             "answer_start": datasets.Value("int32"),
-                             "text": datasets.Value("string"),
-                         }
-                     )
-                     # These are the features of your dataset like images, labels ...
-                 }
-             ),
-             # If there's a common (input, target) tuple from the features,
-             # specify them here. They'll be used if as_supervised=True in
-             # builder.as_dataset.
-             supervised_keys=None,
-             # Homepage of the dataset for documentation
-             homepage="https://leaderboard.allenai.org/quoref/submissions/get-started",
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         # TODO(quoref): Downloads the data and defines the splits
-         # dl_manager is a datasets.download.DownloadManager that can be used to
-         # download and extract URLs
-         dl_dir = dl_manager.download_and_extract(_URL)
-         data_dir = os.path.join(dl_dir, "quoref-train-dev-v0.1")
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={"filepath": os.path.join(data_dir, "quoref-train-v0.1.json")},
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={"filepath": os.path.join(data_dir, "quoref-dev-v0.1.json")},
-             ),
-         ]
-
-     def _generate_examples(self, filepath):
-         """Yields examples."""
-         # TODO(quoref): Yields (key, example) tuples from the dataset
-         with open(filepath, encoding="utf-8") as f:
-             data = json.load(f)
-             for article in data["data"]:
-                 title = article.get("title", "").strip()
-                 url = article.get("url", "").strip()
-                 for paragraph in article["paragraphs"]:
-                     context = paragraph["context"].strip()
-                     for qa in paragraph["qas"]:
-                         question = qa["question"].strip()
-                         id_ = qa["id"]
-
-                         answer_starts = [answer["answer_start"] for answer in qa["answers"]]
-                         answers = [answer["text"].strip() for answer in qa["answers"]]
-
-                         # Features currently used are "context", "question", and "answers".
-                         # Others are extracted here for the ease of future expansions.
-                         yield id_, {
-                             "title": title,
-                             "context": context,
-                             "question": question,
-                             "id": id_,
-                             "answers": {
-                                 "answer_start": answer_starts,
-                                 "text": answers,
-                             },
-                             "url": url,
-                         }
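With `quoref.py` deleted, this revision serves the dataset from the Parquet files alone, and `datasets.load_dataset` resolves them without executing a loading script. A minimal sketch of that call (the repo id `quoref` is taken from the card above; a recent `datasets` version is assumed):

```python
# Sketch: load Quoref from the auto-converted Parquet files; in script-free
# revisions like this one, no custom loading code runs.
from datasets import load_dataset

ds = load_dataset("quoref")
example = ds["validation"][0]
print(example["question"])
print(example["answers"]["text"])  # list of answer strings
```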