parquet-converter committed on
Commit 20eb104
1 Parent(s): 5a5ac1f

Update parquet files

README.md DELETED
@@ -1,213 +0,0 @@
- ---
- annotations_creators:
- - no-annotation
- language_creators:
- - expert-generated
- language:
- - en
- license:
- - unknown
- multilinguality:
- - monolingual
- pretty_name: QaZre
- size_categories:
- - 1M<n<10M
- source_datasets:
- - original
- task_categories:
- - question-answering
- task_ids: []
- paperswithcode_id: null
- tags:
- - zero-shot-relation-extraction
- dataset_info:
-   features:
-   - name: relation
-     dtype: string
-   - name: question
-     dtype: string
-   - name: subject
-     dtype: string
-   - name: context
-     dtype: string
-   - name: answers
-     sequence: string
-   splits:
-   - name: test
-     num_bytes: 29410194
-     num_examples: 120000
-   - name: validation
-     num_bytes: 1481430
-     num_examples: 6000
-   - name: train
-     num_bytes: 2054954011
-     num_examples: 8400000
-   download_size: 516061636
-   dataset_size: 2085845635
- ---
-
- # Dataset Card for QaZre
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** [http://nlp.cs.washington.edu/zeroshot](http://nlp.cs.washington.edu/zeroshot)
- - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Size of downloaded dataset files:** 492.15 MB
- - **Size of the generated dataset:** 1989.22 MB
- - **Total amount of disk used:** 2481.37 MB
-
- ### Dataset Summary
-
- A dataset reducing relation extraction to simple reading comprehension questions
-
- ### Supported Tasks and Leaderboards
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Languages
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Dataset Structure
-
- ### Data Instances
-
- #### default
-
- - **Size of downloaded dataset files:** 492.15 MB
- - **Size of the generated dataset:** 1989.22 MB
- - **Total amount of disk used:** 2481.37 MB
-
- An example of 'validation' looks as follows.
- ```
- {
-     "answers": [],
-     "context": "answer",
-     "question": "What is XXX in this question?",
-     "relation": "relation_name",
-     "subject": "Some entity Here is a bit of context which will explain the question in some way"
- }
- ```
-
- ### Data Fields
-
- The data fields are the same among all splits.
-
- #### default
- - `relation`: a `string` feature.
- - `question`: a `string` feature.
- - `subject`: a `string` feature.
- - `context`: a `string` feature.
- - `answers`: a `list` of `string` features.
-
- ### Data Splits
-
- | name    |   train | validation |   test |
- |---------|--------:|-----------:|-------:|
- | default | 8400000 |       6000 | 120000 |
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the source language producers?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the annotators?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Personal and Sensitive Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Discussion of Biases
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Other Known Limitations
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Licensing Information
-
- Unknown.
-
- ### Citation Information
-
- ```
- @inproceedings{levy-etal-2017-zero,
-     title = "Zero-Shot Relation Extraction via Reading Comprehension",
-     author = "Levy, Omer and
-       Seo, Minjoon and
-       Choi, Eunsol and
-       Zettlemoyer, Luke",
-     booktitle = "Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017)",
-     month = aug,
-     year = "2017",
-     address = "Vancouver, Canada",
-     publisher = "Association for Computational Linguistics",
-     url = "https://www.aclweb.org/anthology/K17-1034",
-     doi = "10.18653/v1/K17-1034",
-     pages = "333--342",
- }
- ```
-
- ### Contributions
-
- Thanks to [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@ghomasHudson](https://github.com/ghomasHudson), [@lewtun](https://github.com/lewtun) for adding this dataset.
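
The card above is deleted by this commit, but the dataset itself stays loadable: after conversion, the `datasets` library reads the parquet shards directly. A minimal sketch, assuming the repo keeps the Hub id `qa_zre` (inferred from the file names below, not stated in this commit):

```python
# Minimal usage sketch -- assumes the dataset is still published on the
# Hugging Face Hub under the id "qa_zre" (an assumption; substitute the
# actual repo id if it differs).
from datasets import load_dataset

dataset = load_dataset("qa_zre")

# Split sizes per the deleted card: 8,400,000 train / 6,000 validation / 120,000 test.
print(dataset)

# Each record carries the five fields listed under "Data Fields" above.
example = dataset["validation"][0]
print(example["relation"], example["question"], example["answers"])
```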
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"default": {"description": "A dataset reducing relation extraction to simple reading comprehension questions\n", "citation": "@inproceedings{levy-etal-2017-zero,\n title = \"Zero-Shot Relation Extraction via Reading Comprehension\",\n author = \"Levy, Omer and\n Seo, Minjoon and\n Choi, Eunsol and\n Zettlemoyer, Luke\",\n booktitle = \"Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017)\",\n month = aug,\n year = \"2017\",\n address = \"Vancouver, Canada\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://www.aclweb.org/anthology/K17-1034\",\n doi = \"10.18653/v1/K17-1034\",\n pages = \"333--342\",\n}\n", "homepage": "http://nlp.cs.washington.edu/zeroshot", "license": "", "features": {"relation": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "subject": {"dtype": "string", "id": null, "_type": "Value"}, "context": {"dtype": "string", "id": null, "_type": "Value"}, "answers": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "supervised_keys": null, "builder_name": "qa_zre", "config_name": "default", "version": {"version_str": "0.1.0", "description": null, "datasets_version_to_prepare": null, "major": 0, "minor": 1, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 29410194, "num_examples": 120000, "dataset_name": "qa_zre"}, "validation": {"name": "validation", "num_bytes": 1481430, "num_examples": 6000, "dataset_name": "qa_zre"}, "train": {"name": "train", "num_bytes": 2054954011, "num_examples": 8400000, "dataset_name": "qa_zre"}}, "download_checksums": {"http://nlp.cs.washington.edu/zeroshot/relation_splits.tar.bz2": {"num_bytes": 516061636, "checksum": "e33d0e367b6e837370da17a2d09d217e0a92f8d180f7abb3fd543a2d1726b2b4"}}, "download_size": 516061636, "dataset_size": 2085845635, "size_in_bytes": 2601907271}}
 
 
default/qa_zre-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a617e9bfc50aa0491dbf1f3719bff386694a2689befb86577bfcc564d9744906
+ size 16279741
default/qa_zre-train-00000-of-00005.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:716d956b028ef65ec9b8319b38d7700fb1043b8baaa8bdae064d7f3a958cf76c
+ size 302841429
default/qa_zre-train-00001-of-00005.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:74d9d48c860fadb0b1d78e2bf6d7f96d4f4fdae674b528bc8630b53a5b74685b
+ size 302820523
default/qa_zre-train-00002-of-00005.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cf9f2d3b559fcd70307a61ce842c3545bbdaea7a613709b1c82cc6b592da7a08
+ size 303026926
default/qa_zre-train-00003-of-00005.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:65a8a29ad6c54de7e4a829207b1cf441a33134870662227b49bc1c1cdbee88c2
+ size 303133431
default/qa_zre-train-00004-of-00005.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6ee9f7fc923134399bb9eaef58e30402800ce285f6a617fdf962bcc592fed0a3
+ size 33288457
default/qa_zre-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:52f12264e2a0ba864bbfcba67ea002abaa0858d8670c90d86d2e99c8384c1f2e
+ size 833652
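
The three-line entries above are Git LFS pointers (`version`, `oid`, `size`), not the parquet bytes themselves; the actual shards are fetched from LFS storage by their sha256 oid. Once materialized, a shard can be inspected directly. A minimal sketch with pyarrow, assuming the validation shard has already been pulled to the path shown:

```python
# Read one converted shard directly -- assumes the file has already been
# fetched from Git LFS to this local path.
import pyarrow.parquet as pq

table = pq.read_table("default/qa_zre-validation.parquet")
print(table.schema)    # expected columns: relation, question, subject, context, answers
print(table.num_rows)  # 6000 for the validation split, per the deleted card

# Convert to pandas for a quick look at the first rows.
df = table.to_pandas()
print(df.head())
```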
qa_zre.py DELETED
@@ -1,99 +0,0 @@
- """A dataset reducing relation extraction to simple reading comprehension questions"""
-
- import csv
- import os
-
- import datasets
-
-
- _CITATION = """\
- @inproceedings{levy-etal-2017-zero,
-     title = "Zero-Shot Relation Extraction via Reading Comprehension",
-     author = "Levy, Omer and
-       Seo, Minjoon and
-       Choi, Eunsol and
-       Zettlemoyer, Luke",
-     booktitle = "Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017)",
-     month = aug,
-     year = "2017",
-     address = "Vancouver, Canada",
-     publisher = "Association for Computational Linguistics",
-     url = "https://www.aclweb.org/anthology/K17-1034",
-     doi = "10.18653/v1/K17-1034",
-     pages = "333--342",
- }
- """
-
- _DESCRIPTION = """\
- A dataset reducing relation extraction to simple reading comprehension questions
- """
-
- _DATA_URL = "http://nlp.cs.washington.edu/zeroshot/relation_splits.tar.bz2"
-
-
- class QaZre(datasets.GeneratorBasedBuilder):
-     """QA-ZRE: Reducing relation extraction to simple reading comprehension questions"""
-
-     VERSION = datasets.Version("0.1.0")
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {
-                     "relation": datasets.Value("string"),
-                     "question": datasets.Value("string"),
-                     "subject": datasets.Value("string"),
-                     "context": datasets.Value("string"),
-                     "answers": datasets.features.Sequence(datasets.Value("string")),
-                 }
-             ),
-             # If there's a common (input, target) tuple from the features,
-             # specify them here. They'll be used if as_supervised=True in
-             # builder.as_dataset.
-             supervised_keys=None,
-             # Homepage of the dataset for documentation
-             homepage="http://nlp.cs.washington.edu/zeroshot",
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         dl_dir = dl_manager.download_and_extract(_DATA_URL)
-         dl_dir = os.path.join(dl_dir, "relation_splits")
-
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST,
-                 gen_kwargs={
-                     "filepaths": [os.path.join(dl_dir, "test." + str(i)) for i in range(10)],
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION,
-                 gen_kwargs={
-                     "filepaths": [os.path.join(dl_dir, "dev." + str(i)) for i in range(10)],
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 gen_kwargs={
-                     "filepaths": [os.path.join(dl_dir, "train." + str(i)) for i in range(10)],
-                 },
-             ),
-         ]
-
-     def _generate_examples(self, filepaths):
-         """Yields examples."""
-
-         for file_idx, filepath in enumerate(filepaths):
-             with open(filepath, encoding="utf-8") as f:
-                 data = csv.reader(f, delimiter="\t")
-                 for idx, row in enumerate(data):
-                     yield f"{file_idx}_{idx}", {
-                         "relation": row[0],
-                         "question": row[1],
-                         "subject": row[2],
-                         "context": row[3],
-                         "answers": row[4:],
-                     }
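
For reference, the deleted builder treats each tab-separated row as four fixed columns (`relation`, `question`, `subject`, `context`) followed by a variable-length tail of answers, which is empty for negative examples. A small self-contained illustration of that mapping (the sample row is invented):

```python
# Parse one TSV row the way the deleted _generate_examples did.
# The sample line below is invented for illustration only.
import csv
import io

tsv_line = "employer\tWho does XXX work for?\tAlice\tAlice works for Acme.\tAcme\n"
row = next(csv.reader(io.StringIO(tsv_line), delimiter="\t"))

example = {
    "relation": row[0],
    "question": row[1],
    "subject": row[2],
    "context": row[3],
    "answers": row[4:],  # empty list when no answer column is present
}
print(example)  # {'relation': 'employer', ..., 'answers': ['Acme']}
```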