Datasets

Modalities: Tabular, Text
Formats: parquet
Languages: English
Libraries: Datasets, Dask

parquet-converter committed commit 5ffd52a · 1 parent: 8a06652

Update parquet files
README.md DELETED
@@ -1,276 +0,0 @@
- ---
- annotations_creators:
- - machine-generated
- language:
- - en
- language_creators:
- - found
- license:
- - cc-by-sa-3.0
- multilinguality:
- - monolingual
- pretty_name: DiscoFuse
- size_categories:
- - 10M<n<100M
- source_datasets:
- - original
- task_categories:
- - text2text-generation
- task_ids: []
- paperswithcode_id: discofuse
- tags:
- - sentence-fusion
- dataset_info:
- - config_name: discofuse-sport
-   features:
-   - name: connective_string
-     dtype: string
-   - name: discourse_type
-     dtype: string
-   - name: coherent_second_sentence
-     dtype: string
-   - name: has_coref_type_pronoun
-     dtype: float32
-   - name: incoherent_first_sentence
-     dtype: string
-   - name: incoherent_second_sentence
-     dtype: string
-   - name: has_coref_type_nominal
-     dtype: float32
-   - name: coherent_first_sentence
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 14736279993
-     num_examples: 43291020
-   - name: test
-     num_bytes: 151656323
-     num_examples: 445521
-   - name: validation
-     num_bytes: 150207737
-     num_examples: 440902
-   download_size: 4326637746
-   dataset_size: 15038144053
- - config_name: discofuse-wikipedia
-   features:
-   - name: connective_string
-     dtype: string
-   - name: discourse_type
-     dtype: string
-   - name: coherent_second_sentence
-     dtype: string
-   - name: has_coref_type_pronoun
-     dtype: float32
-   - name: incoherent_first_sentence
-     dtype: string
-   - name: incoherent_second_sentence
-     dtype: string
-   - name: has_coref_type_nominal
-     dtype: float32
-   - name: coherent_first_sentence
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 6377924196
-     num_examples: 16310585
-   - name: test
-     num_bytes: 64008158
-     num_examples: 163657
-   - name: validation
-     num_bytes: 65682035
-     num_examples: 168081
-   download_size: 1717422334
-   dataset_size: 6507614389
- ---
-
- # Dataset Card for "discofuse"
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Repository:** https://github.com/google-research-datasets/discofuse
- - **Paper:** [DiscoFuse: A Large-Scale Dataset for Discourse-Based Sentence Fusion](https://arxiv.org/abs/1902.10526)
- - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Size of downloaded dataset files:** 5764.06 MB
- - **Size of the generated dataset:** 20547.64 MB
- - **Total amount of disk used:** 26311.70 MB
-
- ### Dataset Summary
-
- DiscoFuse is a large-scale dataset for discourse-based sentence fusion.
-
- ### Supported Tasks and Leaderboards
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Languages
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Dataset Structure
-
- ### Data Instances
-
- #### discofuse-sport
-
- - **Size of downloaded dataset files:** 4126.20 MB
- - **Size of the generated dataset:** 14341.49 MB
- - **Total amount of disk used:** 18467.70 MB
-
- An example of 'train' looks as follows.
- ```
- {
-     "coherent_first_sentence": "Four LPr and three LC2000r HP Netservers handle customer management and web server functions .",
-     "coherent_second_sentence": "Finally , an HP Netserver LT6000r hosts i2 Demand Planner and i2 Collaboration Planner .",
-     "connective_string": "finally ,",
-     "discourse_type": "PAIR_CONN",
-     "has_coref_type_nominal": 0.0,
-     "has_coref_type_pronoun": 0.0,
-     "incoherent_first_sentence": "Four LPr and three LC2000r HP Netservers handle customer management and web server functions .",
-     "incoherent_second_sentence": "An HP Netserver LT6000r hosts i2 Demand Planner and i2 Collaboration Planner ."
- }
- ```
-
- #### discofuse-wikipedia
-
- - **Size of downloaded dataset files:** 1637.86 MB
- - **Size of the generated dataset:** 6206.14 MB
- - **Total amount of disk used:** 7844.01 MB
-
- An example of 'validation' looks as follows.
- ```
- {
-     "coherent_first_sentence": "Four LPr and three LC2000r HP Netservers handle customer management and web server functions .",
-     "coherent_second_sentence": "Finally , an HP Netserver LT6000r hosts i2 Demand Planner and i2 Collaboration Planner .",
-     "connective_string": "finally ,",
-     "discourse_type": "PAIR_CONN",
-     "has_coref_type_nominal": 0.0,
-     "has_coref_type_pronoun": 0.0,
-     "incoherent_first_sentence": "Four LPr and three LC2000r HP Netservers handle customer management and web server functions .",
-     "incoherent_second_sentence": "An HP Netserver LT6000r hosts i2 Demand Planner and i2 Collaboration Planner ."
- }
- ```
-
- ### Data Fields
-
- The data fields are the same among all splits.
-
- #### discofuse-sport
- - `connective_string`: a `string` feature.
- - `discourse_type`: a `string` feature.
- - `coherent_second_sentence`: a `string` feature.
- - `has_coref_type_pronoun`: a `float32` feature.
- - `incoherent_first_sentence`: a `string` feature.
- - `incoherent_second_sentence`: a `string` feature.
- - `has_coref_type_nominal`: a `float32` feature.
- - `coherent_first_sentence`: a `string` feature.
-
- #### discofuse-wikipedia
- - `connective_string`: a `string` feature.
- - `discourse_type`: a `string` feature.
- - `coherent_second_sentence`: a `string` feature.
- - `has_coref_type_pronoun`: a `float32` feature.
- - `incoherent_first_sentence`: a `string` feature.
- - `incoherent_second_sentence`: a `string` feature.
- - `has_coref_type_nominal`: a `float32` feature.
- - `coherent_first_sentence`: a `string` feature.
-
- ### Data Splits
-
- | name                |    train | validation |   test |
- |---------------------|---------:|-----------:|-------:|
- | discofuse-sport     | 43291020 |     440902 | 445521 |
- | discofuse-wikipedia | 16310585 |     168081 | 163657 |
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the source language producers?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the annotators?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Personal and Sensitive Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Discussion of Biases
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Other Known Limitations
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Licensing Information
-
- The data is licensed under the [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/) license.
-
- ### Citation Information
-
- ```
- @InProceedings{GevaEtAl2019,
-     title = {DiscoFuse: A Large-Scale Dataset for Discourse-Based Sentence Fusion},
-     author = {Geva, Mor and Malmi, Eric and Szpektor, Idan and Berant, Jonathan},
-     booktitle = {Proceedings of the 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics},
-     note = {arXiv preprint arXiv:1902.10526},
-     year = {2019}
- }
- ```
-
- ### Contributions
-
- Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham), [@lewtun](https://github.com/lewtun) for adding this dataset.
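As a quick sanity check on the split table in the deleted card above, the per-split example counts can be tallied in a few lines of Python. The numbers are copied from the table; the `SPLITS` dict is purely illustrative:

```python
# Example counts per split, copied from the DiscoFuse dataset card's table.
SPLITS = {
    "discofuse-sport": {"train": 43_291_020, "validation": 440_902, "test": 445_521},
    "discofuse-wikipedia": {"train": 16_310_585, "validation": 168_081, "test": 163_657},
}

for config, splits in SPLITS.items():
    total = sum(splits.values())
    train_frac = splits["train"] / total
    # Both configs keep roughly 98% of their examples in the train split.
    print(f"{config}: {total:,} examples total, {train_frac:.1%} in train")
```

This confirms the card's claim of a 10M<n<100M size category: the two configs together hold roughly 60.8M examples.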
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"discofuse-sport": {"description": " DISCOFUSE is a large scale dataset for discourse-based sentence fusion.\n", "citation": "@InProceedings{GevaEtAl2019,\n title = {DiscoFuse: A Large-Scale Dataset for Discourse-Based Sentence Fusion},\n author = {Geva, Mor and Malmi, Eric and Szpektor, Idan and Berant, Jonathan},\n booktitle = {Proceedings of the 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics},\n note = {arXiv preprint arXiv:1902.10526},\n year = {2019}\n}\n\n", "homepage": "https://github.com/google-research-datasets/discofuse", "license": "", "features": {"connective_string": {"dtype": "string", "id": null, "_type": "Value"}, "discourse_type": {"dtype": "string", "id": null, "_type": "Value"}, "coherent_second_sentence": {"dtype": "string", "id": null, "_type": "Value"}, "has_coref_type_pronoun": {"dtype": "float32", "id": null, "_type": "Value"}, "incoherent_first_sentence": {"dtype": "string", "id": null, "_type": "Value"}, "incoherent_second_sentence": {"dtype": "string", "id": null, "_type": "Value"}, "has_coref_type_nominal": {"dtype": "float32", "id": null, "_type": "Value"}, "coherent_first_sentence": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "discofuse", "config_name": "discofuse-sport", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 14736279993, "num_examples": 43291020, "dataset_name": "discofuse"}, "test": {"name": "test", "num_bytes": 151656323, "num_examples": 445521, "dataset_name": "discofuse"}, "validation": {"name": "validation", "num_bytes": 150207737, "num_examples": 440902, "dataset_name": "discofuse"}}, "download_checksums": {"https://storage.googleapis.com/gresearch/discofuse/discofuse_v1_sports.zip": {"num_bytes": 4326637746, "checksum": "a390083c7923e11efeeea04a9a79074149e5ef9be614466f50aec28f1a5eec41"}}, "download_size": 4326637746, "post_processing_size": null, "dataset_size": 15038144053, "size_in_bytes": 19364781799}, "discofuse-wikipedia": {"description": " DISCOFUSE is a large scale dataset for discourse-based sentence fusion.\n", "citation": "@InProceedings{GevaEtAl2019,\n title = {DiscoFuse: A Large-Scale Dataset for Discourse-Based Sentence Fusion},\n author = {Geva, Mor and Malmi, Eric and Szpektor, Idan and Berant, Jonathan},\n booktitle = {Proceedings of the 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics},\n note = {arXiv preprint arXiv:1902.10526},\n year = {2019}\n}\n\n", "homepage": "https://github.com/google-research-datasets/discofuse", "license": "", "features": {"connective_string": {"dtype": "string", "id": null, "_type": "Value"}, "discourse_type": {"dtype": "string", "id": null, "_type": "Value"}, "coherent_second_sentence": {"dtype": "string", "id": null, "_type": "Value"}, "has_coref_type_pronoun": {"dtype": "float32", "id": null, "_type": "Value"}, "incoherent_first_sentence": {"dtype": "string", "id": null, "_type": "Value"}, "incoherent_second_sentence": {"dtype": "string", "id": null, "_type": "Value"}, "has_coref_type_nominal": {"dtype": "float32", "id": null, "_type": "Value"}, "coherent_first_sentence": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "discofuse", "config_name": "discofuse-wikipedia", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 6377924196, "num_examples": 16310585, "dataset_name": "discofuse"}, "test": {"name": "test", "num_bytes": 64008158, "num_examples": 163657, "dataset_name": "discofuse"}, "validation": {"name": "validation", "num_bytes": 65682035, "num_examples": 168081, "dataset_name": "discofuse"}}, "download_checksums": {"https://storage.googleapis.com/gresearch/discofuse/discofuse_v1_wikipedia.zip": {"num_bytes": 1717422334, "checksum": "e8a5ec52cdd9820ce9b410b47c3a57a49a300470c976202cd7caab613658ebfe"}}, "download_size": 1717422334, "post_processing_size": null, "dataset_size": 6507614389, "size_in_bytes": 8225036723}}
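The size figures in the deleted dataset_infos.json are internally consistent: for each config, `dataset_size` is the sum of the split `num_bytes`, and `size_in_bytes` is `download_size + dataset_size`. A short sketch with the figures copied from the JSON above (the `infos` dict is just for illustration):

```python
# Byte counts copied from dataset_infos.json for both DiscoFuse configs.
infos = {
    "discofuse-sport": {
        "splits": {"train": 14_736_279_993, "test": 151_656_323, "validation": 150_207_737},
        "download_size": 4_326_637_746,
        "dataset_size": 15_038_144_053,
        "size_in_bytes": 19_364_781_799,
    },
    "discofuse-wikipedia": {
        "splits": {"train": 6_377_924_196, "test": 64_008_158, "validation": 65_682_035},
        "download_size": 1_717_422_334,
        "dataset_size": 6_507_614_389,
        "size_in_bytes": 8_225_036_723,
    },
}

for name, info in infos.items():
    # dataset_size is the total of the generated splits.
    assert sum(info["splits"].values()) == info["dataset_size"]
    # size_in_bytes is the downloaded zip plus the generated data.
    assert info["download_size"] + info["dataset_size"] == info["size_in_bytes"]
```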
 
 
discofuse-sport/partial-test/0000.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:970dad792cfcbb43261a0b96131451822d6cc62337539186e97e2369372d6fb3
+ size 94994467

discofuse-sport/partial-train/0000.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:947187c3e923e5e9c4b65055d67b8dc1ed2c6f8efd45e7dbc2e4f3d70c25594f
+ size 313837809

discofuse-sport/partial-train/0001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:af080486a57fe4da6cbe849c802a8790e53ad878cf3bfe5981613b2fc95e8ee3
+ size 312850715

discofuse-sport/partial-train/0002.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e4b2001b5be033e63114b2da56cc9737250f124ad14fa8f5fe28ef2ae82701bc
+ size 313261296

discofuse-sport/partial-train/0003.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:79637acee65b2ed6d3e3018390125ff33ffb8fc1c436ad725b56b5efa23f89d3
+ size 313259289

discofuse-sport/partial-train/0004.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d4da3c9a328b55ef72e7aeaabf2471f35182ff3e9607e85fac711beb5e1d4db0
+ size 313907933

discofuse-sport/partial-train/0005.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e73271b4997b9f5e29416be725b236b61ae9430ce9d33107f13b3e24d2153b7b
+ size 313005576

discofuse-sport/partial-train/0006.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:919a36443c25052f4bb8d61407eba44039dd72a3a59031751876affcf166c9c3
+ size 313682549

discofuse-sport/partial-train/0007.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e536c3be3aa7832d0a0c9576e68e1268c97bc6f2fb6f7a55c92cb1c296a52d52
+ size 313729057

discofuse-sport/partial-train/0008.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7613d11a54ddc908da174fdee7daa4ce2211c04124e2e7f0436e0d4117565d89
+ size 313270000

discofuse-sport/partial-train/0009.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:05c3a38687513ed37ac443178bf278b631264ab96d574c1e673b90330da5d413
+ size 312643909

discofuse-sport/partial-validation/0000.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e833eaac217b5533637c8eac283d0ef8b7eba85700269cf28658efae5c4e59c7
+ size 94078054
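The added files above are not the parquet data itself but Git LFS pointer files: three-line text stubs recording the spec version, a sha256 object id, and the blob size in bytes. A minimal sketch of parsing one such pointer (the `parse_lfs_pointer` helper is hypothetical, not part of any library):

```python
# Parse a Git LFS pointer file into its key/value fields.
# Each line is "<key> <value>"; the three standard keys are version, oid, size.
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields


# The pointer for discofuse-sport/partial-test/0000.parquet, copied from above.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:970dad792cfcbb43261a0b96131451822d6cc62337539186e97e2369372d6fb3
size 94994467
"""

fields = parse_lfs_pointer(pointer)
size_mb = int(fields["size"]) / 1_000_000  # about 95 MB for this shard
```

Git resolves the `oid` to the real ~95 MB parquet blob at checkout time, which keeps the repository history small.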
discofuse.py DELETED
@@ -1,194 +0,0 @@
- """TODO(discofuse): Add a description here."""
-
-
- import csv
- import os
-
- import datasets
-
-
- _URL_ = "https://storage.googleapis.com/gresearch/discofuse/"
- _CITATION = """\
- @InProceedings{GevaEtAl2019,
-     title = {DiscoFuse: A Large-Scale Dataset for Discourse-Based Sentence Fusion},
-     author = {Geva, Mor and Malmi, Eric and Szpektor, Idan and Berant, Jonathan},
-     booktitle = {Proceedings of the 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics},
-     note = {arXiv preprint arXiv:1902.10526},
-     year = {2019}
- }
-
- """
-
- # TODO(discofuse):
- _DESCRIPTION = """\
- DISCOFUSE is a large scale dataset for discourse-based sentence fusion.
- """
-
-
- class DiscofuseConfig(datasets.BuilderConfig):
-
-     """BuilderConfig for Discofuse"""
-
-     def __init__(self, data_url, balanced=False, **kwargs):
-         """
-
-         Args:
-             balanced: to specify if we want to load the balanced file or the full file
-             **kwargs: keyword arguments forwarded to super.
-         """
-         super(DiscofuseConfig, self).__init__(version=datasets.Version("1.0.0", ""), **kwargs)
-         self.balanced = balanced
-         self.data_url = data_url
-
-
- class Discofuse(datasets.GeneratorBasedBuilder):
-     """TODO(discofuse): Short description of my dataset."""
-
-     # TODO(discofuse): Set up version.
-     VERSION = datasets.Version("1.0.0")
-     BUILDER_CONFIGS = [
-         DiscofuseConfig(
-             name="discofuse-sport", description="sentence fusion", data_url=_URL_ + "discofuse_v1_sports.zip"
-         ),
-         DiscofuseConfig(
-             name="discofuse-wikipedia", description="sentence fusion", data_url=_URL_ + "discofuse_v1_wikipedia.zip"
-         ),
-     ]
-
-     def _info(self):
-         # TODO(discofuse): Specifies the datasets.DatasetInfo object
-         return datasets.DatasetInfo(
-             # This is the description that will appear on the datasets page.
-             description=_DESCRIPTION,
-             # datasets.features.FeatureConnectors
-             features=datasets.Features(
-                 {
-                     "connective_string": datasets.Value("string"),
-                     "discourse_type": datasets.Value("string"),
-                     "coherent_second_sentence": datasets.Value("string"),
-                     "has_coref_type_pronoun": datasets.Value("float32"),
-                     "incoherent_first_sentence": datasets.Value("string"),
-                     "incoherent_second_sentence": datasets.Value("string"),
-                     "has_coref_type_nominal": datasets.Value("float32"),
-                     "coherent_first_sentence": datasets.Value("string"),
-                     # These are the features of your dataset like images, labels ...
-                 }
-             ),
-             # If there's a common (input, target) tuple from the features,
-             # specify them here. They'll be used if as_supervised=True in
-             # builder.as_dataset.
-             supervised_keys=None,
-             # Homepage of the dataset for documentation
-             homepage="https://github.com/google-research-datasets/discofuse",
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         # TODO(discofuse): Downloads the data and defines the splits
-         # dl_manager is a datasets.download.DownloadManager that can be used to
-         # download and extract URLs
-         if self.config.name == "discofuse-sport":
-             dl_dir = dl_manager.download_and_extract(self.config.data_url)
-             data_dir = os.path.join(dl_dir, "discofuse_v1/sports")
-             if self.config.balanced:
-                 return [
-                     datasets.SplitGenerator(
-                         name=datasets.Split.TRAIN,
-                         # These kwargs will be passed to _generate_examples
-                         gen_kwargs={"filepath": os.path.join(data_dir, "train_balanced.tsv")},
-                     ),
-                     datasets.SplitGenerator(
-                         name=datasets.Split.TEST,
-                         # These kwargs will be passed to _generate_examples
-                         gen_kwargs={"filepath": os.path.join(data_dir, "test_balanced.tsv")},
-                     ),
-                     datasets.SplitGenerator(
-                         name=datasets.Split.VALIDATION,
-                         # These kwargs will be passed to _generate_examples
-                         gen_kwargs={"filepath": os.path.join(data_dir, "dev_balanced.tsv")},
-                     ),
-                 ]
-             else:
-                 return [
-                     datasets.SplitGenerator(
-                         name=datasets.Split.TRAIN,
-                         # These kwargs will be passed to _generate_examples
-                         gen_kwargs={"filepath": os.path.join(data_dir, "train.tsv")},
-                     ),
-                     datasets.SplitGenerator(
-                         name=datasets.Split.TEST,
-                         # These kwargs will be passed to _generate_examples
-                         gen_kwargs={"filepath": os.path.join(data_dir, "test.tsv")},
-                     ),
-                     datasets.SplitGenerator(
-                         name=datasets.Split.VALIDATION,
-                         # These kwargs will be passed to _generate_examples
-                         gen_kwargs={"filepath": os.path.join(data_dir, "dev.tsv")},
-                     ),
-                 ]
-         else:
-             if self.config.name == "discofuse-wikipedia":
-                 dl_dir = dl_manager.download_and_extract(self.config.data_url)
-                 data_dir = os.path.join(dl_dir, "discofuse_v1/wikipedia")
-                 if self.config.balanced:
-                     return [
-                         datasets.SplitGenerator(
-                             name=datasets.Split.TRAIN,
-                             # These kwargs will be passed to _generate_examples
-                             gen_kwargs={"filepath": os.path.join(data_dir, "train_balanced.tsv")},
-                         ),
-                         datasets.SplitGenerator(
-                             name=datasets.Split.TEST,
-                             # These kwargs will be passed to _generate_examples
-                             gen_kwargs={"filepath": os.path.join(data_dir, "test_balanced.tsv")},
-                         ),
-                         datasets.SplitGenerator(
-                             name=datasets.Split.VALIDATION,
-                             # These kwargs will be passed to _generate_examples
-                             gen_kwargs={"filepath": os.path.join(data_dir, "dev_balanced.tsv")},
-                         ),
-                     ]
-                 else:
-                     return [
-                         datasets.SplitGenerator(
-                             name=datasets.Split.TRAIN,
-                             # These kwargs will be passed to _generate_examples
-                             gen_kwargs={"filepath": os.path.join(data_dir, "train.tsv")},
-                         ),
-                         datasets.SplitGenerator(
-                             name=datasets.Split.TEST,
-                             # These kwargs will be passed to _generate_examples
-                             gen_kwargs={"filepath": os.path.join(data_dir, "test.tsv")},
-                         ),
-                         datasets.SplitGenerator(
-                             name=datasets.Split.VALIDATION,
-                             # These kwargs will be passed to _generate_examples
-                             gen_kwargs={"filepath": os.path.join(data_dir, "dev.tsv")},
-                         ),
-                     ]
-
-     def _generate_examples(self, filepath):
-         """Yields examples."""
-         # TODO(discofuse): Yields (key, example) tuples from the dataset
-         with open(filepath, encoding="utf-8") as f:
-             data = csv.DictReader(f, delimiter="\t")
-             for id_, row in enumerate(data):
-                 co_first_sent = row["coherent_first_sentence"]
-                 co_second_sent = row["coherent_second_sentence"]
-                 connect_str = row["connective_string"]
-                 discourse_type = row["discourse_type"]
-                 has_coref_pronoun = row["has_coref_type_pronoun"]
-                 has_coref_nominal = row["has_coref_type_nominal"]
-                 inco_first_sent = row["incoherent_first_sentence"]
-                 inco_second_sent = row["incoherent_second_sentence"]
-                 yield id_, {
-                     "connective_string": connect_str,
-                     "discourse_type": discourse_type,
-                     "coherent_second_sentence": co_second_sent,
-                     "has_coref_type_pronoun": has_coref_pronoun,
-                     "incoherent_first_sentence": inco_first_sent,
-                     "incoherent_second_sentence": inco_second_sent,
-                     "has_coref_type_nominal": has_coref_nominal,
-                     "coherent_first_sentence": co_first_sent,
-                 }
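The core of the deleted script's `_generate_examples` is a tab-delimited `csv.DictReader` pass over the source TSVs, yielding one `(id, dict)` pair per row. A self-contained sketch of that step on an in-memory sample (the one-row `TSV` string and its column ordering are made up for illustration; the real files ship inside the DiscoFuse zips):

```python
import csv
import io

# Hypothetical one-row sample in the DiscoFuse TSV layout: a header naming the
# eight fields the script reads, then one tab-separated data row.
TSV = (
    "coherent_first_sentence\tcoherent_second_sentence\tconnective_string\t"
    "discourse_type\thas_coref_type_pronoun\thas_coref_type_nominal\t"
    "incoherent_first_sentence\tincoherent_second_sentence\n"
    "A ships today .\tFinally , B ships tomorrow .\tfinally ,\tPAIR_CONN\t0.0\t0.0\t"
    "A ships today .\tB ships tomorrow .\n"
)


def generate_examples(fileobj):
    # Mirrors the deleted _generate_examples: enumerate DictReader rows and
    # yield (row index, field dict) pairs.
    reader = csv.DictReader(fileobj, delimiter="\t")
    for id_, row in enumerate(reader):
        yield id_, dict(row)


examples = list(generate_examples(io.StringIO(TSV)))
```

Note that the script yields `has_coref_type_*` values as the raw TSV strings; the `float32` cast declared in `_info` happens later, when `datasets` encodes the examples against the declared features.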