parquet-converter committed on
Commit
51e0720
1 Parent(s): a5e0535

Update parquet files

Browse files
.gitattributes DELETED
@@ -1,27 +0,0 @@
1
- *.7z filter=lfs diff=lfs merge=lfs -text
2
- *.arrow filter=lfs diff=lfs merge=lfs -text
3
- *.bin filter=lfs diff=lfs merge=lfs -text
4
- *.bin.* filter=lfs diff=lfs merge=lfs -text
5
- *.bz2 filter=lfs diff=lfs merge=lfs -text
6
- *.ftz filter=lfs diff=lfs merge=lfs -text
7
- *.gz filter=lfs diff=lfs merge=lfs -text
8
- *.h5 filter=lfs diff=lfs merge=lfs -text
9
- *.joblib filter=lfs diff=lfs merge=lfs -text
10
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
11
- *.model filter=lfs diff=lfs merge=lfs -text
12
- *.msgpack filter=lfs diff=lfs merge=lfs -text
13
- *.onnx filter=lfs diff=lfs merge=lfs -text
14
- *.ot filter=lfs diff=lfs merge=lfs -text
15
- *.parquet filter=lfs diff=lfs merge=lfs -text
16
- *.pb filter=lfs diff=lfs merge=lfs -text
17
- *.pt filter=lfs diff=lfs merge=lfs -text
18
- *.pth filter=lfs diff=lfs merge=lfs -text
19
- *.rar filter=lfs diff=lfs merge=lfs -text
20
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
21
- *.tar.* filter=lfs diff=lfs merge=lfs -text
22
- *.tflite filter=lfs diff=lfs merge=lfs -text
23
- *.tgz filter=lfs diff=lfs merge=lfs -text
24
- *.xz filter=lfs diff=lfs merge=lfs -text
25
- *.zip filter=lfs diff=lfs merge=lfs -text
26
- *.zstandard filter=lfs diff=lfs merge=lfs -text
27
- *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md DELETED
@@ -1,308 +0,0 @@
1
- ---
2
- annotations_creators:
3
- - found
4
- language_creators:
5
- - crowdsourced
6
- language:
7
- - de
8
- - en
9
- license:
10
- - cc-by-sa-4.0
11
- multilinguality:
12
- - monolingual
13
- size_categories:
14
- - 1K<n<10K
15
- source_datasets:
16
- - extended|other-web-nlg
17
- task_categories:
18
- - tabular-to-text
19
- task_ids:
20
- - rdf-to-text
21
- paperswithcode_id: null
22
- pretty_name: Enriched WebNLG
23
- configs:
24
- - de
25
- - en
26
- dataset_info:
27
- - config_name: en
28
- features:
29
- - name: category
30
- dtype: string
31
- - name: size
32
- dtype: int32
33
- - name: eid
34
- dtype: string
35
- - name: original_triple_sets
36
- sequence:
37
- - name: otriple_set
38
- sequence: string
39
- - name: modified_triple_sets
40
- sequence:
41
- - name: mtriple_set
42
- sequence: string
43
- - name: shape
44
- dtype: string
45
- - name: shape_type
46
- dtype: string
47
- - name: lex
48
- sequence:
49
- - name: comment
50
- dtype: string
51
- - name: lid
52
- dtype: string
53
- - name: text
54
- dtype: string
55
- - name: template
56
- dtype: string
57
- - name: sorted_triple_sets
58
- sequence: string
59
- - name: lexicalization
60
- dtype: string
61
- splits:
62
- - name: train
63
- num_bytes: 14665155
64
- num_examples: 6940
65
- - name: dev
66
- num_bytes: 1843787
67
- num_examples: 872
68
- - name: test
69
- num_bytes: 3931381
70
- num_examples: 1862
71
- download_size: 44284508
72
- dataset_size: 20440323
73
- - config_name: de
74
- features:
75
- - name: category
76
- dtype: string
77
- - name: size
78
- dtype: int32
79
- - name: eid
80
- dtype: string
81
- - name: original_triple_sets
82
- sequence:
83
- - name: otriple_set
84
- sequence: string
85
- - name: modified_triple_sets
86
- sequence:
87
- - name: mtriple_set
88
- sequence: string
89
- - name: shape
90
- dtype: string
91
- - name: shape_type
92
- dtype: string
93
- - name: lex
94
- sequence:
95
- - name: comment
96
- dtype: string
97
- - name: lid
98
- dtype: string
99
- - name: text
100
- dtype: string
101
- - name: template
102
- dtype: string
103
- - name: sorted_triple_sets
104
- sequence: string
105
- splits:
106
- - name: train
107
- num_bytes: 9748193
108
- num_examples: 6940
109
- - name: dev
110
- num_bytes: 1238609
111
- num_examples: 872
112
- download_size: 44284508
113
- dataset_size: 10986802
114
- ---
115
-
116
- # Dataset Card for Enriched WebNLG
117
-
118
- ## Table of Contents
119
- - [Dataset Description](#dataset-description)
120
- - [Dataset Summary](#dataset-summary)
121
- - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
122
- - [Languages](#languages)
123
- - [Dataset Structure](#dataset-structure)
124
- - [Data Instances](#data-instances)
125
- - [Data Fields](#data-fields)
126
- - [Data Splits](#data-splits)
127
- - [Dataset Creation](#dataset-creation)
128
- - [Curation Rationale](#curation-rationale)
129
- - [Source Data](#source-data)
130
- - [Annotations](#annotations)
131
- - [Personal and Sensitive Information](#personal-and-sensitive-information)
132
- - [Considerations for Using the Data](#considerations-for-using-the-data)
133
- - [Social Impact of Dataset](#social-impact-of-dataset)
134
- - [Discussion of Biases](#discussion-of-biases)
135
- - [Other Known Limitations](#other-known-limitations)
136
- - [Additional Information](#additional-information)
137
- - [Dataset Curators](#dataset-curators)
138
- - [Licensing Information](#licensing-information)
139
- - [Citation Information](#citation-information)
140
- - [Contributions](#contributions)
141
-
142
- ## Dataset Description
143
-
144
- - **Homepage:** [WebNLG challenge website](https://webnlg-challenge.loria.fr/)
145
- - **Repository:** [Enriched WebNLG Github repository](https://github.com/ThiagoCF05/webnlg)
146
- - **Paper:** [Enriching the WebNLG corpus](https://www.aclweb.org/anthology/W18-6521/)
147
-
148
- ### Dataset Summary
149
-
150
- The WebNLG challenge consists in mapping data to text. The training data consists of Data/Text pairs where the data is a
151
- set of triples extracted from DBpedia and the text is a verbalisation of these triples. For instance, given the 3
152
- DBpedia triples shown in (a), the aim is to generate a text such as (b). It is a valuable resource and benchmark for the Natural Language Generation (NLG) community. However, like other NLG benchmarks, it only consists of a collection of parallel raw representations and their corresponding textual realizations. This work aimed to provide intermediate representations of the data for the development and evaluation of popular tasks in the NLG pipeline architecture, such as Discourse Ordering, Lexicalization, Aggregation and Referring Expression Generation.
153
-
154
- ### Supported Tasks and Leaderboards
155
-
156
- The dataset supports an `other-rdf-to-text` task, which requires a model to take a set of RDF (Resource Description
157
- Framework) triples from a database (DBpedia) of the form (subject, property, object) as input and write out a natural
158
- language sentence expressing the information contained in the triples.
159
-
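To make the task setup concrete, the snippet below loads the English configuration and prints one input triple set together with its reference verbalisations. This is a minimal sketch: it assumes the dataset is reachable through `datasets.load_dataset` under the `enriched_web_nlg` name.

```python
from datasets import load_dataset

# Load the English configuration of the Enriched WebNLG data (train split only).
dataset = load_dataset("enriched_web_nlg", "en", split="train")

example = dataset[0]
# Input: a set of modified RDF triples, each in "subject | property | object" form.
print(example["modified_triple_sets"]["mtriple_set"][0])
# Targets: the crowd-sourced verbalisations of that triple set.
print(example["lex"]["text"])
```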
160
- ### Languages
161
-
162
- The dataset is presented in two versions: English (config `en`) and German (config `de`).
163
-
164
- ## Dataset Structure
165
-
166
- ### Data Instances
167
-
168
- A typical example contains the original RDF triples in the set, a modified version which was presented to crowd workers, and
169
- a set of possible verbalizations for this set of triples:
170
-
171
- ```
172
- { 'category': 'Politician',
173
- 'eid': 'Id10',
174
- 'lex': {'comment': ['good', 'good', 'good'],
175
- 'lid': ['Id1', 'Id2', 'Id3'],
176
- 'text': ['World War II had Chiang Kai-shek as a commander and United States Army soldier Abner W. Sibal.',
177
- 'Abner W. Sibal served in the United States Army during the Second World War and during that war Chiang Kai-shek was one of the commanders.',
178
- 'Abner W. Sibal, served in the United States Army and fought in World War II, one of the commanders of which, was Chiang Kai-shek.']},
179
- 'modified_triple_sets': {'mtriple_set': [['Abner_W._Sibal | battle | World_War_II',
180
- 'World_War_II | commander | Chiang_Kai-shek',
181
- 'Abner_W._Sibal | militaryBranch | United_States_Army']]},
182
- 'original_triple_sets': {'otriple_set': [['Abner_W._Sibal | battles | World_War_II', 'World_War_II | commander | Chiang_Kai-shek', 'Abner_W._Sibal | branch | United_States_Army'],
183
- ['Abner_W._Sibal | militaryBranch | United_States_Army',
184
- 'Abner_W._Sibal | battles | World_War_II',
185
- 'World_War_II | commander | Chiang_Kai-shek']]},
186
- 'shape': '(X (X) (X (X)))',
187
- 'shape_type': 'mixed',
188
- 'size': 3}
189
- ```
190
-
191
- ### Data Fields
192
-
193
- The following fields can be found in the instances:
194
-
195
- - `category`: the category of the DBpedia entities present in the RDF triples.
196
- - `eid`: an example ID, only unique per split per category.
197
- - `size`: number of RDF triples in the set.
198
- - `shape`: (for v3 only) Each set of RDF-triples is a tree, which is characterised by its shape and shape type. `shape`
199
- is a string representation of the tree with nested parentheses where X is a node (
200
- see [Newick tree format](https://en.wikipedia.org/wiki/Newick_format))
201
- - `shape_type`: (for v3 only) is a type of the tree shape, which can be: `chain` (the object of one triple is the
202
- subject of the other); `sibling` (triples with a shared subject); `mixed` (both chain and sibling types present).
203
- - `2017_test_category`: (for `webnlg_challenge_2017`) tells whether the set of RDF triples was present in the training
204
- set or not.
205
- - `lex`: the lexicalizations, with:
206
- - `text`: the text to be predicted.
207
- - `lid`: a lexicalization ID, unique per example.
208
- - `comment`: the lexicalizations were rated by crowd workers as either `good` or `bad`.
209
-
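Because `lex` is a sequence feature, its sub-fields are parallel lists. Below is a small sketch of turning one entry into flat (triples, text) training pairs using the fields above; the ` && ` separator is an arbitrary choice for illustration, not part of the corpus.

```python
def to_pairs(example):
    """Flatten one Enriched WebNLG entry into (linearised triples, reference text) pairs."""
    # mtriple_set holds one or more lists of "subject | property | object" strings.
    triples = " && ".join(example["modified_triple_sets"]["mtriple_set"][0])
    return [
        (triples, text)
        for text, comment in zip(example["lex"]["text"], example["lex"]["comment"])
        if comment == "good"  # keep only lexicalizations rated "good" by crowd workers
    ]
```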
210
- ### Data Splits
211
-
212
- The `en` version has `train`, `test` and `dev` splits; the `de` version, only `train` and `dev`.
213
-
214
- ## Dataset Creation
215
-
216
- ### Curation Rationale
217
-
218
- Natural Language Generation (NLG) is the process of automatically converting non-linguistic data into a linguistic output format (Reiter and Dale, 2000; Gatt and Krahmer, 2018). Recently, the field has seen an increase in the number of available focused data resources, such as the E2E (Novikova et al., 2017), ROTOWIRE (Wiseman et al., 2017) and WebNLG (Gardent et al., 2017a,b) corpora. Although these recent releases are highly valuable resources for the NLG community in general, all of them were designed to work with end-to-end NLG models. Hence, they consist of a collection of parallel raw representations and their corresponding textual realizations, and no intermediate representations are available that researchers can straightforwardly use to develop or evaluate popular tasks in NLG pipelines (Reiter and Dale, 2000), such as Discourse Ordering, Lexicalization, Aggregation and Referring Expression Generation, among others. Moreover, these new corpora, like many other resources in Computational Linguistics more generally, are only available in English, limiting the development of NLG applications for other languages.
219
-
220
-
221
- ### Source Data
222
-
223
- #### Initial Data Collection and Normalization
224
-
225
- [More Information Needed]
226
-
227
- #### Who are the source language producers?
228
-
229
- [More Information Needed]
230
-
231
- ### Annotations
232
-
233
- #### Annotation process
234
-
235
- [More Information Needed]
236
-
237
- #### Who are the annotators?
238
-
239
- [More Information Needed]
240
-
241
- ### Personal and Sensitive Information
242
-
243
- [More Information Needed]
244
-
245
- ## Considerations for Using the Data
246
-
247
- ### Social Impact of Dataset
248
-
249
- [More Information Needed]
250
-
251
- ### Discussion of Biases
252
-
253
- [More Information Needed]
254
-
255
- ### Other Known Limitations
256
-
257
- [More Information Needed]
258
-
259
- ## Additional Information
260
-
261
- ### Dataset Curators
262
-
263
- [More Information Needed]
264
-
265
- ### Licensing Information
266
-
267
- The dataset uses the `cc-by-nc-sa-4.0` license. The source DBpedia project uses the `cc-by-sa-3.0` and `gfdl-1.1`
268
- licenses.
269
-
270
- ### Citation Information
271
-
272
- - If you use the Enriched WebNLG corpus, cite:
273
-
274
- ```
275
- @InProceedings{ferreiraetal2018,
276
- author = "Castro Ferreira, Thiago
277
- and Moussallem, Diego
278
- and Wubben, Sander
279
- and Krahmer, Emiel",
280
- title = "Enriching the WebNLG corpus",
281
- booktitle = "Proceedings of the 11th International Conference on Natural Language Generation",
282
- year = "2018",
283
- series = {INLG'18},
284
- publisher = "Association for Computational Linguistics",
285
- address = "Tilburg, The Netherlands",
286
- }
287
-
288
- @inproceedings{web_nlg,
289
- author = {Claire Gardent and
290
- Anastasia Shimorina and
291
- Shashi Narayan and
292
- Laura Perez{-}Beltrachini},
293
- editor = {Regina Barzilay and
294
- Min{-}Yen Kan},
295
- title = {Creating Training Corpora for {NLG} Micro-Planners},
296
- booktitle = {Proceedings of the 55th Annual Meeting of the Association for Computational
297
- Linguistics, {ACL} 2017, Vancouver, Canada, July 30 - August 4, Volume
298
- 1: Long Papers},
299
- pages = {179--188},
300
- publisher = {Association for Computational Linguistics},
301
- year = {2017},
302
- url = {https://doi.org/10.18653/v1/P17-1017},
303
- doi = {10.18653/v1/P17-1017}
304
- }
305
- ```
306
- ### Contributions
307
-
308
- Thanks to [@TevenLeScao](https://github.com/TevenLeScao) for adding this dataset.
dataset_infos.json DELETED
@@ -1 +0,0 @@
1
- {"en": {"description": "WebNLG is a valuable resource and benchmark for the Natural Language Generation (NLG) community. However, as other NLG benchmarks, it only consists of a collection of parallel raw representations and their corresponding textual realizations. This work aimed to provide intermediate representations of the data for the development and evaluation of popular tasks in the NLG pipeline architecture (Reiter and Dale, 2000), such as Discourse Ordering, Lexicalization, Aggregation and Referring Expression Generation.\n", "citation": "@InProceedings{ferreiraetal2018,\n author = \t\"Castro Ferreira, Thiago and Moussallem, Diego and Wubben, Sander and Krahmer, Emiel\",\n title = \t\"Enriching the WebNLG corpus\",\n booktitle = \t\"Proceedings of the 11th International Conference on Natural Language Generation\",\n year = \t\"2018\",\n series = {INLG'18},\n publisher = \t\"Association for Computational Linguistics\",\n address = \t\"Tilburg, The Netherlands\",\n}\n", "homepage": "https://github.com/ThiagoCF05/webnlg", "license": "CC Attribution-Noncommercial-Share Alike 4.0 International", "features": {"category": {"dtype": "string", "id": null, "_type": "Value"}, "size": {"dtype": "int32", "id": null, "_type": "Value"}, "eid": {"dtype": "string", "id": null, "_type": "Value"}, "original_triple_sets": {"feature": {"otriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "modified_triple_sets": {"feature": {"mtriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "shape": {"dtype": "string", "id": null, "_type": "Value"}, "shape_type": {"dtype": "string", "id": null, "_type": "Value"}, "lex": {"feature": {"comment": {"dtype": "string", "id": null, "_type": "Value"}, "lid": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "template": {"dtype": "string", "id": null, "_type": "Value"}, "sorted_triple_sets": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "lexicalization": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "enriched_web_nlg", "config_name": "en", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 14665155, "num_examples": 6940, "dataset_name": "enriched_web_nlg"}, "dev": {"name": "dev", "num_bytes": 1843787, "num_examples": 872, "dataset_name": "enriched_web_nlg"}, "test": {"name": "test", "num_bytes": 3931381, "num_examples": 1862, "dataset_name": "enriched_web_nlg"}}, "download_checksums": {"https://github.com/ThiagoCF05/webnlg/archive/12ca34880b225ebd1eb9db07c64e8dd76f7e5784.zip": {"num_bytes": 44284508, "checksum": "624f8c4bc1ef9f59851d92dec1456607ad2b2dc9242e107a4cb62dad774f68cb"}}, "download_size": 44284508, "post_processing_size": null, "dataset_size": 20440323, "size_in_bytes": 64724831}, "de": {"description": "WebNLG is a valuable resource and benchmark for the Natural Language Generation (NLG) community. However, as other NLG benchmarks, it only consists of a collection of parallel raw representations and their corresponding textual realizations. This work aimed to provide intermediate representations of the data for the development and evaluation of popular tasks in the NLG pipeline architecture (Reiter and Dale, 2000), such as Discourse Ordering, Lexicalization, Aggregation and Referring Expression Generation.\n", "citation": "@InProceedings{ferreiraetal2018,\n author = \t\"Castro Ferreira, Thiago and Moussallem, Diego and Wubben, Sander and Krahmer, Emiel\",\n title = \t\"Enriching the WebNLG corpus\",\n booktitle = \t\"Proceedings of the 11th International Conference on Natural Language Generation\",\n year = \t\"2018\",\n series = {INLG'18},\n publisher = \t\"Association for Computational Linguistics\",\n address = \t\"Tilburg, The Netherlands\",\n}\n", "homepage": "https://github.com/ThiagoCF05/webnlg", "license": "CC Attribution-Noncommercial-Share Alike 4.0 International", "features": {"category": {"dtype": "string", "id": null, "_type": "Value"}, "size": {"dtype": "int32", "id": null, "_type": "Value"}, "eid": {"dtype": "string", "id": null, "_type": "Value"}, "original_triple_sets": {"feature": {"otriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "modified_triple_sets": {"feature": {"mtriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "shape": {"dtype": "string", "id": null, "_type": "Value"}, "shape_type": {"dtype": "string", "id": null, "_type": "Value"}, "lex": {"feature": {"comment": {"dtype": "string", "id": null, "_type": "Value"}, "lid": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "template": {"dtype": "string", "id": null, "_type": "Value"}, "sorted_triple_sets": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "enriched_web_nlg", "config_name": "de", "version": "0.0.0", "splits": {"train": {"name": "train", "num_bytes": 9748193, "num_examples": 6940, "dataset_name": "enriched_web_nlg"}, "dev": {"name": "dev", "num_bytes": 1238609, "num_examples": 872, "dataset_name": "enriched_web_nlg"}}, "download_checksums": {"https://github.com/ThiagoCF05/webnlg/archive/12ca34880b225ebd1eb9db07c64e8dd76f7e5784.zip": {"num_bytes": 44284508, "checksum": "624f8c4bc1ef9f59851d92dec1456607ad2b2dc9242e107a4cb62dad774f68cb"}}, "download_size": 44284508, "post_processing_size": null, "dataset_size": 10986802, "size_in_bytes": 55271310}}
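The split metadata in this file mirrors the YAML header of the README above; below is a short sketch for cross-checking it from a local checkout, assuming the file is read before this deletion.

```python
import json

with open("dataset_infos.json", encoding="utf-8") as f:
    infos = json.load(f)

# Print the example counts per configuration and split (en: train/dev/test, de: train/dev).
for config_name, info in infos.items():
    for split_name, split_meta in info["splits"].items():
        print(config_name, split_name, split_meta["num_examples"])
```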
 
 
de/enriched_web_nlg-dev.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e4c5cf9523e42be8cda8a068c5a0cae175e150fdd1d757abfeae27b9e72b4ecd
3
+ size 302507
de/enriched_web_nlg-train.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3d520c292b5283bfc59b29e4666e617e647577320917dcd22946599ed053b4c8
3
+ size 1890904
en/enriched_web_nlg-dev.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5082a6eb7ac925e1d95374f0c626335c98a1633efa4355fc9e7e4d187bbd309f
3
+ size 381054
en/enriched_web_nlg-test.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c681f09fd94d25ca4519711da77c2f86933479983e0e795cfd5f8c66a15ce98d
3
+ size 865880
en/enriched_web_nlg-train.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e8d2f490ea7f9276e7e0f641110ec6fffd8fe78f769842bbde40a08ad08b5242
3
+ size 2466306
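The three added lines per file are Git LFS pointers rather than the parquet payload itself. A small sketch of reading one such pointer follows; it only applies to a checkout where the LFS content has not been smudged into the real parquet bytes.

```python
def parse_lfs_pointer(path):
    """Read a Git LFS pointer file (version / oid / size lines) into a dict."""
    fields = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            key, _, value = line.strip().partition(" ")
            fields[key] = value
    return fields

# e.g. parse_lfs_pointer("en/enriched_web_nlg-train.parquet")
# -> {"version": "https://git-lfs.github.com/spec/v1", "oid": "sha256:...", "size": "2466306"}
```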
enriched_web_nlg.py DELETED
@@ -1,236 +0,0 @@
1
- # coding=utf-8
2
- # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
3
- #
4
- # Licensed under the Apache License, Version 2.0 (the "License");
5
- # you may not use this file except in compliance with the License.
6
- # You may obtain a copy of the License at
7
- #
8
- # http://www.apache.org/licenses/LICENSE-2.0
9
- #
10
- # Unless required by applicable law or agreed to in writing, software
11
- # distributed under the License is distributed on an "AS IS" BASIS,
12
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- # See the License for the specific language governing permissions and
14
- # limitations under the License.
15
- """The Enriched WebNLG corpus"""
16
-
17
-
18
- import itertools
19
- import os
20
- import xml.etree.cElementTree as ET
21
- from collections import defaultdict
22
- from glob import glob
23
- from os.path import join as pjoin
24
-
25
- import datasets
26
-
27
-
28
- _CITATION = """\
29
- @InProceedings{ferreiraetal2018,
30
- author = "Castro Ferreira, Thiago and Moussallem, Diego and Wubben, Sander and Krahmer, Emiel",
31
- title = "Enriching the WebNLG corpus",
32
- booktitle = "Proceedings of the 11th International Conference on Natural Language Generation",
33
- year = "2018",
34
- series = {INLG'18},
35
- publisher = "Association for Computational Linguistics",
36
- address = "Tilburg, The Netherlands",
37
- }
38
- """
39
-
40
- _DESCRIPTION = """\
41
- WebNLG is a valuable resource and benchmark for the Natural Language Generation (NLG) community. However, as other NLG benchmarks, it only consists of a collection of parallel raw representations and their corresponding textual realizations. This work aimed to provide intermediate representations of the data for the development and evaluation of popular tasks in the NLG pipeline architecture (Reiter and Dale, 2000), such as Discourse Ordering, Lexicalization, Aggregation and Referring Expression Generation.
42
- """
43
-
44
- _HOMEPAGE = "https://github.com/ThiagoCF05/webnlg"
45
-
46
- _LICENSE = "CC Attribution-Noncommercial-Share Alike 4.0 International"
47
-
48
- _SHA = "12ca34880b225ebd1eb9db07c64e8dd76f7e5784"
49
-
50
- _URL = f"https://github.com/ThiagoCF05/webnlg/archive/{_SHA}.zip"
51
-
52
- _FILE_PATHS = {
53
- "en": {
54
- "train": [f"webnlg-{_SHA}/data/v1.5/en/train/{i}triples/" for i in range(1, 8)],
55
- "dev": [f"webnlg-{_SHA}/data/v1.5/en/dev/{i}triples/" for i in range(1, 8)],
56
- "test": [f"webnlg-{_SHA}/data/v1.5/en/test/{i}triples/" for i in range(1, 8)],
57
- },
58
- "de": {
59
- "train": [f"webnlg-{_SHA}/data/v1.5/de/train/{i}triples/" for i in range(1, 8)],
60
- "dev": [f"webnlg-{_SHA}/data/v1.5/de/dev/{i}triples/" for i in range(1, 8)],
61
- },
62
- }
63
-
64
-
65
- def et_to_dict(tree):
66
- """Takes the xml tree within a dataset file and returns a dictionary with entry data"""
67
- dct = {tree.tag: {} if tree.attrib else None}
68
- children = list(tree)
69
- if children:
70
- dd = defaultdict(list)
71
- for dc in map(et_to_dict, children):
72
- for k, v in dc.items():
73
- dd[k].append(v)
74
- dct = {tree.tag: dd}
75
- if tree.attrib:
76
- dct[tree.tag].update((k, v) for k, v in tree.attrib.items())
77
- if tree.text:
78
- text = tree.text.strip()
79
- if children or tree.attrib:
80
- if text:
81
- dct[tree.tag]["text"] = text
82
- else:
83
- dct[tree.tag] = text
84
- return dct
85
-
86
-
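For reference, here is a toy illustration of what `et_to_dict` returns on a hypothetical fragment shaped like the corpus files; the XML below is made up, and the function is assumed to be in scope (for example, copied alongside this snippet).

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment mimicking the benchmark layout parsed by this script.
xml_fragment = """
<benchmark>
  <entries>
    <entry category="Airport" eid="Id1" size="1">
      <modifiedtripleset>
        <mtriple>Aarhus_Airport | cityServed | Aarhus</mtriple>
      </modifiedtripleset>
    </entry>
  </entries>
</benchmark>
"""

root = ET.fromstring(xml_fragment)
entry = et_to_dict(root)["benchmark"]["entries"][0]["entry"][0]
print(entry["category"], entry["size"])          # Airport 1
print(entry["modifiedtripleset"][0]["mtriple"])  # ['Aarhus_Airport | cityServed | Aarhus']
```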
87
- def parse_entry(entry, config_name):
88
- """Takes the dictionary corresponding to an entry and returns a dictionary with:
89
- - Proper feature naming
90
- - Default values
91
- - Proper typing"""
92
- res = {}
93
- otriple_set_list = entry["originaltripleset"]
94
- res["original_triple_sets"] = [{"otriple_set": otriple_set["otriple"]} for otriple_set in otriple_set_list]
95
- mtriple_set_list = entry["modifiedtripleset"]
96
- res["modified_triple_sets"] = [{"mtriple_set": mtriple_set["mtriple"]} for mtriple_set in mtriple_set_list]
97
- res["category"] = entry["category"]
98
- res["eid"] = entry["eid"]
99
- res["size"] = int(entry["size"])
100
- lex = entry["lex"]
101
- # Some entries are malformed, with None instead of the sorted triplet information.
102
- entry_triples = [
103
- ex["sortedtripleset"][0] if ex["sortedtripleset"][0] is not None else {"sentence": []} for ex in lex
104
- ]
105
- # The XML structure is inconsistent; sorted triplets are often split across several dictionaries, so we concatenate them.
106
- sorted_triples = [
107
- list(itertools.chain.from_iterable(item.get("striple", []) for item in entry["sentence"]))
108
- for entry in entry_triples
109
- ]
110
- res["lex"] = {
111
- "comment": [ex.get("comment", "") for ex in lex],
112
- "lid": [ex.get("lid", "") for ex in lex],
113
- # all of the sequences are within their own 1-element sublist, hence the [0]
114
- "text": [ex.get("text", [""])[0] for ex in lex],
115
- "template": [ex.get("template", [""])[0] for ex in lex],
116
- "sorted_triple_sets": sorted_triples,
117
- }
118
- # only present in the en version
119
- if config_name == "en":
120
- res["lex"]["lexicalization"] = [ex.get("lexicalization", [""])[0] for ex in lex]
121
- res["shape"] = entry.get("shape", "")
122
- res["shape_type"] = entry.get("shape_type", "")
123
- return res
124
-
125
-
126
- def xml_file_to_examples(filename, config_name):
127
- tree = ET.parse(filename).getroot()
128
- examples = et_to_dict(tree)["benchmark"]["entries"][0]["entry"]
129
- return [parse_entry(entry, config_name) for entry in examples]
130
-
131
-
132
- class EnrichedWebNlg(datasets.GeneratorBasedBuilder):
133
- """The WebNLG corpus"""
134
-
135
- VERSION = datasets.Version("1.5.0")
136
-
137
- BUILDER_CONFIGS = [
138
- datasets.BuilderConfig(name="en", description="Enriched English version of the WebNLG data"),
139
- datasets.BuilderConfig(name="de", description="Enriched German version of the WebNLG data"),
140
- ]
141
-
142
- def _info(self):
143
- if self.config.name == "en":
144
- features = datasets.Features(
145
- {
146
- "category": datasets.Value("string"),
147
- "size": datasets.Value("int32"),
148
- "eid": datasets.Value("string"),
149
- "original_triple_sets": datasets.Sequence(
150
- {"otriple_set": datasets.Sequence(datasets.Value("string"))}
151
- ),
152
- "modified_triple_sets": datasets.Sequence(
153
- {"mtriple_set": datasets.Sequence(datasets.Value("string"))}
154
- ),
155
- "shape": datasets.Value("string"),
156
- "shape_type": datasets.Value("string"),
157
- "lex": datasets.Sequence(
158
- {
159
- "comment": datasets.Value("string"),
160
- "lid": datasets.Value("string"),
161
- "text": datasets.Value("string"),
162
- "template": datasets.Value("string"),
163
- "sorted_triple_sets": datasets.Sequence(datasets.Value("string")),
164
- # only present in the en version
165
- "lexicalization": datasets.Value("string"),
166
- }
167
- ),
168
- }
169
- )
170
- else:
171
- features = datasets.Features(
172
- {
173
- "category": datasets.Value("string"),
174
- "size": datasets.Value("int32"),
175
- "eid": datasets.Value("string"),
176
- "original_triple_sets": datasets.Sequence(
177
- {"otriple_set": datasets.Sequence(datasets.Value("string"))}
178
- ),
179
- "modified_triple_sets": datasets.Sequence(
180
- {"mtriple_set": datasets.Sequence(datasets.Value("string"))}
181
- ),
182
- "shape": datasets.Value("string"),
183
- "shape_type": datasets.Value("string"),
184
- "lex": datasets.Sequence(
185
- {
186
- "comment": datasets.Value("string"),
187
- "lid": datasets.Value("string"),
188
- "text": datasets.Value("string"),
189
- "template": datasets.Value("string"),
190
- "sorted_triple_sets": datasets.Sequence(datasets.Value("string")),
191
- }
192
- ),
193
- }
194
- )
195
- return datasets.DatasetInfo(
196
- # This is the description that will appear on the datasets page.
197
- description=_DESCRIPTION,
198
- # This defines the different columns of the dataset and their types
199
- features=features, # Here we define them above because they are different between the two configurations
200
- # If there's a common (input, target) tuple from the features,
201
- # specify them here. They'll be used if as_supervised=True in
202
- # builder.as_dataset.
203
- supervised_keys=None,
204
- # Homepage of the dataset for documentation
205
- homepage=_HOMEPAGE,
206
- citation=_CITATION,
207
- license=_LICENSE,
208
- )
209
-
210
- def _split_generators(self, dl_manager):
211
- """Returns SplitGenerators."""
212
- data_dir = dl_manager.download_and_extract(_URL)
213
- # Downloading the repo adds the current commit sha to the directory, so we can't specify the directory
214
- # name in advance.
215
- split_files = {
216
- split: [os.path.join(data_dir, dir_suf) for dir_suf in dir_suffix_list]
217
- for split, dir_suffix_list in _FILE_PATHS[self.config.name].items()
218
- }
219
- return [
220
- datasets.SplitGenerator(
221
- name=split,
222
- # These kwargs will be passed to _generate_examples
223
- gen_kwargs={"filedirs": filedirs},
224
- )
225
- for split, filedirs in split_files.items()
226
- ]
227
-
228
- def _generate_examples(self, filedirs):
229
- """Yields examples."""
230
-
231
- id_ = 0
232
- for xml_location in filedirs:
233
- for xml_file in sorted(glob(pjoin(xml_location, "*.xml"))):
234
- for exple_dict in xml_file_to_examples(xml_file, self.config.name):
235
- id_ += 1
236
- yield id_, exple_dict
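Prior to this parquet conversion, the configurations were produced by the builder above. Below is a sketch of exercising it directly, assuming a `datasets` release that still supports loading script-based builders from a local path.

```python
from datasets import load_dataset

# Point load_dataset at the local loader script; "en" and "de" are the two configurations defined above.
german_train = load_dataset("./enriched_web_nlg.py", name="de", split="train")
print(german_train)                                   # Dataset with 6,940 rows, per the split metadata above
print(german_train[0]["eid"], german_train[0]["size"])
```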