system HF staff committed on
Commit 2cb637c
0 Parent(s):

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,265 @@
+ ---
+ annotations_creators:
+ - found
+ language_creators:
+ - crowdsourced
+ languages:
+   release_v1:
+   - en
+   release_v2:
+   - en
+   release_v2-1:
+   - en
+   release_v2-1_constrained:
+   - en
+   release_v2_constrained:
+   - en
+   release_v3-0_en:
+   - en
+   release_v3-0_ru:
+   - ru
+   webnlg_challenge_2017:
+   - en
+ licenses:
+ - cc-by-sa-3-0
+ - cc-by-nc-sa-4-0
+ - gfdl-1-1
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - extended|other-db_pedia
+ - original
+ task_categories:
+   release_v1:
+   - conditional-text-generation
+   release_v2:
+   - conditional-text-generation
+   release_v2-1:
+   - conditional-text-generation
+   release_v2-1_constrained:
+   - conditional-text-generation
+   release_v2_constrained:
+   - conditional-text-generation
+   release_v3-0_en:
+   - conditional-text-generation
+   - structure-prediction
+   release_v3-0_ru:
+   - conditional-text-generation
+   - structure-prediction
+   webnlg_challenge_2017:
+   - conditional-text-generation
+ task_ids:
+   release_v1:
+   - other-structured-to-text
+   release_v2:
+   - other-structured-to-text
+   release_v2-1:
+   - other-structured-to-text
+   release_v2-1_constrained:
+   - other-structured-to-text
+   release_v2_constrained:
+   - other-structured-to-text
+   release_v3-0_en:
+   - conditional-text-generation
+   - parsing
+   release_v3-0_ru:
+   - conditional-text-generation
+   - parsing
+   webnlg_challenge_2017:
+   - other-structured-to-text
+ ---
+
+ # Dataset Card for WebNLG
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** [WebNLG challenge website](https://webnlg-challenge.loria.fr/)
+ - **Repository:** [WebNLG GitLab repository](https://gitlab.com/shimorina/webnlg-dataset/-/tree/master/)
+ - **Paper:** [Creating Training Corpora for NLG Micro-Planners](https://www.aclweb.org/anthology/P17-1017.pdf)
+ - **Leaderboard:** [WebNLG leaderboards](https://gerbil-nlg.dice-research.org/gerbil/webnlg2020results)
+ - **Point of Contact:** [anastasia.shimorina@loria.fr](mailto:anastasia.shimorina@loria.fr)
+
+ ### Dataset Summary
+
+ The WebNLG challenge consists in mapping data to text. The training data consists
+ of Data/Text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation
+ of these triples. For instance, given the 3 DBpedia triples shown in (a), the aim is to generate a text such as (b).
+
+ ```
+ a. (John_E_Blaha birthDate 1942_08_26) (John_E_Blaha birthPlace San_Antonio) (John_E_Blaha occupation Fighter_pilot)
+ b. John E Blaha, born in San Antonio on 1942-08-26, worked as a fighter pilot
+ ```
+
+ As the example illustrates, the task involves specific NLG subtasks such as sentence segmentation
+ (how to chunk the input data into sentences), lexicalisation (of the DBpedia properties),
+ aggregation (how to avoid repetitions) and surface realisation
+ (how to build a syntactically correct and natural sounding text).
+
+ ### Supported Tasks and Leaderboards
+
+ The dataset supports an `other-structured-to-text` task, which requires a model to take a set of RDF (Resource Description Framework) triples from a database (DBpedia) of the form (subject, property, object) as input and write out a natural language sentence expressing the information contained in the triples. The dataset has supported two challenges: [WebNLG 2017](https://www.aclweb.org/anthology/W17-3518/) and [WebNLG 2020](https://gerbil-nlg.dice-research.org/gerbil/webnlg2020results). Results were ranked by their [METEOR](https://huggingface.co/metrics/meteor) score against the reference, but the leaderboards report a range of other metrics including [BLEU](https://huggingface.co/metrics/bleu), [BERTscore](https://huggingface.co/metrics/bertscore), and [BLEURT](https://huggingface.co/metrics/bleurt). The v3 release (`release_v3.0_en`, `release_v3.0_ru`), used in the WebNLG 2020 challenge, also supports a semantic `parsing` task.
+
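A minimal loading sketch with the `datasets` library (each release listed in the YAML header above is a separate configuration; the field layout follows the instance format shown under Data Instances):

```python
from datasets import load_dataset

# every WebNLG release is exposed as a configuration of the same builder
dataset = load_dataset("web_nlg", "webnlg_challenge_2017")

example = dataset["train"][0]
print(example["modified_triple_sets"]["mtriple_set"][0])  # input: list of "subject | property | object" strings
print(example["lex"]["text"])                             # output: list of reference verbalisations
```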
+ ### Languages
+
+ All releases contain English (`en`) data. The v3 release (`release_v3.0_ru`) also contains Russian (`ru`) examples.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ A typical example contains the original RDF triples in the set, a modified version which was presented to crowd workers, and a set of possible verbalizations for this set of triples:
+ ```
+ {'2017_test_category': '',
+  'category': 'Politician',
+  'eid': 'Id10',
+  'lex': {'comment': ['good', 'good', 'good'],
+          'lid': ['Id1', 'Id2', 'Id3'],
+          'text': ['World War II had Chiang Kai-shek as a commander and United States Army soldier Abner W. Sibal.',
+                   'Abner W. Sibal served in the United States Army during the Second World War and during that war Chiang Kai-shek was one of the commanders.',
+                   'Abner W. Sibal, served in the United States Army and fought in World War II, one of the commanders of which, was Chiang Kai-shek.']},
+  'modified_triple_sets': {'mtriple_set': [['Abner_W._Sibal | battle | World_War_II',
+                                            'World_War_II | commander | Chiang_Kai-shek',
+                                            'Abner_W._Sibal | militaryBranch | United_States_Army']]},
+  'original_triple_sets': {'otriple_set': [['Abner_W._Sibal | battles | World_War_II', 'World_War_II | commander | Chiang_Kai-shek', 'Abner_W._Sibal | branch | United_States_Army'],
+                                           ['Abner_W._Sibal | militaryBranch | United_States_Army',
+                                            'Abner_W._Sibal | battles | World_War_II',
+                                            'World_War_II | commander | Chiang_Kai-shek']]},
+  'shape': '(X (X) (X (X)))',
+  'shape_type': 'mixed',
+  'size': 3}
+ ```
+
+ ### Data Fields
+
+ The following fields can be found in the instances (see the access sketch below the list):
+ - `category`: the category of the DBpedia entities present in the RDF triples.
+ - `eid`: an example ID, only unique per split per category.
+ - `size`: number of RDF triples in the set.
+ - `shape`: (for v3 only) each set of RDF triples is a tree, which is characterised by its shape and shape type. `shape` is a string representation of the tree with nested parentheses where X is a node (see [Newick tree format](https://en.wikipedia.org/wiki/Newick_format)).
+ - `shape_type`: (for v3 only) the type of the tree shape, which can be: `chain` (the object of one triple is the subject of the other); `sibling` (triples with a shared subject); `mixed` (both chain and sibling types present).
+ - `2017_test_category`: (for `webnlg_challenge_2017`) tells whether the set of RDF triples was present in the training set or not.
+ - `lex`: the lexicalizations, with:
+   - `text`: the text to be predicted.
+   - `lid`: a lexicalization ID, unique per example.
+   - `comment`: a rating given by crowd workers, either `good` or `bad`.
+
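A short sketch of reading these fields from a loaded split, using `release_v3.0_en` as an example (the same access pattern applies to the other configurations):

```python
from datasets import load_dataset

dataset = load_dataset("web_nlg", "release_v3.0_en")

example = dataset["dev"][0]
print(example["eid"], example["category"], example["size"])
print(example["shape"], example["shape_type"])            # v3-only tree description
print(example["modified_triple_sets"]["mtriple_set"][0])  # the triples to verbalise
for lid, comment, text in zip(example["lex"]["lid"], example["lex"]["comment"], example["lex"]["text"]):
    print(lid, comment, text)                             # one line per reference verbalisation
```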
+ ### Data Splits
+
+ [More Information Needed]
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ The dataset uses the `cc-by-nc-sa-4.0` license. The source DBpedia project uses the `cc-by-sa-3.0` and `gfdl-1.1` licenses.
+
+ ### Citation Information
+
+ - If you use the WebNLG corpus, cite:
+ ```
+ @inproceedings{web_nlg,
+   author    = {Claire Gardent and
+                Anastasia Shimorina and
+                Shashi Narayan and
+                Laura Perez{-}Beltrachini},
+   editor    = {Regina Barzilay and
+                Min{-}Yen Kan},
+   title     = {Creating Training Corpora for {NLG} Micro-Planners},
+   booktitle = {Proceedings of the 55th Annual Meeting of the Association for Computational
+                Linguistics, {ACL} 2017, Vancouver, Canada, July 30 - August 4, Volume
+                1: Long Papers},
+   pages     = {179--188},
+   publisher = {Association for Computational Linguistics},
+   year      = {2017},
+   url       = {https://doi.org/10.18653/v1/P17-1017},
+   doi       = {10.18653/v1/P17-1017}
+ }
+ ```
+
+ - If you use `release_v2_constrained` in particular, cite:
+ ```
+ @InProceedings{shimorina2018handling,
+   author    = "Shimorina, Anastasia
+                and Gardent, Claire",
+   title     = "Handling Rare Items in Data-to-Text Generation",
+   booktitle = "Proceedings of the 11th International Conference on Natural Language Generation",
+   year      = "2018",
+   publisher = "Association for Computational Linguistics",
+   pages     = "360--370",
+   location  = "Tilburg University, The Netherlands",
+   url       = "http://aclweb.org/anthology/W18-6543"
+ }
+ ```
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"webnlg_challenge_2017": {"description": "The WebNLG challenge consists in mapping data to text. The training data consists\nof Data/Text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation\nof these triples. For instance, given the 3 DBpedia triples shown in (a), the aim is to generate a text such as (b).\n\na. (John_E_Blaha birthDate 1942_08_26) (John_E_Blaha birthPlace San_Antonio) (John_E_Blaha occupation Fighter_pilot)\nb. John E Blaha, born in San Antonio on 1942-08-26, worked as a fighter pilot\n\nAs the example illustrates, the task involves specific NLG subtasks such as sentence segmentation\n(how to chunk the input data into sentences), lexicalisation (of the DBpedia properties),\naggregation (how to avoid repetitions) and surface realisation\n(how to build a syntactically correct and natural sounding text).\n", "citation": "@inproceedings{web_nlg,\n author = {Claire Gardent and\n Anastasia Shimorina and\n Shashi Narayan and\n Laura Perez{-}Beltrachini},\n editor = {Regina Barzilay and\n Min{-}Yen Kan},\n title = {Creating Training Corpora for {NLG} Micro-Planners},\n booktitle = {Proceedings of the 55th Annual Meeting of the\n Association for Computational Linguistics,\n {ACL} 2017, Vancouver, Canada, July 30 - August 4,\n Volume 1: Long Papers},\n pages = {179--188},\n publisher = {Association for Computational Linguistics},\n year = {2017},\n url = {https://doi.org/10.18653/v1/P17-1017},\n doi = {10.18653/v1/P17-1017}\n}\n", "homepage": "https://webnlg-challenge.loria.fr/", "license": "", "features": {"category": {"dtype": "string", "id": null, "_type": "Value"}, "size": {"dtype": "int32", "id": null, "_type": "Value"}, "eid": {"dtype": "string", "id": null, "_type": "Value"}, "original_triple_sets": {"feature": {"otriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "modified_triple_sets": {"feature": {"mtriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "shape": {"dtype": "string", "id": null, "_type": "Value"}, "shape_type": {"dtype": "string", "id": null, "_type": "Value"}, "lex": {"feature": {"comment": {"dtype": "string", "id": null, "_type": "Value"}, "lid": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "2017_test_category": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "web_nlg", "config_name": "webnlg_challenge_2017", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 5439100, "num_examples": 6940, "dataset_name": "web_nlg"}, "dev": {"name": "dev", "num_bytes": 687093, "num_examples": 872, "dataset_name": "web_nlg"}, "test": {"name": "test", "num_bytes": 3037685, "num_examples": 4615, "dataset_name": "web_nlg"}}, "download_checksums": {"https://gitlab.com/shimorina/webnlg-dataset/-/archive/master/webnlg-dataset-master.zip": {"num_bytes": 25390588, "checksum": "287290957f7352c9e3b64cdc5957faba8ed5d835f34f2106ba5666a77fdb1cfb"}}, "download_size": 25390588, "post_processing_size": null, "dataset_size": 9163878, "size_in_bytes": 34554466}, "release_v1": {"description": "The WebNLG challenge consists in mapping data 
to text. The training data consists\nof Data/Text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation\nof these triples. For instance, given the 3 DBpedia triples shown in (a), the aim is to generate a text such as (b).\n\na. (John_E_Blaha birthDate 1942_08_26) (John_E_Blaha birthPlace San_Antonio) (John_E_Blaha occupation Fighter_pilot)\nb. John E Blaha, born in San Antonio on 1942-08-26, worked as a fighter pilot\n\nAs the example illustrates, the task involves specific NLG subtasks such as sentence segmentation\n(how to chunk the input data into sentences), lexicalisation (of the DBpedia properties),\naggregation (how to avoid repetitions) and surface realisation\n(how to build a syntactically correct and natural sounding text).\n", "citation": "@inproceedings{web_nlg,\n author = {Claire Gardent and\n Anastasia Shimorina and\n Shashi Narayan and\n Laura Perez{-}Beltrachini},\n editor = {Regina Barzilay and\n Min{-}Yen Kan},\n title = {Creating Training Corpora for {NLG} Micro-Planners},\n booktitle = {Proceedings of the 55th Annual Meeting of the\n Association for Computational Linguistics,\n {ACL} 2017, Vancouver, Canada, July 30 - August 4,\n Volume 1: Long Papers},\n pages = {179--188},\n publisher = {Association for Computational Linguistics},\n year = {2017},\n url = {https://doi.org/10.18653/v1/P17-1017},\n doi = {10.18653/v1/P17-1017}\n}\n", "homepage": "https://webnlg-challenge.loria.fr/", "license": "", "features": {"category": {"dtype": "string", "id": null, "_type": "Value"}, "size": {"dtype": "int32", "id": null, "_type": "Value"}, "eid": {"dtype": "string", "id": null, "_type": "Value"}, "original_triple_sets": {"feature": {"otriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "modified_triple_sets": {"feature": {"mtriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "shape": {"dtype": "string", "id": null, "_type": "Value"}, "shape_type": {"dtype": "string", "id": null, "_type": "Value"}, "lex": {"feature": {"comment": {"dtype": "string", "id": null, "_type": "Value"}, "lid": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "2017_test_category": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "web_nlg", "config_name": "release_v1", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"full": {"name": "full", "num_bytes": 11361516, "num_examples": 14237, "dataset_name": "web_nlg"}}, "download_checksums": {"https://gitlab.com/shimorina/webnlg-dataset/-/archive/master/webnlg-dataset-master.zip": {"num_bytes": 25390588, "checksum": "287290957f7352c9e3b64cdc5957faba8ed5d835f34f2106ba5666a77fdb1cfb"}}, "download_size": 25390588, "post_processing_size": null, "dataset_size": 11361516, "size_in_bytes": 36752104}, "release_v2": {"description": "The WebNLG challenge consists in mapping data to text. The training data consists\nof Data/Text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation\nof these triples. For instance, given the 3 DBpedia triples shown in (a), the aim is to generate a text such as (b).\n\na. 
(John_E_Blaha birthDate 1942_08_26) (John_E_Blaha birthPlace San_Antonio) (John_E_Blaha occupation Fighter_pilot)\nb. John E Blaha, born in San Antonio on 1942-08-26, worked as a fighter pilot\n\nAs the example illustrates, the task involves specific NLG subtasks such as sentence segmentation\n(how to chunk the input data into sentences), lexicalisation (of the DBpedia properties),\naggregation (how to avoid repetitions) and surface realisation\n(how to build a syntactically correct and natural sounding text).\n", "citation": "@inproceedings{web_nlg,\n author = {Claire Gardent and\n Anastasia Shimorina and\n Shashi Narayan and\n Laura Perez{-}Beltrachini},\n editor = {Regina Barzilay and\n Min{-}Yen Kan},\n title = {Creating Training Corpora for {NLG} Micro-Planners},\n booktitle = {Proceedings of the 55th Annual Meeting of the\n Association for Computational Linguistics,\n {ACL} 2017, Vancouver, Canada, July 30 - August 4,\n Volume 1: Long Papers},\n pages = {179--188},\n publisher = {Association for Computational Linguistics},\n year = {2017},\n url = {https://doi.org/10.18653/v1/P17-1017},\n doi = {10.18653/v1/P17-1017}\n}\n", "homepage": "https://webnlg-challenge.loria.fr/", "license": "", "features": {"category": {"dtype": "string", "id": null, "_type": "Value"}, "size": {"dtype": "int32", "id": null, "_type": "Value"}, "eid": {"dtype": "string", "id": null, "_type": "Value"}, "original_triple_sets": {"feature": {"otriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "modified_triple_sets": {"feature": {"mtriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "shape": {"dtype": "string", "id": null, "_type": "Value"}, "shape_type": {"dtype": "string", "id": null, "_type": "Value"}, "lex": {"feature": {"comment": {"dtype": "string", "id": null, "_type": "Value"}, "lid": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "2017_test_category": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "web_nlg", "config_name": "release_v2", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 10538445, "num_examples": 12876, "dataset_name": "web_nlg"}, "dev": {"name": "dev", "num_bytes": 1323317, "num_examples": 1619, "dataset_name": "web_nlg"}, "test": {"name": "test", "num_bytes": 1288814, "num_examples": 1600, "dataset_name": "web_nlg"}}, "download_checksums": {"https://gitlab.com/shimorina/webnlg-dataset/-/archive/master/webnlg-dataset-master.zip": {"num_bytes": 25390588, "checksum": "287290957f7352c9e3b64cdc5957faba8ed5d835f34f2106ba5666a77fdb1cfb"}}, "download_size": 25390588, "post_processing_size": null, "dataset_size": 13150576, "size_in_bytes": 38541164}, "release_v2_constrained": {"description": "The WebNLG challenge consists in mapping data to text. The training data consists\nof Data/Text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation\nof these triples. For instance, given the 3 DBpedia triples shown in (a), the aim is to generate a text such as (b).\n\na. 
(John_E_Blaha birthDate 1942_08_26) (John_E_Blaha birthPlace San_Antonio) (John_E_Blaha occupation Fighter_pilot)\nb. John E Blaha, born in San Antonio on 1942-08-26, worked as a fighter pilot\n\nAs the example illustrates, the task involves specific NLG subtasks such as sentence segmentation\n(how to chunk the input data into sentences), lexicalisation (of the DBpedia properties),\naggregation (how to avoid repetitions) and surface realisation\n(how to build a syntactically correct and natural sounding text).\n", "citation": "@inproceedings{web_nlg,\n author = {Claire Gardent and\n Anastasia Shimorina and\n Shashi Narayan and\n Laura Perez{-}Beltrachini},\n editor = {Regina Barzilay and\n Min{-}Yen Kan},\n title = {Creating Training Corpora for {NLG} Micro-Planners},\n booktitle = {Proceedings of the 55th Annual Meeting of the\n Association for Computational Linguistics,\n {ACL} 2017, Vancouver, Canada, July 30 - August 4,\n Volume 1: Long Papers},\n pages = {179--188},\n publisher = {Association for Computational Linguistics},\n year = {2017},\n url = {https://doi.org/10.18653/v1/P17-1017},\n doi = {10.18653/v1/P17-1017}\n}\n", "homepage": "https://webnlg-challenge.loria.fr/", "license": "", "features": {"category": {"dtype": "string", "id": null, "_type": "Value"}, "size": {"dtype": "int32", "id": null, "_type": "Value"}, "eid": {"dtype": "string", "id": null, "_type": "Value"}, "original_triple_sets": {"feature": {"otriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "modified_triple_sets": {"feature": {"mtriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "shape": {"dtype": "string", "id": null, "_type": "Value"}, "shape_type": {"dtype": "string", "id": null, "_type": "Value"}, "lex": {"feature": {"comment": {"dtype": "string", "id": null, "_type": "Value"}, "lid": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "2017_test_category": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "web_nlg", "config_name": "release_v2_constrained", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 10560502, "num_examples": 12895, "dataset_name": "web_nlg"}, "dev": {"name": "dev", "num_bytes": 1385570, "num_examples": 1594, "dataset_name": "web_nlg"}, "test": {"name": "test", "num_bytes": 1207294, "num_examples": 1606, "dataset_name": "web_nlg"}}, "download_checksums": {"https://gitlab.com/shimorina/webnlg-dataset/-/archive/master/webnlg-dataset-master.zip": {"num_bytes": 25390588, "checksum": "287290957f7352c9e3b64cdc5957faba8ed5d835f34f2106ba5666a77fdb1cfb"}}, "download_size": 25390588, "post_processing_size": null, "dataset_size": 13153366, "size_in_bytes": 38543954}, "release_v2.1": {"description": "The WebNLG challenge consists in mapping data to text. The training data consists\nof Data/Text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation\nof these triples. For instance, given the 3 DBpedia triples shown in (a), the aim is to generate a text such as (b).\n\na. 
(John_E_Blaha birthDate 1942_08_26) (John_E_Blaha birthPlace San_Antonio) (John_E_Blaha occupation Fighter_pilot)\nb. John E Blaha, born in San Antonio on 1942-08-26, worked as a fighter pilot\n\nAs the example illustrates, the task involves specific NLG subtasks such as sentence segmentation\n(how to chunk the input data into sentences), lexicalisation (of the DBpedia properties),\naggregation (how to avoid repetitions) and surface realisation\n(how to build a syntactically correct and natural sounding text).\n", "citation": "@inproceedings{web_nlg,\n author = {Claire Gardent and\n Anastasia Shimorina and\n Shashi Narayan and\n Laura Perez{-}Beltrachini},\n editor = {Regina Barzilay and\n Min{-}Yen Kan},\n title = {Creating Training Corpora for {NLG} Micro-Planners},\n booktitle = {Proceedings of the 55th Annual Meeting of the\n Association for Computational Linguistics,\n {ACL} 2017, Vancouver, Canada, July 30 - August 4,\n Volume 1: Long Papers},\n pages = {179--188},\n publisher = {Association for Computational Linguistics},\n year = {2017},\n url = {https://doi.org/10.18653/v1/P17-1017},\n doi = {10.18653/v1/P17-1017}\n}\n", "homepage": "https://webnlg-challenge.loria.fr/", "license": "", "features": {"category": {"dtype": "string", "id": null, "_type": "Value"}, "size": {"dtype": "int32", "id": null, "_type": "Value"}, "eid": {"dtype": "string", "id": null, "_type": "Value"}, "original_triple_sets": {"feature": {"otriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "modified_triple_sets": {"feature": {"mtriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "shape": {"dtype": "string", "id": null, "_type": "Value"}, "shape_type": {"dtype": "string", "id": null, "_type": "Value"}, "lex": {"feature": {"comment": {"dtype": "string", "id": null, "_type": "Value"}, "lid": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "2017_test_category": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "web_nlg", "config_name": "release_v2.1", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 10556881, "num_examples": 12876, "dataset_name": "web_nlg"}, "dev": {"name": "dev", "num_bytes": 1325368, "num_examples": 1619, "dataset_name": "web_nlg"}, "test": {"name": "test", "num_bytes": 1289748, "num_examples": 1600, "dataset_name": "web_nlg"}}, "download_checksums": {"https://gitlab.com/shimorina/webnlg-dataset/-/archive/master/webnlg-dataset-master.zip": {"num_bytes": 25390588, "checksum": "287290957f7352c9e3b64cdc5957faba8ed5d835f34f2106ba5666a77fdb1cfb"}}, "download_size": 25390588, "post_processing_size": null, "dataset_size": 13171997, "size_in_bytes": 38562585}, "release_v2.1_constrained": {"description": "The WebNLG challenge consists in mapping data to text. The training data consists\nof Data/Text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation\nof these triples. For instance, given the 3 DBpedia triples shown in (a), the aim is to generate a text such as (b).\n\na. 
(John_E_Blaha birthDate 1942_08_26) (John_E_Blaha birthPlace San_Antonio) (John_E_Blaha occupation Fighter_pilot)\nb. John E Blaha, born in San Antonio on 1942-08-26, worked as a fighter pilot\n\nAs the example illustrates, the task involves specific NLG subtasks such as sentence segmentation\n(how to chunk the input data into sentences), lexicalisation (of the DBpedia properties),\naggregation (how to avoid repetitions) and surface realisation\n(how to build a syntactically correct and natural sounding text).\n", "citation": "@inproceedings{web_nlg,\n author = {Claire Gardent and\n Anastasia Shimorina and\n Shashi Narayan and\n Laura Perez{-}Beltrachini},\n editor = {Regina Barzilay and\n Min{-}Yen Kan},\n title = {Creating Training Corpora for {NLG} Micro-Planners},\n booktitle = {Proceedings of the 55th Annual Meeting of the\n Association for Computational Linguistics,\n {ACL} 2017, Vancouver, Canada, July 30 - August 4,\n Volume 1: Long Papers},\n pages = {179--188},\n publisher = {Association for Computational Linguistics},\n year = {2017},\n url = {https://doi.org/10.18653/v1/P17-1017},\n doi = {10.18653/v1/P17-1017}\n}\n", "homepage": "https://webnlg-challenge.loria.fr/", "license": "", "features": {"category": {"dtype": "string", "id": null, "_type": "Value"}, "size": {"dtype": "int32", "id": null, "_type": "Value"}, "eid": {"dtype": "string", "id": null, "_type": "Value"}, "original_triple_sets": {"feature": {"otriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "modified_triple_sets": {"feature": {"mtriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "shape": {"dtype": "string", "id": null, "_type": "Value"}, "shape_type": {"dtype": "string", "id": null, "_type": "Value"}, "lex": {"feature": {"comment": {"dtype": "string", "id": null, "_type": "Value"}, "lid": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "2017_test_category": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "web_nlg", "config_name": "release_v2.1_constrained", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 10747616, "num_examples": 12895, "dataset_name": "web_nlg"}, "dev": {"name": "dev", "num_bytes": 1247988, "num_examples": 1594, "dataset_name": "web_nlg"}, "test": {"name": "test", "num_bytes": 1176393, "num_examples": 1606, "dataset_name": "web_nlg"}}, "download_checksums": {"https://gitlab.com/shimorina/webnlg-dataset/-/archive/master/webnlg-dataset-master.zip": {"num_bytes": 25390588, "checksum": "287290957f7352c9e3b64cdc5957faba8ed5d835f34f2106ba5666a77fdb1cfb"}}, "download_size": 25390588, "post_processing_size": null, "dataset_size": 13171997, "size_in_bytes": 38562585}, "release_v3.0_en": {"description": "The WebNLG challenge consists in mapping data to text. The training data consists\nof Data/Text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation\nof these triples. For instance, given the 3 DBpedia triples shown in (a), the aim is to generate a text such as (b).\n\na. 
(John_E_Blaha birthDate 1942_08_26) (John_E_Blaha birthPlace San_Antonio) (John_E_Blaha occupation Fighter_pilot)\nb. John E Blaha, born in San Antonio on 1942-08-26, worked as a fighter pilot\n\nAs the example illustrates, the task involves specific NLG subtasks such as sentence segmentation\n(how to chunk the input data into sentences), lexicalisation (of the DBpedia properties),\naggregation (how to avoid repetitions) and surface realisation\n(how to build a syntactically correct and natural sounding text).\n", "citation": "@inproceedings{web_nlg,\n author = {Claire Gardent and\n Anastasia Shimorina and\n Shashi Narayan and\n Laura Perez{-}Beltrachini},\n editor = {Regina Barzilay and\n Min{-}Yen Kan},\n title = {Creating Training Corpora for {NLG} Micro-Planners},\n booktitle = {Proceedings of the 55th Annual Meeting of the\n Association for Computational Linguistics,\n {ACL} 2017, Vancouver, Canada, July 30 - August 4,\n Volume 1: Long Papers},\n pages = {179--188},\n publisher = {Association for Computational Linguistics},\n year = {2017},\n url = {https://doi.org/10.18653/v1/P17-1017},\n doi = {10.18653/v1/P17-1017}\n}\n", "homepage": "https://webnlg-challenge.loria.fr/", "license": "", "features": {"category": {"dtype": "string", "id": null, "_type": "Value"}, "size": {"dtype": "int32", "id": null, "_type": "Value"}, "eid": {"dtype": "string", "id": null, "_type": "Value"}, "original_triple_sets": {"feature": {"otriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "modified_triple_sets": {"feature": {"mtriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "shape": {"dtype": "string", "id": null, "_type": "Value"}, "shape_type": {"dtype": "string", "id": null, "_type": "Value"}, "lex": {"feature": {"comment": {"dtype": "string", "id": null, "_type": "Value"}, "lid": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "2017_test_category": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "web_nlg", "config_name": "release_v3.0_en", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 10784576, "num_examples": 13211, "dataset_name": "web_nlg"}, "dev": {"name": "dev", "num_bytes": 1356359, "num_examples": 1667, "dataset_name": "web_nlg"}, "test": {"name": "test", "num_bytes": 25813556, "num_examples": 39991, "dataset_name": "web_nlg"}}, "download_checksums": {"https://gitlab.com/shimorina/webnlg-dataset/-/archive/master/webnlg-dataset-master.zip": {"num_bytes": 25390588, "checksum": "287290957f7352c9e3b64cdc5957faba8ed5d835f34f2106ba5666a77fdb1cfb"}}, "download_size": 25390588, "post_processing_size": null, "dataset_size": 37954491, "size_in_bytes": 63345079}, "release_v3.0_ru": {"description": "The WebNLG challenge consists in mapping data to text. The training data consists\nof Data/Text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation\nof these triples. For instance, given the 3 DBpedia triples shown in (a), the aim is to generate a text such as (b).\n\na. 
(John_E_Blaha birthDate 1942_08_26) (John_E_Blaha birthPlace San_Antonio) (John_E_Blaha occupation Fighter_pilot)\nb. John E Blaha, born in San Antonio on 1942-08-26, worked as a fighter pilot\n\nAs the example illustrates, the task involves specific NLG subtasks such as sentence segmentation\n(how to chunk the input data into sentences), lexicalisation (of the DBpedia properties),\naggregation (how to avoid repetitions) and surface realisation\n(how to build a syntactically correct and natural sounding text).\n", "citation": "@inproceedings{web_nlg,\n author = {Claire Gardent and\n Anastasia Shimorina and\n Shashi Narayan and\n Laura Perez{-}Beltrachini},\n editor = {Regina Barzilay and\n Min{-}Yen Kan},\n title = {Creating Training Corpora for {NLG} Micro-Planners},\n booktitle = {Proceedings of the 55th Annual Meeting of the\n Association for Computational Linguistics,\n {ACL} 2017, Vancouver, Canada, July 30 - August 4,\n Volume 1: Long Papers},\n pages = {179--188},\n publisher = {Association for Computational Linguistics},\n year = {2017},\n url = {https://doi.org/10.18653/v1/P17-1017},\n doi = {10.18653/v1/P17-1017}\n}\n", "homepage": "https://webnlg-challenge.loria.fr/", "license": "", "features": {"category": {"dtype": "string", "id": null, "_type": "Value"}, "size": {"dtype": "int32", "id": null, "_type": "Value"}, "eid": {"dtype": "string", "id": null, "_type": "Value"}, "original_triple_sets": {"feature": {"otriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "modified_triple_sets": {"feature": {"mtriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "shape": {"dtype": "string", "id": null, "_type": "Value"}, "shape_type": {"dtype": "string", "id": null, "_type": "Value"}, "lex": {"feature": {"comment": {"dtype": "string", "id": null, "_type": "Value"}, "lid": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "2017_test_category": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "web_nlg", "config_name": "release_v3.0_ru", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 7972852, "num_examples": 5573, "dataset_name": "web_nlg"}, "dev": {"name": "dev", "num_bytes": 1097883, "num_examples": 790, "dataset_name": "web_nlg"}, "test": {"name": "test", "num_bytes": 18023181, "num_examples": 23870, "dataset_name": "web_nlg"}}, "download_checksums": {"https://gitlab.com/shimorina/webnlg-dataset/-/archive/master/webnlg-dataset-master.zip": {"num_bytes": 25390588, "checksum": "287290957f7352c9e3b64cdc5957faba8ed5d835f34f2106ba5666a77fdb1cfb"}}, "download_size": 25390588, "post_processing_size": null, "dataset_size": 27093916, "size_in_bytes": 52484504}}
dummy/release_v1/0.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:927fe13bdca5ec9b74fe7a89736538705282091cb01fa677e81176d061dfd91b
+ size 28001
dummy/release_v2.1/0.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a846066b8c8e41f4ed7e73fd15fd94484c85a24ea8fb8916293d7183c3067f39
+ size 143046
dummy/release_v2.1_constrained/0.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ae2ccc8ba3b44416596f14b378c2b97e085fcb592c2f08be3f64398bfe161237
+ size 117960
dummy/release_v2/0.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f5345b127c1dbb36ebdf44230dcca370f5bccc9f9305ed8e918bb813665082b6
+ size 52902
dummy/release_v2_constrained/0.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:125b87290b2b3cdeccc9c843dbadf2bee65908472653209a252464d449216623
+ size 84166
dummy/release_v3.0_en/0.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ac7629784540a3d7924ca8a30da22fe0b2fccc194c177438a6dfa12ce8ab84c9
+ size 20018
dummy/release_v3.0_ru/0.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5907f25ac8ba6450edb8d4b9a9994151b3b941f097b622f443a9e43f4a86a630
+ size 26568
dummy/webnlg_challenge_2017/0.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f36d7fcf4a9f106c3078421d50fed63de64e53e0a6710a5845e785c24809359d
+ size 184135
web_nlg.py ADDED
@@ -0,0 +1,245 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """The WebNLG corpus"""
+
+ from __future__ import absolute_import, division, print_function
+
+ import os
+ import xml.etree.cElementTree as ET
+ from collections import defaultdict
+ from glob import glob
+ from os.path import join as pjoin
+
+ import datasets
+
+
+ _CITATION = """\
+ @inproceedings{web_nlg,
+   author    = {Claire Gardent and
+                Anastasia Shimorina and
+                Shashi Narayan and
+                Laura Perez{-}Beltrachini},
+   editor    = {Regina Barzilay and
+                Min{-}Yen Kan},
+   title     = {Creating Training Corpora for {NLG} Micro-Planners},
+   booktitle = {Proceedings of the 55th Annual Meeting of the
+                Association for Computational Linguistics,
+                {ACL} 2017, Vancouver, Canada, July 30 - August 4,
+                Volume 1: Long Papers},
+   pages     = {179--188},
+   publisher = {Association for Computational Linguistics},
+   year      = {2017},
+   url       = {https://doi.org/10.18653/v1/P17-1017},
+   doi       = {10.18653/v1/P17-1017}
+ }
+ """
+
+ _DESCRIPTION = """\
+ The WebNLG challenge consists in mapping data to text. The training data consists
+ of Data/Text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation
+ of these triples. For instance, given the 3 DBpedia triples shown in (a), the aim is to generate a text such as (b).
+
+ a. (John_E_Blaha birthDate 1942_08_26) (John_E_Blaha birthPlace San_Antonio) (John_E_Blaha occupation Fighter_pilot)
+ b. John E Blaha, born in San Antonio on 1942-08-26, worked as a fighter pilot
+
+ As the example illustrates, the task involves specific NLG subtasks such as sentence segmentation
+ (how to chunk the input data into sentences), lexicalisation (of the DBpedia properties),
+ aggregation (how to avoid repetitions) and surface realisation
+ (how to build a syntactically correct and natural sounding text).
+ """
+
+ _URL = "https://gitlab.com/shimorina/webnlg-dataset/-/archive/master/webnlg-dataset-master.zip"
+
+ _FILE_PATHS = {  # XML directories for each config and split, relative to the extracted archive root
+     "webnlg_challenge_2017": {
+         "train": [f"webnlg_challenge_2017/train/{i}triples/" for i in range(1, 8)],
+         "dev": [f"webnlg_challenge_2017/dev/{i}triples/" for i in range(1, 8)],
+         "test": ["webnlg_challenge_2017/test/"],
+     },
+     "release_v1": {"full": [f"release_v1/xml/{i}triples" for i in range(1, 8)]},
+     "release_v2": {
+         "train": [f"release_v2/xml/train/{i}triples/" for i in range(1, 8)],
+         "dev": [f"release_v2/xml/dev/{i}triples/" for i in range(1, 8)],
+         "test": [f"release_v2/xml/test/{i}triples/" for i in range(1, 8)],
+     },
+     "release_v2_constrained": {
+         "train": [f"release_v2_constrained/xml/train/{i}triples/" for i in range(1, 8)],
+         "dev": [f"release_v2_constrained/xml/dev/{i}triples/" for i in range(1, 8)],
+         "test": [f"release_v2_constrained/xml/test/{i}triples/" for i in range(1, 8)],
+     },
+     "release_v2.1": {
+         "train": [f"release_v2.1/xml/train/{i}triples/" for i in range(1, 8)],
+         "dev": [f"release_v2.1/xml/dev/{i}triples/" for i in range(1, 8)],
+         "test": [f"release_v2.1/xml/test/{i}triples/" for i in range(1, 8)],
+     },
+     "release_v2.1_constrained": {
+         "train": [f"release_v2.1_constrained/xml/train/{i}triples/" for i in range(1, 8)],
+         "dev": [f"release_v2.1_constrained/xml/dev/{i}triples/" for i in range(1, 8)],
+         "test": [f"release_v2.1_constrained/xml/test/{i}triples/" for i in range(1, 8)],
+     },
+     "release_v3.0_en": {
+         "train": [f"release_v3.0/en/train/{i}triples/" for i in range(1, 8)],
+         "dev": [f"release_v3.0/en/dev/{i}triples/" for i in range(1, 8)],
+         "test": [f"release_v3.0/en/test/" for i in range(1, 8)],
+     },
+     "release_v3.0_ru": {
+         "train": [f"release_v3.0/ru/train/{i}triples/" for i in range(1, 8)],
+         "dev": [f"release_v3.0/ru/dev/{i}triples/" for i in range(1, 8)],
+         "test": [f"release_v3.0/ru/test/" for i in range(1, 8)],
+     },
+ }
+
+
+ def et_to_dict(tree):  # recursively convert an ElementTree element into nested dicts/lists
+     dct = {tree.tag: {} if tree.attrib else None}
+     children = list(tree)
+     if children:
+         dd = defaultdict(list)
+         for dc in map(et_to_dict, children):
+             for k, v in dc.items():
+                 dd[k].append(v)
+         dct = {tree.tag: dd}
+     if tree.attrib:
+         dct[tree.tag].update((k, v) for k, v in tree.attrib.items())
+     if tree.text:
+         text = tree.text.strip()
+         if children or tree.attrib:
+             if text:
+                 dct[tree.tag]["text"] = text
+         else:
+             dct[tree.tag] = text
+     return dct
+
+
+ def parse_entry(entry):  # flatten one parsed <entry> element into the feature dictionary used by the builder
+     res = {}
+     otriple_set_list = entry["originaltripleset"]
+     res["original_triple_sets"] = [{"otriple_set": otriple_set["otriple"]} for otriple_set in otriple_set_list]
+     mtriple_set_list = entry["modifiedtripleset"]
+     res["modified_triple_sets"] = [{"mtriple_set": mtriple_set["mtriple"]} for mtriple_set in mtriple_set_list]
+     res["category"] = entry["category"]
+     res["eid"] = entry["eid"]
+     res["size"] = int(entry["size"])
+     res["lex"] = {
+         "comment": [ex.get("comment", "") for ex in entry.get("lex", [])],
+         "lid": [ex.get("lid", "") for ex in entry.get("lex", [])],
+         "text": [ex.get("text", "") for ex in entry.get("lex", [])],
+     }
+     res["shape"] = entry.get("shape", "")
+     res["shape_type"] = entry.get("shape_type", "")
+     return res
+
+
+ def xml_file_to_examples(filename):  # parse one WebNLG XML file and return its entries as feature dicts
+     tree = ET.parse(filename).getroot()
+     examples = et_to_dict(tree)["benchmark"]["entries"][0]["entry"]
+     return [parse_entry(entry) for entry in examples]
+
+
+ class WebNlg(datasets.GeneratorBasedBuilder):
+     """The WebNLG corpus"""
+
+     VERSION = datasets.Version("3.0.0")
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(
+             name="webnlg_challenge_2017", description="WebNLG Challenge 2017 data, covers 10 DBpedia categories."
+         ),
+         datasets.BuilderConfig(name="release_v1", description="Covers 15 DBpedia categories."),
+         datasets.BuilderConfig(
+             name="release_v2", description="Includes release_v1 and test data from the WebNLG challenge."
+         ),
+         datasets.BuilderConfig(
+             name="release_v2_constrained",
+             description="Same data as v2, the split into train/dev/test is more challenging.",
+         ),
+         datasets.BuilderConfig(name="release_v2.1", description="5,667 texts from v2 were cleaned."),
+         datasets.BuilderConfig(
+             name="release_v2.1_constrained",
+             description="Same data as v2.1, the split into train/dev/test is more challenging.",
+         ),
+         datasets.BuilderConfig(
+             name="release_v3.0_en", description="WebNLG+ data used in the WebNLG challenge 2020. English."
+         ),
+         datasets.BuilderConfig(
+             name="release_v3.0_ru", description="WebNLG+ data used in the WebNLG challenge 2020. Russian."
+         ),
+     ]
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "category": datasets.Value("string"),
+                 "size": datasets.Value("int32"),
+                 "eid": datasets.Value("string"),
+                 "original_triple_sets": datasets.Sequence(
+                     {"otriple_set": datasets.Sequence(datasets.Value("string"))}
+                 ),
+                 "modified_triple_sets": datasets.Sequence(
+                     {"mtriple_set": datasets.Sequence(datasets.Value("string"))}
+                 ),
+                 "shape": datasets.Value("string"),
+                 "shape_type": datasets.Value("string"),
+                 "lex": datasets.Sequence(
+                     {
+                         "comment": datasets.Value("string"),
+                         "lid": datasets.Value("string"),
+                         "text": datasets.Value("string"),
+                     }
+                 ),
+                 "2017_test_category": datasets.Value("string"),
+             }
+         )
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             # This defines the different columns of the dataset and their types
+             features=features,  # Here we define them above because they are different between the two configurations
+             # If there's a common (input, target) tuple from the features,
+             # specify them here. They'll be used if as_supervised=True in
+             # builder.as_dataset.
+             supervised_keys=None,
+             # Homepage of the dataset for documentation
+             homepage="https://webnlg-challenge.loria.fr/",
+             citation=_CITATION,
+         )
+
+ def _split_generators(self, dl_manager):
220
+ """Returns SplitGenerators."""
221
+ data_dir = dl_manager.download_and_extract(_URL)
222
+ return [
223
+ datasets.SplitGenerator(
224
+ name=spl,
225
+ # These kwargs will be passed to _generate_examples
226
+ gen_kwargs={
227
+ "filedirs": [
228
+ os.path.join(data_dir, "webnlg-dataset-master", dir_suf) for dir_suf in dir_suffix_list
229
+ ],
230
+ },
231
+ )
232
+ for spl, dir_suffix_list in _FILE_PATHS[self.config.name].items()
233
+ ]
234
+
235
+ def _generate_examples(self, filedirs):
236
+ """ Yields examples. """
237
+
238
+ id_ = 0
239
+ for xml_location in filedirs:
240
+ for xml_file in sorted(glob(pjoin(xml_location, "*.xml"))):
241
+ test_cat = xml_file.split("/")[-1][:-4] if "webnlg_challenge_2017/test" in xml_file else ""
242
+ for exple_dict in xml_file_to_examples(xml_file):
243
+ exple_dict["2017_test_category"] = test_cat
244
+ id_ += 1
245
+ yield id_, exple_dict