Dataset: web_nlg
Sub-tasks: rdf-to-text
License: cc-by-nc-sa-4.0

Committed by system (HF staff)
Commit 3a2f7a7
1 Parent(s): 2cb637c

Update files from the datasets library (from 1.3.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.3.0

Files changed (3):
  1. README.md +61 -16
  2. dataset_infos.json +1 -1
  3. web_nlg.py +29 -5
README.md CHANGED
@@ -95,6 +95,7 @@ task_ids:
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
+ - [Contributions](#contributions)
 
  ## Dataset Description
 
@@ -157,70 +158,110 @@ A typical example contains the original RDF triples in the set, a modified versi
  ### Data Fields
 
  The following fields can be found in the instances:
- - `category`: the category of the DBpedia entites present in the RDF triples.
+ - `category`: the category of the DBpedia entities present in the RDF triples.
  - `eid`: an example ID, only unique per split per category.
  - `size`: number of RDF triples in the set.
- - `shape`: (for v3 only) Each set of RDF-triples is a tree, which is characterised by its shape and shape type. `shape` is a string representation of the tree with nested parentheses where X is a node (see [Newick tree format](https://en.wikipedia.org/wiki/Newick_format))
- - `shape_type`: (for v3 only) is a type of the tree shape, which can be: `chain` (the object of one triple is the subject of the other); `sibling` (triples with a shared subject); `mixed` (both chain and sibling types present).
- - `2017_test_category`: (for `webnlg_challenge_2017`) tells whether the set of RDF triples was present in the training set or not.
+ - `shape`: (since v2) Each set of RDF triples is a tree, which is characterised by its shape and shape type. `shape` is a string representation of the tree with nested parentheses where X is a node (see [Newick tree format](https://en.wikipedia.org/wiki/Newick_format)).
+ - `shape_type`: (since v2) the type of the tree shape, which can be: `chain` (the object of one triple is the subject of the other); `sibling` (triples with a shared subject); `mixed` (both chain and sibling types present).
+ - `test_category`: (for `webnlg_challenge_2017` and `v3`) tells whether the set of RDF triples was present in the training set or not. Several splits of the test set are available: with and without references, and for RDF-to-text generation / for semantic parsing.
  - `lex`: the lexicalizations, with:
    - `text`: the text to be predicted.
-   - `lid`: a lexicalizayion ID, unique per example.
+   - `lid`: a lexicalization ID, unique per example.
    - `comment`: the lexicalizations were rated by crowd workers as either `good` or `bad`
+   - `lang`: (for `release_v3.0_ru`) the language of the lexicalization, since original English texts were kept in the Russian version.
+ 
+ Russian data has additional optional fields compared to English:
+ - `dbpedialinks`: RDF triples extracted from DBpedia between English and Russian entities by means of the property `sameAs`.
+ - `links`: RDF triples created manually for some entities to serve as pointers to translators. There are two types of them:
+   * with `sameAs` (`Spaniards | sameAs | испанцы`)
+   * with `includes` (`Tomatoes, guanciale, cheese, olive oil | includes | гуанчиале`). Those were mostly created for string literals to translate some parts of them.
 
  ### Data Splits
 
- [More Information Needed]
+ For `v3.0` releases:
+ 
+ | English (v3.0)  | Train  | Dev   | Test (data-to-text) |
+ |-----------------|--------|-------|---------------------|
+ | **triple sets** | 13,211 | 1,667 | 1,779               |
+ | **texts**       | 35,426 | 4,464 | 5,150               |
+ | **properties**  | 372    | 290   | 220                 |
+ 
+ | Russian (v3.0)  | Train  | Dev   | Test (data-to-text) |
+ |-----------------|--------|-------|---------------------|
+ | **triple sets** | 5,573  | 790   | 1,102               |
+ | **texts**       | 14,239 | 2,026 | 2,780               |
+ | **properties**  | 226    | 115   | 192                 |
 
  ## Dataset Creation
 
  ### Curation Rationale
 
- [More Information Needed]
+ The WebNLG dataset was created to promote the development _(i)_ of RDF verbalisers and _(ii)_ of microplanners able to handle a wide range of linguistic constructions. The dataset aims at covering knowledge in different domains ("categories"). The same properties and entities can appear in several categories.
 
  ### Source Data
 
+ The data was compiled from raw DBpedia triples. [This paper](https://www.aclweb.org/anthology/C16-1141/) explains how the triples were selected.
+ 
  #### Initial Data Collection and Normalization
 
- [More Information Needed]
+ Initial triples extracted from DBpedia were modified in several ways; see the [official documentation](https://webnlg-challenge.loria.fr/docs/) for the most frequent changes. An original tripleset and a modified tripleset usually stand in a one-to-one mapping, but there are many-to-one cases where several original triplesets are mapped to one modified tripleset.
+ 
+ Entities that served as roots of RDF trees are listed in [this file](https://gitlab.com/shimorina/webnlg-dataset/-/blob/master/supplementary/entities_dict.json).
+ 
+ The English WebNLG 2020 dataset (v3.0) for training comprises data-text pairs for 16 distinct DBpedia categories:
+ - The 10 seen categories used in the 2017 version: Airport, Astronaut, Building, City, ComicsCharacter, Food, Monument, SportsTeam, University, and WrittenWork.
+ - The 5 unseen categories of 2017, which are now part of the seen data: Athlete, Artist, CelestialBody, MeanOfTransportation, Politician.
+ - 1 new category: Company.
+ 
+ The Russian dataset (v3.0) comprises data-text pairs for 9 distinct categories: Airport, Astronaut, Building, CelestialBody, ComicsCharacter, Food, Monument, SportsTeam, and University.
 
  #### Who are the source language producers?
 
- [More Information Needed]
+ There are no source texts; all textual material was compiled during the annotation process.
 
  ### Annotations
 
  #### Annotation process
 
- [More Information Needed]
+ Annotators were first asked to create sentences that verbalise single triples. In a second round, annotators were asked to combine single-triple sentences into sentences that cover two triples, and so on, up to seven triples. Quality checks were performed to ensure the quality of the annotations. See Section 3.3 in [the dataset paper](https://www.aclweb.org/anthology/P17-1017.pdf).
+ 
+ Russian data was translated from English with an MT system and then post-edited by crowdworkers. See Section 2.2 of [this paper](https://webnlg-challenge.loria.fr/files/2020.webnlg-papers.7.pdf).
 
  #### Who are the annotators?
 
- [More Information Needed]
+ All references were collected through crowdsourcing platforms (CrowdFlower/Figure 8 and Amazon Mechanical Turk). For Russian, post-editing was done using the Yandex.Toloka crowdsourcing platform.
 
  ### Personal and Sensitive Information
 
- [More Information Needed]
+ Neither the dataset as published nor the annotation process involves the collection or sharing of any kind of personal / demographic information.
 
  ## Considerations for Using the Data
 
  ### Social Impact of Dataset
 
- [More Information Needed]
+ We do not foresee any particular negative social impact from this dataset or task.
+ 
+ Positive outlooks: being able to generate good-quality text from RDF data would permit, e.g., making this data more accessible to lay users, enriching existing text with information drawn from knowledge bases such as DBpedia, or describing, comparing and relating entities present in these knowledge bases.
 
  ### Discussion of Biases
 
- [More Information Needed]
+ This dataset is created from DBpedia RDF triples, which naturally exhibit biases that have been found to exist in Wikipedia, such as some forms of gender bias.
+ 
+ The choice of [entities](https://gitlab.com/shimorina/webnlg-dataset/-/blob/master/supplementary/entities_dict.json), described by RDF trees, was not controlled. As such, they may contain gender biases; for instance, all the astronauts described by RDF triples are male. Hence, in texts, the pronouns _he/him/his_ occur more often. Similarly, entities can be related to Western culture more often than to other cultures.
 
  ### Other Known Limitations
 
- [More Information Needed]
+ The quality of the crowdsourced references is limited, in particular in terms of fluency/naturalness of the collected texts.
+ 
+ Russian data was machine-translated and then post-edited by crowdworkers, so some examples may still exhibit issues related to bad translations.
 
  ## Additional Information
 
  ### Dataset Curators
 
- [More Information Needed]
+ The principal curator of the dataset is Anastasia Shimorina (Université de Lorraine / LORIA, France). Throughout the WebNLG releases, several people contributed to their construction: Claire Gardent (CNRS / LORIA, France), Shashi Narayan (Google, UK), Laura Perez-Beltrachini (University of Edinburgh, UK), Elena Khasanova, and Thiago Castro Ferreira (Federal University of Minas Gerais, Brazil).
+ The dataset construction was funded by the French National Research Agency (ANR).
 
  ### Licensing Information
 
@@ -263,3 +304,7 @@ The dataset uses the `cc-by-nc-sa-4.0` license. The source DBpedia project uses
      url = "http://aclweb.org/anthology/W18-6543"
  }
  ```
+ 
+ ### Contributions
+ 
+ Thanks to [@Shimorina](https://github.com/Shimorina), [@yjernite](https://github.com/yjernite) for adding this dataset.
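For reference, a minimal sketch of loading one of these configurations with the `datasets` library and inspecting the fields from the card above. The config names are taken from `dataset_infos.json` below; this commit targets `datasets` 1.3.0, and newer library versions may additionally require `trust_remote_code=True` for script-based datasets such as this one.

```python
from datasets import load_dataset

# Config names as listed in dataset_infos.json: webnlg_challenge_2017, release_v1,
# release_v2, release_v2_constrained, release_v2.1, release_v2.1_constrained,
# release_v3.0_en, release_v3.0_ru.
ds = load_dataset("web_nlg", "release_v3.0_en")

example = ds["train"][0]
print(example["category"], example["eid"], example["size"])

# `modified_triple_sets` and `lex` are sequence features, so they arrive as
# dicts of parallel lists rather than lists of dicts.
print(example["modified_triple_sets"]["mtriple_set"][0])  # list of triple strings
print(example["lex"]["text"])                             # reference texts
print(example["lex"]["comment"])                          # good/bad ratings
```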
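The split statistics in the tables can be approximately re-derived from the loaded data. This sketch makes two assumptions not stated in the card: that modified triples use the `subject | property | object` format shown in the `links` examples above, and that only `train` and `dev` are counted directly (the `test` split bundles several test sets distinguished by `test_category`, so raw test counts will not match the data-to-text column).

```python
from datasets import load_dataset

ds = load_dataset("web_nlg", "release_v3.0_en")

for split in ("train", "dev"):
    n_sets = len(ds[split])  # one row per triple set
    n_texts = sum(len(ex["lex"]["text"]) for ex in ds[split])
    # Assumed triple format: "subject | property | object"
    props = {t.split(" | ")[1]
             for ex in ds[split]
             for t in ex["modified_triple_sets"]["mtriple_set"][0]}
    print(f"{split}: {n_sets} triple sets, {n_texts} texts, {len(props)} properties")
```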
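Since the `shape` field is only described informally in the card, here is a hypothetical helper showing how the Newick-style string could be interpreted; the example shapes are invented for illustration, not taken from the data.

```python
def shape_stats(shape: str):
    """Return (node count, max nesting depth) of a Newick-style shape
    string such as "(X (X) (X))", where each X is a node."""
    nodes = shape.count("X")
    depth = max_depth = 0
    for ch in shape:
        if ch == "(":
            depth += 1
            max_depth = max(max_depth, depth)
        elif ch == ")":
            depth -= 1
    return nodes, max_depth

# A set of n triples forms a tree with n + 1 nodes: a chain nests each
# node under the previous one, while siblings share a single parent.
print(shape_stats("(X (X (X (X))))"))  # chain of 3 triples    -> (4, 4)
print(shape_stats("(X (X) (X) (X))"))  # 3 sibling triples     -> (4, 2)
```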
dataset_infos.json CHANGED
@@ -1 +1 @@
- {"webnlg_challenge_2017": {"description": "The WebNLG challenge consists in mapping data to text. The training data consists\nof Data/Text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation\nof these triples. For instance, given the 3 DBpedia triples shown in (a), the aim is to generate a text such as (b).\n\na. (John_E_Blaha birthDate 1942_08_26) (John_E_Blaha birthPlace San_Antonio) (John_E_Blaha occupation Fighter_pilot)\nb. John E Blaha, born in San Antonio on 1942-08-26, worked as a fighter pilot\n\nAs the example illustrates, the task involves specific NLG subtasks such as sentence segmentation\n(how to chunk the input data into sentences), lexicalisation (of the DBpedia properties),\naggregation (how to avoid repetitions) and surface realisation\n(how to build a syntactically correct and natural sounding text).\n", "citation": "@inproceedings{web_nlg,\n author = {Claire Gardent and\n Anastasia Shimorina and\n Shashi Narayan and\n Laura Perez{-}Beltrachini},\n editor = {Regina Barzilay and\n Min{-}Yen Kan},\n title = {Creating Training Corpora for {NLG} Micro-Planners},\n booktitle = {Proceedings of the 55th Annual Meeting of the\n Association for Computational Linguistics,\n {ACL} 2017, Vancouver, Canada, July 30 - August 4,\n Volume 1: Long Papers},\n pages = {179--188},\n publisher = {Association for Computational Linguistics},\n year = {2017},\n url = {https://doi.org/10.18653/v1/P17-1017},\n doi = {10.18653/v1/P17-1017}\n}\n", "homepage": "https://webnlg-challenge.loria.fr/", "license": "", "features": {"category": {"dtype": "string", "id": null, "_type": "Value"}, "size": {"dtype": "int32", "id": null, "_type": "Value"}, "eid": {"dtype": "string", "id": null, "_type": "Value"}, "original_triple_sets": {"feature": {"otriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "modified_triple_sets": {"feature": {"mtriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "shape": {"dtype": "string", "id": null, "_type": "Value"}, "shape_type": {"dtype": "string", "id": null, "_type": "Value"}, "lex": {"feature": {"comment": {"dtype": "string", "id": null, "_type": "Value"}, "lid": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "2017_test_category": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "web_nlg", "config_name": "webnlg_challenge_2017", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 5439100, "num_examples": 6940, "dataset_name": "web_nlg"}, "dev": {"name": "dev", "num_bytes": 687093, "num_examples": 872, "dataset_name": "web_nlg"}, "test": {"name": "test", "num_bytes": 3037685, "num_examples": 4615, "dataset_name": "web_nlg"}}, "download_checksums": {"https://gitlab.com/shimorina/webnlg-dataset/-/archive/master/webnlg-dataset-master.zip": {"num_bytes": 25390588, "checksum": "287290957f7352c9e3b64cdc5957faba8ed5d835f34f2106ba5666a77fdb1cfb"}}, "download_size": 25390588, "post_processing_size": null, "dataset_size": 9163878, "size_in_bytes": 34554466}, "release_v1": {"description": "The WebNLG challenge consists in mapping data 
to text. The training data consists\nof Data/Text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation\nof these triples. For instance, given the 3 DBpedia triples shown in (a), the aim is to generate a text such as (b).\n\na. (John_E_Blaha birthDate 1942_08_26) (John_E_Blaha birthPlace San_Antonio) (John_E_Blaha occupation Fighter_pilot)\nb. John E Blaha, born in San Antonio on 1942-08-26, worked as a fighter pilot\n\nAs the example illustrates, the task involves specific NLG subtasks such as sentence segmentation\n(how to chunk the input data into sentences), lexicalisation (of the DBpedia properties),\naggregation (how to avoid repetitions) and surface realisation\n(how to build a syntactically correct and natural sounding text).\n", "citation": "@inproceedings{web_nlg,\n author = {Claire Gardent and\n Anastasia Shimorina and\n Shashi Narayan and\n Laura Perez{-}Beltrachini},\n editor = {Regina Barzilay and\n Min{-}Yen Kan},\n title = {Creating Training Corpora for {NLG} Micro-Planners},\n booktitle = {Proceedings of the 55th Annual Meeting of the\n Association for Computational Linguistics,\n {ACL} 2017, Vancouver, Canada, July 30 - August 4,\n Volume 1: Long Papers},\n pages = {179--188},\n publisher = {Association for Computational Linguistics},\n year = {2017},\n url = {https://doi.org/10.18653/v1/P17-1017},\n doi = {10.18653/v1/P17-1017}\n}\n", "homepage": "https://webnlg-challenge.loria.fr/", "license": "", "features": {"category": {"dtype": "string", "id": null, "_type": "Value"}, "size": {"dtype": "int32", "id": null, "_type": "Value"}, "eid": {"dtype": "string", "id": null, "_type": "Value"}, "original_triple_sets": {"feature": {"otriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "modified_triple_sets": {"feature": {"mtriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "shape": {"dtype": "string", "id": null, "_type": "Value"}, "shape_type": {"dtype": "string", "id": null, "_type": "Value"}, "lex": {"feature": {"comment": {"dtype": "string", "id": null, "_type": "Value"}, "lid": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "2017_test_category": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "web_nlg", "config_name": "release_v1", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"full": {"name": "full", "num_bytes": 11361516, "num_examples": 14237, "dataset_name": "web_nlg"}}, "download_checksums": {"https://gitlab.com/shimorina/webnlg-dataset/-/archive/master/webnlg-dataset-master.zip": {"num_bytes": 25390588, "checksum": "287290957f7352c9e3b64cdc5957faba8ed5d835f34f2106ba5666a77fdb1cfb"}}, "download_size": 25390588, "post_processing_size": null, "dataset_size": 11361516, "size_in_bytes": 36752104}, "release_v2": {"description": "The WebNLG challenge consists in mapping data to text. The training data consists\nof Data/Text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation\nof these triples. For instance, given the 3 DBpedia triples shown in (a), the aim is to generate a text such as (b).\n\na. 
(John_E_Blaha birthDate 1942_08_26) (John_E_Blaha birthPlace San_Antonio) (John_E_Blaha occupation Fighter_pilot)\nb. John E Blaha, born in San Antonio on 1942-08-26, worked as a fighter pilot\n\nAs the example illustrates, the task involves specific NLG subtasks such as sentence segmentation\n(how to chunk the input data into sentences), lexicalisation (of the DBpedia properties),\naggregation (how to avoid repetitions) and surface realisation\n(how to build a syntactically correct and natural sounding text).\n", "citation": "@inproceedings{web_nlg,\n author = {Claire Gardent and\n Anastasia Shimorina and\n Shashi Narayan and\n Laura Perez{-}Beltrachini},\n editor = {Regina Barzilay and\n Min{-}Yen Kan},\n title = {Creating Training Corpora for {NLG} Micro-Planners},\n booktitle = {Proceedings of the 55th Annual Meeting of the\n Association for Computational Linguistics,\n {ACL} 2017, Vancouver, Canada, July 30 - August 4,\n Volume 1: Long Papers},\n pages = {179--188},\n publisher = {Association for Computational Linguistics},\n year = {2017},\n url = {https://doi.org/10.18653/v1/P17-1017},\n doi = {10.18653/v1/P17-1017}\n}\n", "homepage": "https://webnlg-challenge.loria.fr/", "license": "", "features": {"category": {"dtype": "string", "id": null, "_type": "Value"}, "size": {"dtype": "int32", "id": null, "_type": "Value"}, "eid": {"dtype": "string", "id": null, "_type": "Value"}, "original_triple_sets": {"feature": {"otriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "modified_triple_sets": {"feature": {"mtriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "shape": {"dtype": "string", "id": null, "_type": "Value"}, "shape_type": {"dtype": "string", "id": null, "_type": "Value"}, "lex": {"feature": {"comment": {"dtype": "string", "id": null, "_type": "Value"}, "lid": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "2017_test_category": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "web_nlg", "config_name": "release_v2", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 10538445, "num_examples": 12876, "dataset_name": "web_nlg"}, "dev": {"name": "dev", "num_bytes": 1323317, "num_examples": 1619, "dataset_name": "web_nlg"}, "test": {"name": "test", "num_bytes": 1288814, "num_examples": 1600, "dataset_name": "web_nlg"}}, "download_checksums": {"https://gitlab.com/shimorina/webnlg-dataset/-/archive/master/webnlg-dataset-master.zip": {"num_bytes": 25390588, "checksum": "287290957f7352c9e3b64cdc5957faba8ed5d835f34f2106ba5666a77fdb1cfb"}}, "download_size": 25390588, "post_processing_size": null, "dataset_size": 13150576, "size_in_bytes": 38541164}, "release_v2_constrained": {"description": "The WebNLG challenge consists in mapping data to text. The training data consists\nof Data/Text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation\nof these triples. For instance, given the 3 DBpedia triples shown in (a), the aim is to generate a text such as (b).\n\na. 
(John_E_Blaha birthDate 1942_08_26) (John_E_Blaha birthPlace San_Antonio) (John_E_Blaha occupation Fighter_pilot)\nb. John E Blaha, born in San Antonio on 1942-08-26, worked as a fighter pilot\n\nAs the example illustrates, the task involves specific NLG subtasks such as sentence segmentation\n(how to chunk the input data into sentences), lexicalisation (of the DBpedia properties),\naggregation (how to avoid repetitions) and surface realisation\n(how to build a syntactically correct and natural sounding text).\n", "citation": "@inproceedings{web_nlg,\n author = {Claire Gardent and\n Anastasia Shimorina and\n Shashi Narayan and\n Laura Perez{-}Beltrachini},\n editor = {Regina Barzilay and\n Min{-}Yen Kan},\n title = {Creating Training Corpora for {NLG} Micro-Planners},\n booktitle = {Proceedings of the 55th Annual Meeting of the\n Association for Computational Linguistics,\n {ACL} 2017, Vancouver, Canada, July 30 - August 4,\n Volume 1: Long Papers},\n pages = {179--188},\n publisher = {Association for Computational Linguistics},\n year = {2017},\n url = {https://doi.org/10.18653/v1/P17-1017},\n doi = {10.18653/v1/P17-1017}\n}\n", "homepage": "https://webnlg-challenge.loria.fr/", "license": "", "features": {"category": {"dtype": "string", "id": null, "_type": "Value"}, "size": {"dtype": "int32", "id": null, "_type": "Value"}, "eid": {"dtype": "string", "id": null, "_type": "Value"}, "original_triple_sets": {"feature": {"otriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "modified_triple_sets": {"feature": {"mtriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "shape": {"dtype": "string", "id": null, "_type": "Value"}, "shape_type": {"dtype": "string", "id": null, "_type": "Value"}, "lex": {"feature": {"comment": {"dtype": "string", "id": null, "_type": "Value"}, "lid": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "2017_test_category": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "web_nlg", "config_name": "release_v2_constrained", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 10560502, "num_examples": 12895, "dataset_name": "web_nlg"}, "dev": {"name": "dev", "num_bytes": 1385570, "num_examples": 1594, "dataset_name": "web_nlg"}, "test": {"name": "test", "num_bytes": 1207294, "num_examples": 1606, "dataset_name": "web_nlg"}}, "download_checksums": {"https://gitlab.com/shimorina/webnlg-dataset/-/archive/master/webnlg-dataset-master.zip": {"num_bytes": 25390588, "checksum": "287290957f7352c9e3b64cdc5957faba8ed5d835f34f2106ba5666a77fdb1cfb"}}, "download_size": 25390588, "post_processing_size": null, "dataset_size": 13153366, "size_in_bytes": 38543954}, "release_v2.1": {"description": "The WebNLG challenge consists in mapping data to text. The training data consists\nof Data/Text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation\nof these triples. For instance, given the 3 DBpedia triples shown in (a), the aim is to generate a text such as (b).\n\na. 
(John_E_Blaha birthDate 1942_08_26) (John_E_Blaha birthPlace San_Antonio) (John_E_Blaha occupation Fighter_pilot)\nb. John E Blaha, born in San Antonio on 1942-08-26, worked as a fighter pilot\n\nAs the example illustrates, the task involves specific NLG subtasks such as sentence segmentation\n(how to chunk the input data into sentences), lexicalisation (of the DBpedia properties),\naggregation (how to avoid repetitions) and surface realisation\n(how to build a syntactically correct and natural sounding text).\n", "citation": "@inproceedings{web_nlg,\n author = {Claire Gardent and\n Anastasia Shimorina and\n Shashi Narayan and\n Laura Perez{-}Beltrachini},\n editor = {Regina Barzilay and\n Min{-}Yen Kan},\n title = {Creating Training Corpora for {NLG} Micro-Planners},\n booktitle = {Proceedings of the 55th Annual Meeting of the\n Association for Computational Linguistics,\n {ACL} 2017, Vancouver, Canada, July 30 - August 4,\n Volume 1: Long Papers},\n pages = {179--188},\n publisher = {Association for Computational Linguistics},\n year = {2017},\n url = {https://doi.org/10.18653/v1/P17-1017},\n doi = {10.18653/v1/P17-1017}\n}\n", "homepage": "https://webnlg-challenge.loria.fr/", "license": "", "features": {"category": {"dtype": "string", "id": null, "_type": "Value"}, "size": {"dtype": "int32", "id": null, "_type": "Value"}, "eid": {"dtype": "string", "id": null, "_type": "Value"}, "original_triple_sets": {"feature": {"otriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "modified_triple_sets": {"feature": {"mtriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "shape": {"dtype": "string", "id": null, "_type": "Value"}, "shape_type": {"dtype": "string", "id": null, "_type": "Value"}, "lex": {"feature": {"comment": {"dtype": "string", "id": null, "_type": "Value"}, "lid": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "2017_test_category": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "web_nlg", "config_name": "release_v2.1", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 10556881, "num_examples": 12876, "dataset_name": "web_nlg"}, "dev": {"name": "dev", "num_bytes": 1325368, "num_examples": 1619, "dataset_name": "web_nlg"}, "test": {"name": "test", "num_bytes": 1289748, "num_examples": 1600, "dataset_name": "web_nlg"}}, "download_checksums": {"https://gitlab.com/shimorina/webnlg-dataset/-/archive/master/webnlg-dataset-master.zip": {"num_bytes": 25390588, "checksum": "287290957f7352c9e3b64cdc5957faba8ed5d835f34f2106ba5666a77fdb1cfb"}}, "download_size": 25390588, "post_processing_size": null, "dataset_size": 13171997, "size_in_bytes": 38562585}, "release_v2.1_constrained": {"description": "The WebNLG challenge consists in mapping data to text. The training data consists\nof Data/Text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation\nof these triples. For instance, given the 3 DBpedia triples shown in (a), the aim is to generate a text such as (b).\n\na. 
(John_E_Blaha birthDate 1942_08_26) (John_E_Blaha birthPlace San_Antonio) (John_E_Blaha occupation Fighter_pilot)\nb. John E Blaha, born in San Antonio on 1942-08-26, worked as a fighter pilot\n\nAs the example illustrates, the task involves specific NLG subtasks such as sentence segmentation\n(how to chunk the input data into sentences), lexicalisation (of the DBpedia properties),\naggregation (how to avoid repetitions) and surface realisation\n(how to build a syntactically correct and natural sounding text).\n", "citation": "@inproceedings{web_nlg,\n author = {Claire Gardent and\n Anastasia Shimorina and\n Shashi Narayan and\n Laura Perez{-}Beltrachini},\n editor = {Regina Barzilay and\n Min{-}Yen Kan},\n title = {Creating Training Corpora for {NLG} Micro-Planners},\n booktitle = {Proceedings of the 55th Annual Meeting of the\n Association for Computational Linguistics,\n {ACL} 2017, Vancouver, Canada, July 30 - August 4,\n Volume 1: Long Papers},\n pages = {179--188},\n publisher = {Association for Computational Linguistics},\n year = {2017},\n url = {https://doi.org/10.18653/v1/P17-1017},\n doi = {10.18653/v1/P17-1017}\n}\n", "homepage": "https://webnlg-challenge.loria.fr/", "license": "", "features": {"category": {"dtype": "string", "id": null, "_type": "Value"}, "size": {"dtype": "int32", "id": null, "_type": "Value"}, "eid": {"dtype": "string", "id": null, "_type": "Value"}, "original_triple_sets": {"feature": {"otriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "modified_triple_sets": {"feature": {"mtriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "shape": {"dtype": "string", "id": null, "_type": "Value"}, "shape_type": {"dtype": "string", "id": null, "_type": "Value"}, "lex": {"feature": {"comment": {"dtype": "string", "id": null, "_type": "Value"}, "lid": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "2017_test_category": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "web_nlg", "config_name": "release_v2.1_constrained", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 10747616, "num_examples": 12895, "dataset_name": "web_nlg"}, "dev": {"name": "dev", "num_bytes": 1247988, "num_examples": 1594, "dataset_name": "web_nlg"}, "test": {"name": "test", "num_bytes": 1176393, "num_examples": 1606, "dataset_name": "web_nlg"}}, "download_checksums": {"https://gitlab.com/shimorina/webnlg-dataset/-/archive/master/webnlg-dataset-master.zip": {"num_bytes": 25390588, "checksum": "287290957f7352c9e3b64cdc5957faba8ed5d835f34f2106ba5666a77fdb1cfb"}}, "download_size": 25390588, "post_processing_size": null, "dataset_size": 13171997, "size_in_bytes": 38562585}, "release_v3.0_en": {"description": "The WebNLG challenge consists in mapping data to text. The training data consists\nof Data/Text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation\nof these triples. For instance, given the 3 DBpedia triples shown in (a), the aim is to generate a text such as (b).\n\na. 
(John_E_Blaha birthDate 1942_08_26) (John_E_Blaha birthPlace San_Antonio) (John_E_Blaha occupation Fighter_pilot)\nb. John E Blaha, born in San Antonio on 1942-08-26, worked as a fighter pilot\n\nAs the example illustrates, the task involves specific NLG subtasks such as sentence segmentation\n(how to chunk the input data into sentences), lexicalisation (of the DBpedia properties),\naggregation (how to avoid repetitions) and surface realisation\n(how to build a syntactically correct and natural sounding text).\n", "citation": "@inproceedings{web_nlg,\n author = {Claire Gardent and\n Anastasia Shimorina and\n Shashi Narayan and\n Laura Perez{-}Beltrachini},\n editor = {Regina Barzilay and\n Min{-}Yen Kan},\n title = {Creating Training Corpora for {NLG} Micro-Planners},\n booktitle = {Proceedings of the 55th Annual Meeting of the\n Association for Computational Linguistics,\n {ACL} 2017, Vancouver, Canada, July 30 - August 4,\n Volume 1: Long Papers},\n pages = {179--188},\n publisher = {Association for Computational Linguistics},\n year = {2017},\n url = {https://doi.org/10.18653/v1/P17-1017},\n doi = {10.18653/v1/P17-1017}\n}\n", "homepage": "https://webnlg-challenge.loria.fr/", "license": "", "features": {"category": {"dtype": "string", "id": null, "_type": "Value"}, "size": {"dtype": "int32", "id": null, "_type": "Value"}, "eid": {"dtype": "string", "id": null, "_type": "Value"}, "original_triple_sets": {"feature": {"otriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "modified_triple_sets": {"feature": {"mtriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "shape": {"dtype": "string", "id": null, "_type": "Value"}, "shape_type": {"dtype": "string", "id": null, "_type": "Value"}, "lex": {"feature": {"comment": {"dtype": "string", "id": null, "_type": "Value"}, "lid": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "2017_test_category": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "web_nlg", "config_name": "release_v3.0_en", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 10784576, "num_examples": 13211, "dataset_name": "web_nlg"}, "dev": {"name": "dev", "num_bytes": 1356359, "num_examples": 1667, "dataset_name": "web_nlg"}, "test": {"name": "test", "num_bytes": 25813556, "num_examples": 39991, "dataset_name": "web_nlg"}}, "download_checksums": {"https://gitlab.com/shimorina/webnlg-dataset/-/archive/master/webnlg-dataset-master.zip": {"num_bytes": 25390588, "checksum": "287290957f7352c9e3b64cdc5957faba8ed5d835f34f2106ba5666a77fdb1cfb"}}, "download_size": 25390588, "post_processing_size": null, "dataset_size": 37954491, "size_in_bytes": 63345079}, "release_v3.0_ru": {"description": "The WebNLG challenge consists in mapping data to text. The training data consists\nof Data/Text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation\nof these triples. For instance, given the 3 DBpedia triples shown in (a), the aim is to generate a text such as (b).\n\na. 
(John_E_Blaha birthDate 1942_08_26) (John_E_Blaha birthPlace San_Antonio) (John_E_Blaha occupation Fighter_pilot)\nb. John E Blaha, born in San Antonio on 1942-08-26, worked as a fighter pilot\n\nAs the example illustrates, the task involves specific NLG subtasks such as sentence segmentation\n(how to chunk the input data into sentences), lexicalisation (of the DBpedia properties),\naggregation (how to avoid repetitions) and surface realisation\n(how to build a syntactically correct and natural sounding text).\n", "citation": "@inproceedings{web_nlg,\n author = {Claire Gardent and\n Anastasia Shimorina and\n Shashi Narayan and\n Laura Perez{-}Beltrachini},\n editor = {Regina Barzilay and\n Min{-}Yen Kan},\n title = {Creating Training Corpora for {NLG} Micro-Planners},\n booktitle = {Proceedings of the 55th Annual Meeting of the\n Association for Computational Linguistics,\n {ACL} 2017, Vancouver, Canada, July 30 - August 4,\n Volume 1: Long Papers},\n pages = {179--188},\n publisher = {Association for Computational Linguistics},\n year = {2017},\n url = {https://doi.org/10.18653/v1/P17-1017},\n doi = {10.18653/v1/P17-1017}\n}\n", "homepage": "https://webnlg-challenge.loria.fr/", "license": "", "features": {"category": {"dtype": "string", "id": null, "_type": "Value"}, "size": {"dtype": "int32", "id": null, "_type": "Value"}, "eid": {"dtype": "string", "id": null, "_type": "Value"}, "original_triple_sets": {"feature": {"otriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "modified_triple_sets": {"feature": {"mtriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "shape": {"dtype": "string", "id": null, "_type": "Value"}, "shape_type": {"dtype": "string", "id": null, "_type": "Value"}, "lex": {"feature": {"comment": {"dtype": "string", "id": null, "_type": "Value"}, "lid": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "2017_test_category": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "web_nlg", "config_name": "release_v3.0_ru", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 7972852, "num_examples": 5573, "dataset_name": "web_nlg"}, "dev": {"name": "dev", "num_bytes": 1097883, "num_examples": 790, "dataset_name": "web_nlg"}, "test": {"name": "test", "num_bytes": 18023181, "num_examples": 23870, "dataset_name": "web_nlg"}}, "download_checksums": {"https://gitlab.com/shimorina/webnlg-dataset/-/archive/master/webnlg-dataset-master.zip": {"num_bytes": 25390588, "checksum": "287290957f7352c9e3b64cdc5957faba8ed5d835f34f2106ba5666a77fdb1cfb"}}, "download_size": 25390588, "post_processing_size": null, "dataset_size": 27093916, "size_in_bytes": 52484504}}
 
+ {"webnlg_challenge_2017": {"description": "The WebNLG challenge consists in mapping data to text. The training data consists\nof Data/Text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation\nof these triples. For instance, given the 3 DBpedia triples shown in (a), the aim is to generate a text such as (b).\n\na. (John_E_Blaha birthDate 1942_08_26) (John_E_Blaha birthPlace San_Antonio) (John_E_Blaha occupation Fighter_pilot)\nb. John E Blaha, born in San Antonio on 1942-08-26, worked as a fighter pilot\n\nAs the example illustrates, the task involves specific NLG subtasks such as sentence segmentation\n(how to chunk the input data into sentences), lexicalisation (of the DBpedia properties),\naggregation (how to avoid repetitions) and surface realisation\n(how to build a syntactically correct and natural sounding text).\n", "citation": "@inproceedings{web_nlg,\n author = {Claire Gardent and\n Anastasia Shimorina and\n Shashi Narayan and\n Laura Perez{-}Beltrachini},\n editor = {Regina Barzilay and\n Min{-}Yen Kan},\n title = {Creating Training Corpora for {NLG} Micro-Planners},\n booktitle = {Proceedings of the 55th Annual Meeting of the\n Association for Computational Linguistics,\n {ACL} 2017, Vancouver, Canada, July 30 - August 4,\n Volume 1: Long Papers},\n pages = {179--188},\n publisher = {Association for Computational Linguistics},\n year = {2017},\n url = {https://doi.org/10.18653/v1/P17-1017},\n doi = {10.18653/v1/P17-1017}\n}\n", "homepage": "https://webnlg-challenge.loria.fr/", "license": "", "features": {"category": {"dtype": "string", "id": null, "_type": "Value"}, "size": {"dtype": "int32", "id": null, "_type": "Value"}, "eid": {"dtype": "string", "id": null, "_type": "Value"}, "original_triple_sets": {"feature": {"otriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "modified_triple_sets": {"feature": {"mtriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "shape": {"dtype": "string", "id": null, "_type": "Value"}, "shape_type": {"dtype": "string", "id": null, "_type": "Value"}, "lex": {"feature": {"comment": {"dtype": "string", "id": null, "_type": "Value"}, "lid": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "lang": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "test_category": {"dtype": "string", "id": null, "_type": "Value"}, "dbpedia_links": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "links": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "web_nlg", "config_name": "webnlg_challenge_2017", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 5594812, "num_examples": 6940, "dataset_name": "web_nlg"}, "dev": {"name": "dev", "num_bytes": 706653, "num_examples": 872, "dataset_name": "web_nlg"}, "test": {"name": "test", "num_bytes": 3122533, "num_examples": 4615, "dataset_name": "web_nlg"}}, "download_checksums": 
{"https://gitlab.com/shimorina/webnlg-dataset/-/archive/master/webnlg-dataset-master.zip": {"num_bytes": 25390588, "checksum": "287290957f7352c9e3b64cdc5957faba8ed5d835f34f2106ba5666a77fdb1cfb"}}, "download_size": 25390588, "post_processing_size": null, "dataset_size": 9423998, "size_in_bytes": 34814586}, "release_v1": {"description": "The WebNLG challenge consists in mapping data to text. The training data consists\nof Data/Text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation\nof these triples. For instance, given the 3 DBpedia triples shown in (a), the aim is to generate a text such as (b).\n\na. (John_E_Blaha birthDate 1942_08_26) (John_E_Blaha birthPlace San_Antonio) (John_E_Blaha occupation Fighter_pilot)\nb. John E Blaha, born in San Antonio on 1942-08-26, worked as a fighter pilot\n\nAs the example illustrates, the task involves specific NLG subtasks such as sentence segmentation\n(how to chunk the input data into sentences), lexicalisation (of the DBpedia properties),\naggregation (how to avoid repetitions) and surface realisation\n(how to build a syntactically correct and natural sounding text).\n", "citation": "@inproceedings{web_nlg,\n author = {Claire Gardent and\n Anastasia Shimorina and\n Shashi Narayan and\n Laura Perez{-}Beltrachini},\n editor = {Regina Barzilay and\n Min{-}Yen Kan},\n title = {Creating Training Corpora for {NLG} Micro-Planners},\n booktitle = {Proceedings of the 55th Annual Meeting of the\n Association for Computational Linguistics,\n {ACL} 2017, Vancouver, Canada, July 30 - August 4,\n Volume 1: Long Papers},\n pages = {179--188},\n publisher = {Association for Computational Linguistics},\n year = {2017},\n url = {https://doi.org/10.18653/v1/P17-1017},\n doi = {10.18653/v1/P17-1017}\n}\n", "homepage": "https://webnlg-challenge.loria.fr/", "license": "", "features": {"category": {"dtype": "string", "id": null, "_type": "Value"}, "size": {"dtype": "int32", "id": null, "_type": "Value"}, "eid": {"dtype": "string", "id": null, "_type": "Value"}, "original_triple_sets": {"feature": {"otriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "modified_triple_sets": {"feature": {"mtriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "shape": {"dtype": "string", "id": null, "_type": "Value"}, "shape_type": {"dtype": "string", "id": null, "_type": "Value"}, "lex": {"feature": {"comment": {"dtype": "string", "id": null, "_type": "Value"}, "lid": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "lang": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "test_category": {"dtype": "string", "id": null, "_type": "Value"}, "dbpedia_links": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "links": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "web_nlg", "config_name": "release_v1", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"full": {"name": "full", "num_bytes": 11684308, "num_examples": 14237, "dataset_name": "web_nlg"}}, 
"download_checksums": {"https://gitlab.com/shimorina/webnlg-dataset/-/archive/master/webnlg-dataset-master.zip": {"num_bytes": 25390588, "checksum": "287290957f7352c9e3b64cdc5957faba8ed5d835f34f2106ba5666a77fdb1cfb"}}, "download_size": 25390588, "post_processing_size": null, "dataset_size": 11684308, "size_in_bytes": 37074896}, "release_v2": {"description": "The WebNLG challenge consists in mapping data to text. The training data consists\nof Data/Text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation\nof these triples. For instance, given the 3 DBpedia triples shown in (a), the aim is to generate a text such as (b).\n\na. (John_E_Blaha birthDate 1942_08_26) (John_E_Blaha birthPlace San_Antonio) (John_E_Blaha occupation Fighter_pilot)\nb. John E Blaha, born in San Antonio on 1942-08-26, worked as a fighter pilot\n\nAs the example illustrates, the task involves specific NLG subtasks such as sentence segmentation\n(how to chunk the input data into sentences), lexicalisation (of the DBpedia properties),\naggregation (how to avoid repetitions) and surface realisation\n(how to build a syntactically correct and natural sounding text).\n", "citation": "@inproceedings{web_nlg,\n author = {Claire Gardent and\n Anastasia Shimorina and\n Shashi Narayan and\n Laura Perez{-}Beltrachini},\n editor = {Regina Barzilay and\n Min{-}Yen Kan},\n title = {Creating Training Corpora for {NLG} Micro-Planners},\n booktitle = {Proceedings of the 55th Annual Meeting of the\n Association for Computational Linguistics,\n {ACL} 2017, Vancouver, Canada, July 30 - August 4,\n Volume 1: Long Papers},\n pages = {179--188},\n publisher = {Association for Computational Linguistics},\n year = {2017},\n url = {https://doi.org/10.18653/v1/P17-1017},\n doi = {10.18653/v1/P17-1017}\n}\n", "homepage": "https://webnlg-challenge.loria.fr/", "license": "", "features": {"category": {"dtype": "string", "id": null, "_type": "Value"}, "size": {"dtype": "int32", "id": null, "_type": "Value"}, "eid": {"dtype": "string", "id": null, "_type": "Value"}, "original_triple_sets": {"feature": {"otriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "modified_triple_sets": {"feature": {"mtriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "shape": {"dtype": "string", "id": null, "_type": "Value"}, "shape_type": {"dtype": "string", "id": null, "_type": "Value"}, "lex": {"feature": {"comment": {"dtype": "string", "id": null, "_type": "Value"}, "lid": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "lang": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "test_category": {"dtype": "string", "id": null, "_type": "Value"}, "dbpedia_links": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "links": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "web_nlg", "config_name": "release_v2", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 10830413, "num_examples": 12876, "dataset_name": 
"web_nlg"}, "dev": {"name": "dev", "num_bytes": 1360033, "num_examples": 1619, "dataset_name": "web_nlg"}, "test": {"name": "test", "num_bytes": 1324934, "num_examples": 1600, "dataset_name": "web_nlg"}}, "download_checksums": {"https://gitlab.com/shimorina/webnlg-dataset/-/archive/master/webnlg-dataset-master.zip": {"num_bytes": 25390588, "checksum": "287290957f7352c9e3b64cdc5957faba8ed5d835f34f2106ba5666a77fdb1cfb"}}, "download_size": 25390588, "post_processing_size": null, "dataset_size": 13515380, "size_in_bytes": 38905968}, "release_v2_constrained": {"description": "The WebNLG challenge consists in mapping data to text. The training data consists\nof Data/Text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation\nof these triples. For instance, given the 3 DBpedia triples shown in (a), the aim is to generate a text such as (b).\n\na. (John_E_Blaha birthDate 1942_08_26) (John_E_Blaha birthPlace San_Antonio) (John_E_Blaha occupation Fighter_pilot)\nb. John E Blaha, born in San Antonio on 1942-08-26, worked as a fighter pilot\n\nAs the example illustrates, the task involves specific NLG subtasks such as sentence segmentation\n(how to chunk the input data into sentences), lexicalisation (of the DBpedia properties),\naggregation (how to avoid repetitions) and surface realisation\n(how to build a syntactically correct and natural sounding text).\n", "citation": "@inproceedings{web_nlg,\n author = {Claire Gardent and\n Anastasia Shimorina and\n Shashi Narayan and\n Laura Perez{-}Beltrachini},\n editor = {Regina Barzilay and\n Min{-}Yen Kan},\n title = {Creating Training Corpora for {NLG} Micro-Planners},\n booktitle = {Proceedings of the 55th Annual Meeting of the\n Association for Computational Linguistics,\n {ACL} 2017, Vancouver, Canada, July 30 - August 4,\n Volume 1: Long Papers},\n pages = {179--188},\n publisher = {Association for Computational Linguistics},\n year = {2017},\n url = {https://doi.org/10.18653/v1/P17-1017},\n doi = {10.18653/v1/P17-1017}\n}\n", "homepage": "https://webnlg-challenge.loria.fr/", "license": "", "features": {"category": {"dtype": "string", "id": null, "_type": "Value"}, "size": {"dtype": "int32", "id": null, "_type": "Value"}, "eid": {"dtype": "string", "id": null, "_type": "Value"}, "original_triple_sets": {"feature": {"otriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "modified_triple_sets": {"feature": {"mtriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "shape": {"dtype": "string", "id": null, "_type": "Value"}, "shape_type": {"dtype": "string", "id": null, "_type": "Value"}, "lex": {"feature": {"comment": {"dtype": "string", "id": null, "_type": "Value"}, "lid": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "lang": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "test_category": {"dtype": "string", "id": null, "_type": "Value"}, "dbpedia_links": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "links": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "web_nlg", 
"config_name": "release_v2_constrained", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 10853434, "num_examples": 12895, "dataset_name": "web_nlg"}, "dev": {"name": "dev", "num_bytes": 1421590, "num_examples": 1594, "dataset_name": "web_nlg"}, "test": {"name": "test", "num_bytes": 1243182, "num_examples": 1606, "dataset_name": "web_nlg"}}, "download_checksums": {"https://gitlab.com/shimorina/webnlg-dataset/-/archive/master/webnlg-dataset-master.zip": {"num_bytes": 25390588, "checksum": "287290957f7352c9e3b64cdc5957faba8ed5d835f34f2106ba5666a77fdb1cfb"}}, "download_size": 25390588, "post_processing_size": null, "dataset_size": 13518206, "size_in_bytes": 38908794}, "release_v2.1": {"description": "The WebNLG challenge consists in mapping data to text. The training data consists\nof Data/Text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation\nof these triples. For instance, given the 3 DBpedia triples shown in (a), the aim is to generate a text such as (b).\n\na. (John_E_Blaha birthDate 1942_08_26) (John_E_Blaha birthPlace San_Antonio) (John_E_Blaha occupation Fighter_pilot)\nb. John E Blaha, born in San Antonio on 1942-08-26, worked as a fighter pilot\n\nAs the example illustrates, the task involves specific NLG subtasks such as sentence segmentation\n(how to chunk the input data into sentences), lexicalisation (of the DBpedia properties),\naggregation (how to avoid repetitions) and surface realisation\n(how to build a syntactically correct and natural sounding text).\n", "citation": "@inproceedings{web_nlg,\n author = {Claire Gardent and\n Anastasia Shimorina and\n Shashi Narayan and\n Laura Perez{-}Beltrachini},\n editor = {Regina Barzilay and\n Min{-}Yen Kan},\n title = {Creating Training Corpora for {NLG} Micro-Planners},\n booktitle = {Proceedings of the 55th Annual Meeting of the\n Association for Computational Linguistics,\n {ACL} 2017, Vancouver, Canada, July 30 - August 4,\n Volume 1: Long Papers},\n pages = {179--188},\n publisher = {Association for Computational Linguistics},\n year = {2017},\n url = {https://doi.org/10.18653/v1/P17-1017},\n doi = {10.18653/v1/P17-1017}\n}\n", "homepage": "https://webnlg-challenge.loria.fr/", "license": "", "features": {"category": {"dtype": "string", "id": null, "_type": "Value"}, "size": {"dtype": "int32", "id": null, "_type": "Value"}, "eid": {"dtype": "string", "id": null, "_type": "Value"}, "original_triple_sets": {"feature": {"otriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "modified_triple_sets": {"feature": {"mtriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "shape": {"dtype": "string", "id": null, "_type": "Value"}, "shape_type": {"dtype": "string", "id": null, "_type": "Value"}, "lex": {"feature": {"comment": {"dtype": "string", "id": null, "_type": "Value"}, "lid": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "lang": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "test_category": {"dtype": "string", "id": null, "_type": "Value"}, "dbpedia_links": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, 
"_type": "Sequence"}, "links": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "web_nlg", "config_name": "release_v2.1", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 10848793, "num_examples": 12876, "dataset_name": "web_nlg"}, "dev": {"name": "dev", "num_bytes": 1362072, "num_examples": 1619, "dataset_name": "web_nlg"}, "test": {"name": "test", "num_bytes": 1325860, "num_examples": 1600, "dataset_name": "web_nlg"}}, "download_checksums": {"https://gitlab.com/shimorina/webnlg-dataset/-/archive/master/webnlg-dataset-master.zip": {"num_bytes": 25390588, "checksum": "287290957f7352c9e3b64cdc5957faba8ed5d835f34f2106ba5666a77fdb1cfb"}}, "download_size": 25390588, "post_processing_size": null, "dataset_size": 13536725, "size_in_bytes": 38927313}, "release_v2.1_constrained": {"description": "The WebNLG challenge consists in mapping data to text. The training data consists\nof Data/Text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation\nof these triples. For instance, given the 3 DBpedia triples shown in (a), the aim is to generate a text such as (b).\n\na. (John_E_Blaha birthDate 1942_08_26) (John_E_Blaha birthPlace San_Antonio) (John_E_Blaha occupation Fighter_pilot)\nb. John E Blaha, born in San Antonio on 1942-08-26, worked as a fighter pilot\n\nAs the example illustrates, the task involves specific NLG subtasks such as sentence segmentation\n(how to chunk the input data into sentences), lexicalisation (of the DBpedia properties),\naggregation (how to avoid repetitions) and surface realisation\n(how to build a syntactically correct and natural sounding text).\n", "citation": "@inproceedings{web_nlg,\n author = {Claire Gardent and\n Anastasia Shimorina and\n Shashi Narayan and\n Laura Perez{-}Beltrachini},\n editor = {Regina Barzilay and\n Min{-}Yen Kan},\n title = {Creating Training Corpora for {NLG} Micro-Planners},\n booktitle = {Proceedings of the 55th Annual Meeting of the\n Association for Computational Linguistics,\n {ACL} 2017, Vancouver, Canada, July 30 - August 4,\n Volume 1: Long Papers},\n pages = {179--188},\n publisher = {Association for Computational Linguistics},\n year = {2017},\n url = {https://doi.org/10.18653/v1/P17-1017},\n doi = {10.18653/v1/P17-1017}\n}\n", "homepage": "https://webnlg-challenge.loria.fr/", "license": "", "features": {"category": {"dtype": "string", "id": null, "_type": "Value"}, "size": {"dtype": "int32", "id": null, "_type": "Value"}, "eid": {"dtype": "string", "id": null, "_type": "Value"}, "original_triple_sets": {"feature": {"otriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "modified_triple_sets": {"feature": {"mtriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "shape": {"dtype": "string", "id": null, "_type": "Value"}, "shape_type": {"dtype": "string", "id": null, "_type": "Value"}, "lex": {"feature": {"comment": {"dtype": "string", "id": null, "_type": "Value"}, "lid": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "lang": {"dtype": "string", "id": null, "_type": "Value"}}, 
"length": -1, "id": null, "_type": "Sequence"}, "test_category": {"dtype": "string", "id": null, "_type": "Value"}, "dbpedia_links": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "links": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "web_nlg", "config_name": "release_v2.1_constrained", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 11040016, "num_examples": 12895, "dataset_name": "web_nlg"}, "dev": {"name": "dev", "num_bytes": 1284044, "num_examples": 1594, "dataset_name": "web_nlg"}, "test": {"name": "test", "num_bytes": 1212665, "num_examples": 1606, "dataset_name": "web_nlg"}}, "download_checksums": {"https://gitlab.com/shimorina/webnlg-dataset/-/archive/master/webnlg-dataset-master.zip": {"num_bytes": 25390588, "checksum": "287290957f7352c9e3b64cdc5957faba8ed5d835f34f2106ba5666a77fdb1cfb"}}, "download_size": 25390588, "post_processing_size": null, "dataset_size": 13536725, "size_in_bytes": 38927313}, "release_v3.0_en": {"description": "The WebNLG challenge consists in mapping data to text. The training data consists\nof Data/Text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation\nof these triples. For instance, given the 3 DBpedia triples shown in (a), the aim is to generate a text such as (b).\n\na. (John_E_Blaha birthDate 1942_08_26) (John_E_Blaha birthPlace San_Antonio) (John_E_Blaha occupation Fighter_pilot)\nb. John E Blaha, born in San Antonio on 1942-08-26, worked as a fighter pilot\n\nAs the example illustrates, the task involves specific NLG subtasks such as sentence segmentation\n(how to chunk the input data into sentences), lexicalisation (of the DBpedia properties),\naggregation (how to avoid repetitions) and surface realisation\n(how to build a syntactically correct and natural sounding text).\n", "citation": "@inproceedings{web_nlg,\n author = {Claire Gardent and\n Anastasia Shimorina and\n Shashi Narayan and\n Laura Perez{-}Beltrachini},\n editor = {Regina Barzilay and\n Min{-}Yen Kan},\n title = {Creating Training Corpora for {NLG} Micro-Planners},\n booktitle = {Proceedings of the 55th Annual Meeting of the\n Association for Computational Linguistics,\n {ACL} 2017, Vancouver, Canada, July 30 - August 4,\n Volume 1: Long Papers},\n pages = {179--188},\n publisher = {Association for Computational Linguistics},\n year = {2017},\n url = {https://doi.org/10.18653/v1/P17-1017},\n doi = {10.18653/v1/P17-1017}\n}\n", "homepage": "https://webnlg-challenge.loria.fr/", "license": "", "features": {"category": {"dtype": "string", "id": null, "_type": "Value"}, "size": {"dtype": "int32", "id": null, "_type": "Value"}, "eid": {"dtype": "string", "id": null, "_type": "Value"}, "original_triple_sets": {"feature": {"otriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "modified_triple_sets": {"feature": {"mtriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "shape": {"dtype": "string", "id": null, "_type": "Value"}, "shape_type": {"dtype": "string", "id": null, "_type": "Value"}, "lex": {"feature": {"comment": 
{"dtype": "string", "id": null, "_type": "Value"}, "lid": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "lang": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "test_category": {"dtype": "string", "id": null, "_type": "Value"}, "dbpedia_links": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "links": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "web_nlg", "config_name": "release_v3.0_en", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 11084860, "num_examples": 13211, "dataset_name": "web_nlg"}, "dev": {"name": "dev", "num_bytes": 1394243, "num_examples": 1667, "dataset_name": "web_nlg"}, "test": {"name": "test", "num_bytes": 4039282, "num_examples": 5713, "dataset_name": "web_nlg"}}, "download_checksums": {"https://gitlab.com/shimorina/webnlg-dataset/-/archive/master/webnlg-dataset-master.zip": {"num_bytes": 25390588, "checksum": "287290957f7352c9e3b64cdc5957faba8ed5d835f34f2106ba5666a77fdb1cfb"}}, "download_size": 25390588, "post_processing_size": null, "dataset_size": 16518385, "size_in_bytes": 41908973}, "release_v3.0_ru": {"description": "The WebNLG challenge consists in mapping data to text. The training data consists\nof Data/Text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation\nof these triples. For instance, given the 3 DBpedia triples shown in (a), the aim is to generate a text such as (b).\n\na. (John_E_Blaha birthDate 1942_08_26) (John_E_Blaha birthPlace San_Antonio) (John_E_Blaha occupation Fighter_pilot)\nb. 
John E Blaha, born in San Antonio on 1942-08-26, worked as a fighter pilot\n\nAs the example illustrates, the task involves specific NLG subtasks such as sentence segmentation\n(how to chunk the input data into sentences), lexicalisation (of the DBpedia properties),\naggregation (how to avoid repetitions) and surface realisation\n(how to build a syntactically correct and natural sounding text).\n", "citation": "@inproceedings{web_nlg,\n author = {Claire Gardent and\n Anastasia Shimorina and\n Shashi Narayan and\n Laura Perez{-}Beltrachini},\n editor = {Regina Barzilay and\n Min{-}Yen Kan},\n title = {Creating Training Corpora for {NLG} Micro-Planners},\n booktitle = {Proceedings of the 55th Annual Meeting of the\n Association for Computational Linguistics,\n {ACL} 2017, Vancouver, Canada, July 30 - August 4,\n Volume 1: Long Papers},\n pages = {179--188},\n publisher = {Association for Computational Linguistics},\n year = {2017},\n url = {https://doi.org/10.18653/v1/P17-1017},\n doi = {10.18653/v1/P17-1017}\n}\n", "homepage": "https://webnlg-challenge.loria.fr/", "license": "", "features": {"category": {"dtype": "string", "id": null, "_type": "Value"}, "size": {"dtype": "int32", "id": null, "_type": "Value"}, "eid": {"dtype": "string", "id": null, "_type": "Value"}, "original_triple_sets": {"feature": {"otriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "modified_triple_sets": {"feature": {"mtriple_set": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "shape": {"dtype": "string", "id": null, "_type": "Value"}, "shape_type": {"dtype": "string", "id": null, "_type": "Value"}, "lex": {"feature": {"comment": {"dtype": "string", "id": null, "_type": "Value"}, "lid": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "lang": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "test_category": {"dtype": "string", "id": null, "_type": "Value"}, "dbpedia_links": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "links": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "web_nlg", "config_name": "release_v3.0_ru", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 9550340, "num_examples": 5573, "dataset_name": "web_nlg"}, "dev": {"name": "dev", "num_bytes": 1314226, "num_examples": 790, "dataset_name": "web_nlg"}, "test": {"name": "test", "num_bytes": 3656501, "num_examples": 3410, "dataset_name": "web_nlg"}}, "download_checksums": {"https://gitlab.com/shimorina/webnlg-dataset/-/archive/master/webnlg-dataset-master.zip": {"num_bytes": 25390588, "checksum": "287290957f7352c9e3b64cdc5957faba8ed5d835f34f2106ba5666a77fdb1cfb"}}, "download_size": 25390588, "post_processing_size": null, "dataset_size": 14521067, "size_in_bytes": 39911655}}
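For reference, each configuration recorded in the metadata above can be loaded by name; a minimal usage sketch, assuming the `datasets` library (>= 1.3.0) is installed:

```python
from datasets import load_dataset

# Load the English v3.0 configuration listed in dataset_infos.json.
web_nlg = load_dataset("web_nlg", "release_v3.0_en")

# Split sizes should match the metadata above (13211 / 1667 / 5713 examples).
print({split: dset.num_rows for split, dset in web_nlg.items()})
```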
web_nlg.py CHANGED
@@ -21,6 +21,7 @@ import xml.etree.cElementTree as ET
 from collections import defaultdict
 from glob import glob
 from os.path import join as pjoin
+from pathlib import Path
 
 import datasets
 
@@ -92,12 +93,12 @@ _FILE_PATHS = {
     "release_v3.0_en": {
         "train": [f"release_v3.0/en/train/{i}triples/" for i in range(1, 8)],
         "dev": [f"release_v3.0/en/dev/{i}triples/" for i in range(1, 8)],
-        "test": [f"release_v3.0/en/test/" for i in range(1, 8)],
+        "test": [f"release_v3.0/en/test/"],
     },
     "release_v3.0_ru": {
         "train": [f"release_v3.0/ru/train/{i}triples/" for i in range(1, 8)],
         "dev": [f"release_v3.0/ru/dev/{i}triples/" for i in range(1, 8)],
-        "test": [f"release_v3.0/ru/test/" for i in range(1, 8)],
+        "test": [f"release_v3.0/ru/test/"],
     },
 }
 
@@ -136,9 +137,20 @@ def parse_entry(entry):
        "comment": [ex.get("comment", "") for ex in entry.get("lex", [])],
        "lid": [ex.get("lid", "") for ex in entry.get("lex", [])],
        "text": [ex.get("text", "") for ex in entry.get("lex", [])],
+        "lang": [ex.get("lang", "") for ex in entry.get("lex", [])],
     }
     res["shape"] = entry.get("shape", "")
     res["shape_type"] = entry.get("shape_type", "")
+    dbpedia_links = entry["dbpedialinks"]
+    if dbpedia_links:
+        res["dbpedia_links"] = [dbpedia_link["text"] for dbpedia_link in dbpedia_links[0]["dbpedialink"]]
+    else:
+        res["dbpedia_links"] = []
+    links = entry["links"]
+    if links:
+        res["links"] = [link["text"] for link in links[0]["link"]]
+    else:
+        res["links"] = []
     return res
 
 
@@ -197,9 +209,12 @@ class WebNlg(datasets.GeneratorBasedBuilder):
                        "comment": datasets.Value("string"),
                        "lid": datasets.Value("string"),
                        "text": datasets.Value("string"),
+                        "lang": datasets.Value("string"),
                    }
                ),
-                "2017_test_category": datasets.Value("string"),
+                "test_category": datasets.Value("string"),
+                "dbpedia_links": datasets.Sequence(datasets.Value("string")),
+                "links": datasets.Sequence(datasets.Value("string")),
            }
        )
        return datasets.DatasetInfo(
@@ -238,8 +253,17 @@ class WebNlg(datasets.GeneratorBasedBuilder):
        id_ = 0
        for xml_location in filedirs:
            for xml_file in sorted(glob(pjoin(xml_location, "*.xml"))):
-                test_cat = xml_file.split("/")[-1][:-4] if "webnlg_challenge_2017/test" in xml_file else ""
+                # windows may use backslashes so we first need to replace them with slashes
+                xml_file_path_with_slashes = "/".join(Path(xml_file).parts)
+                if (
+                    "webnlg_challenge_2017/test" in xml_file_path_with_slashes
+                    or "release_v3.0/en/test" in xml_file_path_with_slashes
+                    or "release_v3.0/ru/test" in xml_file_path_with_slashes
+                ):
+                    test_cat = xml_file_path_with_slashes.split("/")[-1][:-4]
+                else:
+                    test_cat = ""
                for exple_dict in xml_file_to_examples(xml_file):
-                    exple_dict["2017_test_category"] = test_cat
+                    exple_dict["test_category"] = test_cat
                    id_ += 1
                    yield id_, exple_dict
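The key change in `_generate_examples` is normalizing file paths before the test-directory checks, so test-category detection also works when `glob` returns Windows backslash paths. A minimal standalone sketch of that logic (the file name below is hypothetical; in the loader, `Path` picks the flavor of the host OS automatically):

```python
from pathlib import PurePosixPath, PureWindowsPath


def derive_test_category(parts):
    """Mirror the loader's check, given path components from Path(...).parts."""
    path_with_slashes = "/".join(parts)
    if (
        "webnlg_challenge_2017/test" in path_with_slashes
        or "release_v3.0/en/test" in path_with_slashes
        or "release_v3.0/ru/test" in path_with_slashes
    ):
        return path_with_slashes.split("/")[-1][:-4]  # file name minus ".xml"
    return ""


# The same file referenced with POSIX and with Windows separators.
posix = PurePosixPath("webnlg_challenge_2017/test/testdata_with_lex.xml")
windows = PureWindowsPath(r"webnlg_challenge_2017\test\testdata_with_lex.xml")
assert derive_test_category(posix.parts) == derive_test_category(windows.parts) == "testdata_with_lex"
```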