Multilinguality: multilingual
Size Categories: 10K<n<100K, 1K<n<10K
Language Creators: crowdsourced
Annotations Creators: crowdsourced
Source Datasets: original

Host data files #2

opened by albertvillanova
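This PR replaces the per-language pickle archives previously fetched from Google Drive with gzip-compressed JSON Lines files hosted in the repository under `data/`, and updates the loading script and README citation accordingly. For context, a minimal sketch of loading the dataset after this change, assuming the canonical `wiki_lingua` dataset id on the Hub:

```python
from datasets import load_dataset

# Load one language configuration ("english" is DEFAULT_CONFIG_NAME in the loading script).
# The "wiki_lingua" dataset id is an assumption about where this repository lives on the Hub.
ds = load_dataset("wiki_lingua", "english", split="train")

# Each example pairs a WikiHow URL with aligned per-section documents and summaries.
example = ds[0]
print(example["url"])
print(example["article"]["section_name"][:2])
print(example["article"]["summary"][:2])
```
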
README.md CHANGED
@@ -482,7 +482,7 @@ config_names:
 
 ### Dataset Summary
 
-We introduce WikiLingua, a large-scale, multilingual dataset for the evaluation of crosslingual abstractive summarization systems. We extract article and summary pairs in 18 languages from WikiHow, a high quality, collaborative resource of how-to guides on a diverse set of topics written by human authors. We create gold-standard article-summary alignments across languages by aligning the images that are used to describe each how-to step in an article.
+We introduce WikiLingua, a large-scale, multilingual dataset for the evaluation of cross-lingual abstractive summarization systems. We extract article and summary pairs in 18 languages from WikiHow, a high quality, collaborative resource of how-to guides on a diverse set of topics written by human authors. We create gold-standard article-summary alignments across languages by aligning the images that are used to describe each how-to step in an article.
 
 ### Supported Tasks and Leaderboards
 
@@ -631,12 +631,20 @@ ______________________________
 ### Citation Information
 
 ```bibtex
-@article{ladhak-wiki-2020,
-    title = {WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},
-    authors = {Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},
-    journal = {arXiv preprint arXiv:2010.03093},
-    year = {2020},
-    url = {https://arxiv.org/abs/2010.03093}
+@inproceedings{ladhak-etal-2020-wikilingua,
+    title = "{W}iki{L}ingua: A New Benchmark Dataset for Cross-Lingual Abstractive Summarization",
+    author = "Ladhak, Faisal  and
+      Durmus, Esin  and
+      Cardie, Claire  and
+      McKeown, Kathleen",
+    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
+    month = nov,
+    year = "2020",
+    address = "Online",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2020.findings-emnlp.360",
+    doi = "10.18653/v1/2020.findings-emnlp.360",
+    pages = "4034--4048",
 }
 ```
 
data/arabic.jsonl.gz ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7c3d0c024d5e024392c7d51c70c2c7a2e3241ed52159c4d5591f1736d15b520d
+size 41403753

data/chinese.jsonl.gz ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:89a721b220f58ef77b7d570cab4b093b84aa24c77524f6d293f3112687c161a9
+size 19099290

data/czech.jsonl.gz ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:719327825ffc23bd6d7392e8a06db168e18069e35d0f150ba719eb438e1d6e9b
+size 8293848

data/dutch.jsonl.gz ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:241595c7cc4031f8fed3e76502e0bb1a9a9871d097b85ab8a83075ed4f5b4407
+size 29400461

data/english.jsonl.gz ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1cd5a804b1a64763e13c97fa1ad1d22d0263e55bcd71a42d31b340b0c8cb4d29
+size 115537674

data/french.jsonl.gz ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a687aad0b3fe602ae47bb071767a12c39280a8491781987c6f7333507f4ed14e
+size 65059668

data/german.jsonl.gz ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b8a9bdc538934041eac8a31cb331a4b35a2e5151b27f28959507105afbfda2a3
+size 58091919

data/hindi.jsonl.gz ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d70b966b130158943c201f0c7f572ed084514a76685521484c2960712377bf9c
+size 14567375

data/indonesian.jsonl.gz ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6584fe08ee0d2b996cee4b2476191b4eb8c2d58f374478364b1e1edccd85806d
+size 41852908

data/italian.jsonl.gz ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2ea94c8c2a1be3b4c67c1eabc3fe03d607570bd754d983cfa9434d5f3b53424e
+size 46917823

data/japanese.jsonl.gz ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:169a5a7fb0c4e166a2e125158bb9c6972c5a76c6d3abfe3f267068c5ef0debcd
+size 14416407

data/korean.jsonl.gz ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:28441564238c7f0830cee9486c4cbc028f3a646e91436556c28a56e6c34aae88
+size 14417542

data/portuguese.jsonl.gz ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:52ac0deeae96eef1a3f89805a3a944f61b1f80ad9b6c0f8511e9c3f54ec4e010
+size 71411075

data/russian.jsonl.gz ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5956df21ca5eaeb2f8cdb549ca51873a659aaecada100d94ae2b711a7d867b01
+size 79624829

data/spanish.jsonl.gz ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5431e36af351461d6acd9da6248943ab054d03f2c8277a151cbd9ae076781c30
+size 104218559

data/thai.jsonl.gz ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a8b3c99077a95df6caf10d72016c4570a1bd6914c76ae3d3c41fd3eedf87e75d
+size 19741058

data/turkish.jsonl.gz ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3a3d5eb7d9bb943fc5d6c1765d7740d88c2aae78aa5ad32b6b5f49e55020a350
+size 3877836

data/vietnamese.jsonl.gz ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6735c342ebc3cf347bbd8052c58a26fbf6d2f5154e8ab2acf4726276405f046c
+size 22117520
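Each file added above is tracked with Git LFS and holds one JSON object per line, carrying the `url` and `article` fields that the updated loading script reads. A small sketch for inspecting one file directly, assuming a local clone with the LFS content pulled:

```python
import gzip
import json

# Path inside a local clone of the dataset repository (requires `git lfs pull` first).
path = "data/czech.jsonl.gz"

with gzip.open(path, "rt", encoding="utf-8") as f:
    for i, line in enumerate(f):
        row = json.loads(line)
        # Top-level keys expected by the loading script below: "url" and "article".
        print(row["url"], sorted(row.keys()))
        if i == 2:  # only peek at the first three records
            break
```
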
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"arabic": {"description": "WikiLingua is a large-scale multilingual dataset for the evaluation of\ncrosslingual abstractive summarization systems. The dataset includes ~770k\narticle and summary pairs in 18 languages from WikiHow. The gold-standard\narticle-summary alignments across languages was done by aligning the images\nthat are used to describe each how-to step in an article.\n", "citation": "@article{ladhak-wiki-2020,\n title = {WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},\n authors = {Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},\n journal = {arXiv preprint arXiv:2010.03093},\n year = {2020},\n url = {https://arxiv.org/abs/2010.03093}\n}\n", "homepage": "https://github.com/esdurmus/Wikilingua", "license": "CC BY-NC-SA 3.0", "features": {"url": {"dtype": "string", "id": null, "_type": "Value"}, "article": {"feature": {"section_name": {"dtype": "string", "id": null, "_type": "Value"}, "document": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}, "english_url": {"dtype": "string", "id": null, "_type": "Value"}, "english_section_name": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "wiki_lingua", "config_name": "arabic", "version": {"version_str": "1.1.1", "description": null, "major": 1, "minor": 1, "patch": 1}, "splits": {"train": {"name": "train", "num_bytes": 119116119, "num_examples": 9995, "dataset_name": "wiki_lingua"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=1__EjA6oZsgXQpggPm-h54jZu3kP6Y6zu": {"num_bytes": 119358890, "checksum": "25fc655eb53227acf5dbe4de09732dedee6cbd83b4c1e8c3bb018eada79555d1"}}, "download_size": 119358890, "post_processing_size": null, "dataset_size": 119116119, "size_in_bytes": 238475009}, "chinese": {"description": "WikiLingua is a large-scale multilingual dataset for the evaluation of\ncrosslingual abstractive summarization systems. The dataset includes ~770k\narticle and summary pairs in 18 languages from WikiHow. 
The gold-standard\narticle-summary alignments across languages was done by aligning the images\nthat are used to describe each how-to step in an article.\n", "citation": "@article{ladhak-wiki-2020,\n title = {WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},\n authors = {Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},\n journal = {arXiv preprint arXiv:2010.03093},\n year = {2020},\n url = {https://arxiv.org/abs/2010.03093}\n}\n", "homepage": "https://github.com/esdurmus/Wikilingua", "license": "CC BY-NC-SA 3.0", "features": {"url": {"dtype": "string", "id": null, "_type": "Value"}, "article": {"feature": {"section_name": {"dtype": "string", "id": null, "_type": "Value"}, "document": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}, "english_url": {"dtype": "string", "id": null, "_type": "Value"}, "english_section_name": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "wiki_lingua", "config_name": "chinese", "version": {"version_str": "1.1.1", "description": null, "major": 1, "minor": 1, "patch": 1}, "splits": {"train": {"name": "train", "num_bytes": 41170689, "num_examples": 6541, "dataset_name": "wiki_lingua"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=1TuWH7uwu6V90QWmZn25qhou1rm97Egmn": {"num_bytes": 41345464, "checksum": "be54a90ec9ac9baa2fb006c11363d44b9475c1fb8ac2aa84beeea1e065c58972"}}, "download_size": 41345464, "post_processing_size": null, "dataset_size": 41170689, "size_in_bytes": 82516153}, "czech": {"description": "WikiLingua is a large-scale multilingual dataset for the evaluation of\ncrosslingual abstractive summarization systems. The dataset includes ~770k\narticle and summary pairs in 18 languages from WikiHow. 
The gold-standard\narticle-summary alignments across languages was done by aligning the images\nthat are used to describe each how-to step in an article.\n", "citation": "@article{ladhak-wiki-2020,\n title = {WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},\n authors = {Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},\n journal = {arXiv preprint arXiv:2010.03093},\n year = {2020},\n url = {https://arxiv.org/abs/2010.03093}\n}\n", "homepage": "https://github.com/esdurmus/Wikilingua", "license": "CC BY-NC-SA 3.0", "features": {"url": {"dtype": "string", "id": null, "_type": "Value"}, "article": {"feature": {"section_name": {"dtype": "string", "id": null, "_type": "Value"}, "document": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}, "english_url": {"dtype": "string", "id": null, "_type": "Value"}, "english_section_name": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "wiki_lingua", "config_name": "czech", "version": {"version_str": "1.1.1", "description": null, "major": 1, "minor": 1, "patch": 1}, "splits": {"train": {"name": "train", "num_bytes": 20816390, "num_examples": 2520, "dataset_name": "wiki_lingua"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=1GcUN6mytEcOMBBOvjJOQzBmEkc-LdgQg": {"num_bytes": 20894511, "checksum": "bb3f9300b8631667d25b9e2b73c98ad90e0b5a3203bba21ed896f12b4a4e39a1"}}, "download_size": 20894511, "post_processing_size": null, "dataset_size": 20816390, "size_in_bytes": 41710901}, "dutch": {"description": "WikiLingua is a large-scale multilingual dataset for the evaluation of\ncrosslingual abstractive summarization systems. The dataset includes ~770k\narticle and summary pairs in 18 languages from WikiHow. 
The gold-standard\narticle-summary alignments across languages was done by aligning the images\nthat are used to describe each how-to step in an article.\n", "citation": "@article{ladhak-wiki-2020,\n title = {WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},\n authors = {Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},\n journal = {arXiv preprint arXiv:2010.03093},\n year = {2020},\n url = {https://arxiv.org/abs/2010.03093}\n}\n", "homepage": "https://github.com/esdurmus/Wikilingua", "license": "CC BY-NC-SA 3.0", "features": {"url": {"dtype": "string", "id": null, "_type": "Value"}, "article": {"feature": {"section_name": {"dtype": "string", "id": null, "_type": "Value"}, "document": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}, "english_url": {"dtype": "string", "id": null, "_type": "Value"}, "english_section_name": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "wiki_lingua", "config_name": "dutch", "version": {"version_str": "1.1.1", "description": null, "major": 1, "minor": 1, "patch": 1}, "splits": {"train": {"name": "train", "num_bytes": 87258040, "num_examples": 10862, "dataset_name": "wiki_lingua"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=1-w-0uqaC6hnRn1F_3XqJEvi09zlcTIhX": {"num_bytes": 87533442, "checksum": "1282abaa1f70e0d46db2f199a8e0bacd5c06a97220cf874854c41e12c072f10a"}}, "download_size": 87533442, "post_processing_size": null, "dataset_size": 87258040, "size_in_bytes": 174791482}, "english": {"description": "WikiLingua is a large-scale multilingual dataset for the evaluation of\ncrosslingual abstractive summarization systems. The dataset includes ~770k\narticle and summary pairs in 18 languages from WikiHow. 
The gold-standard\narticle-summary alignments across languages was done by aligning the images\nthat are used to describe each how-to step in an article.\n", "citation": "@article{ladhak-wiki-2020,\n title = {WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},\n authors = {Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},\n journal = {arXiv preprint arXiv:2010.03093},\n year = {2020},\n url = {https://arxiv.org/abs/2010.03093}\n}\n", "homepage": "https://github.com/esdurmus/Wikilingua", "license": "CC BY-NC-SA 3.0", "features": {"url": {"dtype": "string", "id": null, "_type": "Value"}, "article": {"feature": {"section_name": {"dtype": "string", "id": null, "_type": "Value"}, "document": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "wiki_lingua", "config_name": "english", "version": {"version_str": "1.1.1", "description": null, "major": 1, "minor": 1, "patch": 1}, "splits": {"train": {"name": "train", "num_bytes": 333700114, "num_examples": 57945, "dataset_name": "wiki_lingua"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=11wMGqNVSwwk6zUnDaJEgm3qT71kAHeff": {"num_bytes": 338036185, "checksum": "1f0b51ac4b733e06a067826d9e137ee300d751f12f240e95be4b258f7bb5191d"}}, "download_size": 338036185, "post_processing_size": null, "dataset_size": 333700114, "size_in_bytes": 671736299}, "french": {"description": "WikiLingua is a large-scale multilingual dataset for the evaluation of\ncrosslingual abstractive summarization systems. The dataset includes ~770k\narticle and summary pairs in 18 languages from WikiHow. 
The gold-standard\narticle-summary alignments across languages was done by aligning the images\nthat are used to describe each how-to step in an article.\n", "citation": "@article{ladhak-wiki-2020,\n title = {WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},\n authors = {Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},\n journal = {arXiv preprint arXiv:2010.03093},\n year = {2020},\n url = {https://arxiv.org/abs/2010.03093}\n}\n", "homepage": "https://github.com/esdurmus/Wikilingua", "license": "CC BY-NC-SA 3.0", "features": {"url": {"dtype": "string", "id": null, "_type": "Value"}, "article": {"feature": {"section_name": {"dtype": "string", "id": null, "_type": "Value"}, "document": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}, "english_url": {"dtype": "string", "id": null, "_type": "Value"}, "english_section_name": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "wiki_lingua", "config_name": "french", "version": {"version_str": "1.1.1", "description": null, "major": 1, "minor": 1, "patch": 1}, "splits": {"train": {"name": "train", "num_bytes": 197550376, "num_examples": 21690, "dataset_name": "wiki_lingua"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=1Uit4Og1pk-br_0UJIO5sdhApyhTuHzqo": {"num_bytes": 198114157, "checksum": "e7e71d214142d06ddfd00411c2ceb3f1abee44eef9f6dbdd61ea5c5b30521230"}}, "download_size": 198114157, "post_processing_size": null, "dataset_size": 197550376, "size_in_bytes": 395664533}, "german": {"description": "WikiLingua is a large-scale multilingual dataset for the evaluation of\ncrosslingual abstractive summarization systems. The dataset includes ~770k\narticle and summary pairs in 18 languages from WikiHow. 
The gold-standard\narticle-summary alignments across languages was done by aligning the images\nthat are used to describe each how-to step in an article.\n", "citation": "@article{ladhak-wiki-2020,\n title = {WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},\n authors = {Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},\n journal = {arXiv preprint arXiv:2010.03093},\n year = {2020},\n url = {https://arxiv.org/abs/2010.03093}\n}\n", "homepage": "https://github.com/esdurmus/Wikilingua", "license": "CC BY-NC-SA 3.0", "features": {"url": {"dtype": "string", "id": null, "_type": "Value"}, "article": {"feature": {"section_name": {"dtype": "string", "id": null, "_type": "Value"}, "document": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}, "english_url": {"dtype": "string", "id": null, "_type": "Value"}, "english_section_name": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "wiki_lingua", "config_name": "german", "version": {"version_str": "1.1.1", "description": null, "major": 1, "minor": 1, "patch": 1}, "splits": {"train": {"name": "train", "num_bytes": 168674340, "num_examples": 20103, "dataset_name": "wiki_lingua"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=1meSNZHxd_0TZLKCRCYGN-Ke3IA5c1qOE": {"num_bytes": 169195050, "checksum": "88ee4628700c0e58b529a75e3f9f27022be3e7a591a8981f503b078a7116c4eb"}}, "download_size": 169195050, "post_processing_size": null, "dataset_size": 168674340, "size_in_bytes": 337869390}, "hindi": {"description": "WikiLingua is a large-scale multilingual dataset for the evaluation of\ncrosslingual abstractive summarization systems. The dataset includes ~770k\narticle and summary pairs in 18 languages from WikiHow. 
The gold-standard\narticle-summary alignments across languages was done by aligning the images\nthat are used to describe each how-to step in an article.\n", "citation": "@article{ladhak-wiki-2020,\n title = {WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},\n authors = {Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},\n journal = {arXiv preprint arXiv:2010.03093},\n year = {2020},\n url = {https://arxiv.org/abs/2010.03093}\n}\n", "homepage": "https://github.com/esdurmus/Wikilingua", "license": "CC BY-NC-SA 3.0", "features": {"url": {"dtype": "string", "id": null, "_type": "Value"}, "article": {"feature": {"section_name": {"dtype": "string", "id": null, "_type": "Value"}, "document": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}, "english_url": {"dtype": "string", "id": null, "_type": "Value"}, "english_section_name": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "wiki_lingua", "config_name": "hindi", "version": {"version_str": "1.1.1", "description": null, "major": 1, "minor": 1, "patch": 1}, "splits": {"train": {"name": "train", "num_bytes": 63785051, "num_examples": 3402, "dataset_name": "wiki_lingua"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=1ZyFGufe4puX3vjGPbp4xg9Hca3Gwq22g": {"num_bytes": 63874759, "checksum": "a6a9b0cb313ecad82985269153e03e4c02376f0e52e53168100eacafc1c55037"}}, "download_size": 63874759, "post_processing_size": null, "dataset_size": 63785051, "size_in_bytes": 127659810}, "indonesian": {"description": "WikiLingua is a large-scale multilingual dataset for the evaluation of\ncrosslingual abstractive summarization systems. The dataset includes ~770k\narticle and summary pairs in 18 languages from WikiHow. 
The gold-standard\narticle-summary alignments across languages was done by aligning the images\nthat are used to describe each how-to step in an article.\n", "citation": "@article{ladhak-wiki-2020,\n title = {WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},\n authors = {Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},\n journal = {arXiv preprint arXiv:2010.03093},\n year = {2020},\n url = {https://arxiv.org/abs/2010.03093}\n}\n", "homepage": "https://github.com/esdurmus/Wikilingua", "license": "CC BY-NC-SA 3.0", "features": {"url": {"dtype": "string", "id": null, "_type": "Value"}, "article": {"feature": {"section_name": {"dtype": "string", "id": null, "_type": "Value"}, "document": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}, "english_url": {"dtype": "string", "id": null, "_type": "Value"}, "english_section_name": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "wiki_lingua", "config_name": "indonesian", "version": {"version_str": "1.1.1", "description": null, "major": 1, "minor": 1, "patch": 1}, "splits": {"train": {"name": "train", "num_bytes": 136408861, "num_examples": 16308, "dataset_name": "wiki_lingua"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=1PGa8j1_IqxiGTc3SU6NMB38sAzxCPS34": {"num_bytes": 136833587, "checksum": "cfa0b6eeb590e0db212b616d455fa00ed376186638c7c4b2771986fb4bd4b7e6"}}, "download_size": 136833587, "post_processing_size": null, "dataset_size": 136408861, "size_in_bytes": 273242448}, "italian": {"description": "WikiLingua is a large-scale multilingual dataset for the evaluation of\ncrosslingual abstractive summarization systems. The dataset includes ~770k\narticle and summary pairs in 18 languages from WikiHow. 
The gold-standard\narticle-summary alignments across languages was done by aligning the images\nthat are used to describe each how-to step in an article.\n", "citation": "@article{ladhak-wiki-2020,\n title = {WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},\n authors = {Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},\n journal = {arXiv preprint arXiv:2010.03093},\n year = {2020},\n url = {https://arxiv.org/abs/2010.03093}\n}\n", "homepage": "https://github.com/esdurmus/Wikilingua", "license": "CC BY-NC-SA 3.0", "features": {"url": {"dtype": "string", "id": null, "_type": "Value"}, "article": {"feature": {"section_name": {"dtype": "string", "id": null, "_type": "Value"}, "document": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}, "english_url": {"dtype": "string", "id": null, "_type": "Value"}, "english_section_name": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "wiki_lingua", "config_name": "italian", "version": {"version_str": "1.1.1", "description": null, "major": 1, "minor": 1, "patch": 1}, "splits": {"train": {"name": "train", "num_bytes": 138119527, "num_examples": 17673, "dataset_name": "wiki_lingua"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=1okwGJiOZmTpNRNgJLCnjFF4Q0H1z4l6_": {"num_bytes": 138578956, "checksum": "f6960f3d025f65452d3a536065925e86c425f7f559f574ed078172aa30d6a6ae"}}, "download_size": 138578956, "post_processing_size": null, "dataset_size": 138119527, "size_in_bytes": 276698483}, "japanese": {"description": "WikiLingua is a large-scale multilingual dataset for the evaluation of\ncrosslingual abstractive summarization systems. The dataset includes ~770k\narticle and summary pairs in 18 languages from WikiHow. 
The gold-standard\narticle-summary alignments across languages was done by aligning the images\nthat are used to describe each how-to step in an article.\n", "citation": "@article{ladhak-wiki-2020,\n title = {WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},\n authors = {Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},\n journal = {arXiv preprint arXiv:2010.03093},\n year = {2020},\n url = {https://arxiv.org/abs/2010.03093}\n}\n", "homepage": "https://github.com/esdurmus/Wikilingua", "license": "CC BY-NC-SA 3.0", "features": {"url": {"dtype": "string", "id": null, "_type": "Value"}, "article": {"feature": {"section_name": {"dtype": "string", "id": null, "_type": "Value"}, "document": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}, "english_url": {"dtype": "string", "id": null, "_type": "Value"}, "english_section_name": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "wiki_lingua", "config_name": "japanese", "version": {"version_str": "1.1.1", "description": null, "major": 1, "minor": 1, "patch": 1}, "splits": {"train": {"name": "train", "num_bytes": 40145031, "num_examples": 4372, "dataset_name": "wiki_lingua"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=1Z2ty5hU0tIGRZRDlFQZLO7b5vijRfvo0": {"num_bytes": 40259570, "checksum": "dc080f6db644261e31b0d9564eec0c07f87e939cd4af535ad239ee8813c92a33"}}, "download_size": 40259570, "post_processing_size": null, "dataset_size": 40145031, "size_in_bytes": 80404601}, "korean": {"description": "WikiLingua is a large-scale multilingual dataset for the evaluation of\ncrosslingual abstractive summarization systems. The dataset includes ~770k\narticle and summary pairs in 18 languages from WikiHow. 
The gold-standard\narticle-summary alignments across languages was done by aligning the images\nthat are used to describe each how-to step in an article.\n", "citation": "@article{ladhak-wiki-2020,\n title = {WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},\n authors = {Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},\n journal = {arXiv preprint arXiv:2010.03093},\n year = {2020},\n url = {https://arxiv.org/abs/2010.03093}\n}\n", "homepage": "https://github.com/esdurmus/Wikilingua", "license": "CC BY-NC-SA 3.0", "features": {"url": {"dtype": "string", "id": null, "_type": "Value"}, "article": {"feature": {"section_name": {"dtype": "string", "id": null, "_type": "Value"}, "document": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}, "english_url": {"dtype": "string", "id": null, "_type": "Value"}, "english_section_name": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "wiki_lingua", "config_name": "korean", "version": {"version_str": "1.1.1", "description": null, "major": 1, "minor": 1, "patch": 1}, "splits": {"train": {"name": "train", "num_bytes": 38647614, "num_examples": 4111, "dataset_name": "wiki_lingua"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=1cqu_YAgvlyVSzzjcUyP1Cz7q0k8Pw7vN": {"num_bytes": 38748961, "checksum": "b6f97c124033c99034696034a19b4e32d0573281281fe2655f7d70032dc65d01"}}, "download_size": 38748961, "post_processing_size": null, "dataset_size": 38647614, "size_in_bytes": 77396575}, "portuguese": {"description": "WikiLingua is a large-scale multilingual dataset for the evaluation of\ncrosslingual abstractive summarization systems. The dataset includes ~770k\narticle and summary pairs in 18 languages from WikiHow. 
The gold-standard\narticle-summary alignments across languages was done by aligning the images\nthat are used to describe each how-to step in an article.\n", "citation": "@article{ladhak-wiki-2020,\n title = {WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},\n authors = {Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},\n journal = {arXiv preprint arXiv:2010.03093},\n year = {2020},\n url = {https://arxiv.org/abs/2010.03093}\n}\n", "homepage": "https://github.com/esdurmus/Wikilingua", "license": "CC BY-NC-SA 3.0", "features": {"url": {"dtype": "string", "id": null, "_type": "Value"}, "article": {"feature": {"section_name": {"dtype": "string", "id": null, "_type": "Value"}, "document": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}, "english_url": {"dtype": "string", "id": null, "_type": "Value"}, "english_section_name": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "wiki_lingua", "config_name": "portuguese", "version": {"version_str": "1.1.1", "description": null, "major": 1, "minor": 1, "patch": 1}, "splits": {"train": {"name": "train", "num_bytes": 204270845, "num_examples": 28143, "dataset_name": "wiki_lingua"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=1GTHUJxxmjLmG2lnF9dwRgIDRFZaOY3-F": {"num_bytes": 204997686, "checksum": "c5f912b3b00e11f02a9ddd2b879b605f3fd2354eb0b5f8acac13e01e49ea1e59"}}, "download_size": 204997686, "post_processing_size": null, "dataset_size": 204270845, "size_in_bytes": 409268531}, "russian": {"description": "WikiLingua is a large-scale multilingual dataset for the evaluation of\ncrosslingual abstractive summarization systems. The dataset includes ~770k\narticle and summary pairs in 18 languages from WikiHow. 
The gold-standard\narticle-summary alignments across languages was done by aligning the images\nthat are used to describe each how-to step in an article.\n", "citation": "@article{ladhak-wiki-2020,\n title = {WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},\n authors = {Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},\n journal = {arXiv preprint arXiv:2010.03093},\n year = {2020},\n url = {https://arxiv.org/abs/2010.03093}\n}\n", "homepage": "https://github.com/esdurmus/Wikilingua", "license": "CC BY-NC-SA 3.0", "features": {"url": {"dtype": "string", "id": null, "_type": "Value"}, "article": {"feature": {"section_name": {"dtype": "string", "id": null, "_type": "Value"}, "document": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}, "english_url": {"dtype": "string", "id": null, "_type": "Value"}, "english_section_name": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "wiki_lingua", "config_name": "russian", "version": {"version_str": "1.1.1", "description": null, "major": 1, "minor": 1, "patch": 1}, "splits": {"train": {"name": "train", "num_bytes": 241924032, "num_examples": 18143, "dataset_name": "wiki_lingua"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=1fUR3MqJ8jTMka6owA0S-Fe6aHmiophc_": {"num_bytes": 242377242, "checksum": "246647637d6de8bb84e26f68546c5a5ba04e196d1769716975e52447d43e4d71"}}, "download_size": 242377242, "post_processing_size": null, "dataset_size": 241924032, "size_in_bytes": 484301274}, "spanish": {"description": "WikiLingua is a large-scale multilingual dataset for the evaluation of\ncrosslingual abstractive summarization systems. The dataset includes ~770k\narticle and summary pairs in 18 languages from WikiHow. 
The gold-standard\narticle-summary alignments across languages was done by aligning the images\nthat are used to describe each how-to step in an article.\n", "citation": "@article{ladhak-wiki-2020,\n title = {WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},\n authors = {Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},\n journal = {arXiv preprint arXiv:2010.03093},\n year = {2020},\n url = {https://arxiv.org/abs/2010.03093}\n}\n", "homepage": "https://github.com/esdurmus/Wikilingua", "license": "CC BY-NC-SA 3.0", "features": {"url": {"dtype": "string", "id": null, "_type": "Value"}, "article": {"feature": {"section_name": {"dtype": "string", "id": null, "_type": "Value"}, "document": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}, "english_url": {"dtype": "string", "id": null, "_type": "Value"}, "english_section_name": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "wiki_lingua", "config_name": "spanish", "version": {"version_str": "1.1.1", "description": null, "major": 1, "minor": 1, "patch": 1}, "splits": {"train": {"name": "train", "num_bytes": 314618618, "num_examples": 38795, "dataset_name": "wiki_lingua"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=1KtMDsoYNukGP89PLujQTGVgt37cOARs5": {"num_bytes": 315609530, "checksum": "b6c42c313d28199c88a0696d920c08ab951820e84f6ebe9137dd7e74b6907912"}}, "download_size": 315609530, "post_processing_size": null, "dataset_size": 314618618, "size_in_bytes": 630228148}, "thai": {"description": "WikiLingua is a large-scale multilingual dataset for the evaluation of\ncrosslingual abstractive summarization systems. The dataset includes ~770k\narticle and summary pairs in 18 languages from WikiHow. 
The gold-standard\narticle-summary alignments across languages was done by aligning the images\nthat are used to describe each how-to step in an article.\n", "citation": "@article{ladhak-wiki-2020,\n title = {WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},\n authors = {Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},\n journal = {arXiv preprint arXiv:2010.03093},\n year = {2020},\n url = {https://arxiv.org/abs/2010.03093}\n}\n", "homepage": "https://github.com/esdurmus/Wikilingua", "license": "CC BY-NC-SA 3.0", "features": {"url": {"dtype": "string", "id": null, "_type": "Value"}, "article": {"feature": {"section_name": {"dtype": "string", "id": null, "_type": "Value"}, "document": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}, "english_url": {"dtype": "string", "id": null, "_type": "Value"}, "english_section_name": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "wiki_lingua", "config_name": "thai", "version": {"version_str": "1.1.1", "description": null, "major": 1, "minor": 1, "patch": 1}, "splits": {"train": {"name": "train", "num_bytes": 86982851, "num_examples": 5093, "dataset_name": "wiki_lingua"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=1QsV8C5EPJrQl37mwva_5-IJOrCaOi2tH": {"num_bytes": 87104200, "checksum": "464a35114cb35792f0a875ebf653c60be8b83e6eb5baa458dce2629c3b798161"}}, "download_size": 87104200, "post_processing_size": null, "dataset_size": 86982851, "size_in_bytes": 174087051}, "turkish": {"description": "WikiLingua is a large-scale multilingual dataset for the evaluation of\ncrosslingual abstractive summarization systems. The dataset includes ~770k\narticle and summary pairs in 18 languages from WikiHow. 
The gold-standard\narticle-summary alignments across languages was done by aligning the images\nthat are used to describe each how-to step in an article.\n", "citation": "@article{ladhak-wiki-2020,\n title = {WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},\n authors = {Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},\n journal = {arXiv preprint arXiv:2010.03093},\n year = {2020},\n url = {https://arxiv.org/abs/2010.03093}\n}\n", "homepage": "https://github.com/esdurmus/Wikilingua", "license": "CC BY-NC-SA 3.0", "features": {"url": {"dtype": "string", "id": null, "_type": "Value"}, "article": {"feature": {"section_name": {"dtype": "string", "id": null, "_type": "Value"}, "document": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}, "english_url": {"dtype": "string", "id": null, "_type": "Value"}, "english_section_name": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "wiki_lingua", "config_name": "turkish", "version": {"version_str": "1.1.1", "description": null, "major": 1, "minor": 1, "patch": 1}, "splits": {"train": {"name": "train", "num_bytes": 11371821, "num_examples": 1512, "dataset_name": "wiki_lingua"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=1M1M5yIOyjKWGprc3LUeVVwxgKXxgpqxm": {"num_bytes": 11405793, "checksum": "858406c011fc2c1ef0c8bf3acb77edcf1d05c5189e61be54e1655d6e8a98076d"}}, "download_size": 11405793, "post_processing_size": null, "dataset_size": 11371821, "size_in_bytes": 22777614}, "vietnamese": {"description": "WikiLingua is a large-scale multilingual dataset for the evaluation of\ncrosslingual abstractive summarization systems. The dataset includes ~770k\narticle and summary pairs in 18 languages from WikiHow. 
The gold-standard\narticle-summary alignments across languages was done by aligning the images\nthat are used to describe each how-to step in an article.\n", "citation": "@article{ladhak-wiki-2020,\n title = {WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},\n authors = {Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},\n journal = {arXiv preprint arXiv:2010.03093},\n year = {2020},\n url = {https://arxiv.org/abs/2010.03093}\n}\n", "homepage": "https://github.com/esdurmus/Wikilingua", "license": "CC BY-NC-SA 3.0", "features": {"url": {"dtype": "string", "id": null, "_type": "Value"}, "article": {"feature": {"section_name": {"dtype": "string", "id": null, "_type": "Value"}, "document": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}, "english_url": {"dtype": "string", "id": null, "_type": "Value"}, "english_section_name": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "wiki_lingua", "config_name": "vietnamese", "version": {"version_str": "1.1.1", "description": null, "major": 1, "minor": 1, "patch": 1}, "splits": {"train": {"name": "train", "num_bytes": 69868788, "num_examples": 6616, "dataset_name": "wiki_lingua"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=17FGi8KI9N9SuGe7elM8qU8_3fx4sfgTr": {"num_bytes": 70024093, "checksum": "590e51dbef3cd17ef271088778289596d1363d72708e7f7d625d28a837e395a5"}}, "download_size": 70024093, "post_processing_size": null, "dataset_size": 69868788, "size_in_bytes": 139892881}}
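
The deleted `dataset_infos.json` duplicated the description, citation, license and feature schema for every configuration. That metadata stays available at runtime through the dataset builder; a minimal sketch, again under the `wiki_lingua` dataset-id assumption:

```python
from datasets import load_dataset_builder

# Builder-level metadata covers what dataset_infos.json used to record per config.
builder = load_dataset_builder("wiki_lingua", "czech")
print(builder.info.description)
print(builder.info.license)
print(builder.info.features)
```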
 
 
wiki_lingua.py CHANGED
@@ -12,28 +12,36 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-"""TODO: Add a description here."""
+"""WikiLingua."""
 
 
-import pickle
+import json
 
 import datasets
 
 
 # Find for instance the citation on arxiv or on the dataset repo/website
 _CITATION = """\
-@article{ladhak-wiki-2020,
-    title = {WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},
-    authors = {Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},
-    journal = {arXiv preprint arXiv:2010.03093},
-    year = {2020},
-    url = {https://arxiv.org/abs/2010.03093}
+@inproceedings{ladhak-etal-2020-wikilingua,
+    title = "{W}iki{L}ingua: A New Benchmark Dataset for Cross-Lingual Abstractive Summarization",
+    author = "Ladhak, Faisal  and
+      Durmus, Esin  and
+      Cardie, Claire  and
+      McKeown, Kathleen",
+    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
+    month = nov,
+    year = "2020",
+    address = "Online",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2020.findings-emnlp.360",
+    doi = "10.18653/v1/2020.findings-emnlp.360",
+    pages = "4034--4048",
 }
 """
 
 _DESCRIPTION = """\
 WikiLingua is a large-scale multilingual dataset for the evaluation of
-crosslingual abstractive summarization systems. The dataset includes ~770k
+cross-lingual abstractive summarization systems. The dataset includes ~770k
 article and summary pairs in 18 languages from WikiHow. The gold-standard
 article-summary alignments across languages was done by aligning the images
 that are used to describe each how-to step in an article.
@@ -43,72 +51,42 @@ _HOMEPAGE = "https://github.com/esdurmus/Wikilingua"
 
 _LICENSE = "CC BY-NC-SA 3.0"
 
-# Download links
-_URLs = {
-    "arabic": "https://drive.google.com/uc?export=download&id=1__EjA6oZsgXQpggPm-h54jZu3kP6Y6zu",
-    "chinese": "https://drive.google.com/uc?export=download&id=1TuWH7uwu6V90QWmZn25qhou1rm97Egmn",
-    "czech": "https://drive.google.com/uc?export=download&id=1GcUN6mytEcOMBBOvjJOQzBmEkc-LdgQg",
-    "dutch": "https://drive.google.com/uc?export=download&id=1-w-0uqaC6hnRn1F_3XqJEvi09zlcTIhX",
-    "english": "https://drive.google.com/uc?export=download&id=11wMGqNVSwwk6zUnDaJEgm3qT71kAHeff",
-    "french": "https://drive.google.com/uc?export=download&id=1Uit4Og1pk-br_0UJIO5sdhApyhTuHzqo",
-    "german": "https://drive.google.com/uc?export=download&id=1meSNZHxd_0TZLKCRCYGN-Ke3IA5c1qOE",
-    "hindi": "https://drive.google.com/uc?export=download&id=1ZyFGufe4puX3vjGPbp4xg9Hca3Gwq22g",
-    "indonesian": "https://drive.google.com/uc?export=download&id=1PGa8j1_IqxiGTc3SU6NMB38sAzxCPS34",
-    "italian": "https://drive.google.com/uc?export=download&id=1okwGJiOZmTpNRNgJLCnjFF4Q0H1z4l6_",
-    "japanese": "https://drive.google.com/uc?export=download&id=1Z2ty5hU0tIGRZRDlFQZLO7b5vijRfvo0",
-    "korean": "https://drive.google.com/uc?export=download&id=1cqu_YAgvlyVSzzjcUyP1Cz7q0k8Pw7vN",
-    "portuguese": "https://drive.google.com/uc?export=download&id=1GTHUJxxmjLmG2lnF9dwRgIDRFZaOY3-F",
-    "russian": "https://drive.google.com/uc?export=download&id=1fUR3MqJ8jTMka6owA0S-Fe6aHmiophc_",
-    "spanish": "https://drive.google.com/uc?export=download&id=1KtMDsoYNukGP89PLujQTGVgt37cOARs5",
-    "thai": "https://drive.google.com/uc?export=download&id=1QsV8C5EPJrQl37mwva_5-IJOrCaOi2tH",
-    "turkish": "https://drive.google.com/uc?export=download&id=1M1M5yIOyjKWGprc3LUeVVwxgKXxgpqxm",
-    "vietnamese": "https://drive.google.com/uc?export=download&id=17FGi8KI9N9SuGe7elM8qU8_3fx4sfgTr",
-}
+# Download link
+_URL = "data/{language}.jsonl.gz"
+_LANGUAGES = [
+    "arabic",
+    "chinese",
+    "czech",
+    "dutch",
+    "english",
+    "french",
+    "german",
+    "hindi",
+    "indonesian",
+    "italian",
+    "japanese",
+    "korean",
+    "portuguese",
+    "russian",
+    "spanish",
+    "thai",
+    "turkish",
+    "vietnamese",
+]
 
 
 class WikiLingua(datasets.GeneratorBasedBuilder):
-    """TODO: Short description of my dataset."""
+    """WikiLingua dataset."""
 
     VERSION = datasets.Version("1.1.1")
 
-    # This is an example of a dataset with multiple configurations.
-    # If you don't want/need to define several sub-sets in your dataset,
-    # just remove the BUILDER_CONFIG_CLASS and the BUILDER_CONFIGS attributes.
-
-    # If you need to make complex sub-parts in the datasets with configurable options
-    # You can create your own builder configuration class to store attribute, inheriting from datasets.BuilderConfig
-    # BUILDER_CONFIG_CLASS = MyBuilderConfig
-
-    # You will be able to load one or the other configurations in the following list with
-    # data = datasets.load_dataset('my_dataset', 'first_domain')
-    # data = datasets.load_dataset('my_dataset', 'second_domain')
     BUILDER_CONFIGS = [
-        datasets.BuilderConfig(name="arabic", version=VERSION, description="A subset of article-summary in Arabic"),
-        datasets.BuilderConfig(name="chinese", version=VERSION, description="A subset of article-summary in Chinese"),
-        datasets.BuilderConfig(name="czech", version=VERSION, description="A subset of article-summary in Czech"),
-        datasets.BuilderConfig(name="dutch", version=VERSION, description="A subset of article-summary in Dutch"),
-        datasets.BuilderConfig(name="english", version=VERSION, description="A subset of article-summary in English"),
-        datasets.BuilderConfig(name="french", version=VERSION, description="A subset of article-summary in French"),
-        datasets.BuilderConfig(name="german", version=VERSION, description="A subset of article-summary in German"),
-        datasets.BuilderConfig(name="hindi", version=VERSION, description="A subset of article-summary in Hindi"),
-        datasets.BuilderConfig(
-            name="indonesian", version=VERSION, description="A subset of article-summary in Indonesian"
-        ),
-        datasets.BuilderConfig(name="italian", version=VERSION, description="A subset of article-summary in Italian"),
         datasets.BuilderConfig(
-            name="japanese", version=VERSION, description="A subset of article-summary in Japanese"
-        ),
-        datasets.BuilderConfig(name="korean", version=VERSION, description="A subset of article-summary in Korean"),
-        datasets.BuilderConfig(
-            name="portuguese", version=VERSION, description="A subset of article-summary in Portuguese"
-        ),
-        datasets.BuilderConfig(name="russian", version=VERSION, description="A subset of article-summary in Russian"),
-        datasets.BuilderConfig(name="spanish", version=VERSION, description="A subset of article-summary in Spanish"),
-        datasets.BuilderConfig(name="thai", version=VERSION, description="A subset of article-summary in Thai"),
-        datasets.BuilderConfig(name="turkish", version=VERSION, description="A subset of article-summary in Turkish"),
-        datasets.BuilderConfig(
-            name="vietnamese", version=VERSION, description="A subset of article-summary in Vietnamese"
-        ),
+            name=lang,
+            version=datasets.Version("1.1.1"),
+            description=f"A subset of article-summary in {lang.capitalize()}",
+        )
+        for lang in _LANGUAGES
     ]
 
     DEFAULT_CONFIG_NAME = "english"
@@ -148,10 +126,6 @@ class WikiLingua(datasets.GeneratorBasedBuilder):
             description=_DESCRIPTION,
             # This defines the different columns of the dataset and their types
             features=features,  # Here we define them above because they are different between the two configurations
-            # If there's a common (input, target) tuple from the features,
-            # specify them here. They'll be used if as_supervised=True in
-            # builder.as_dataset.
-            supervised_keys=None,
            # Homepage of the dataset for documentation
             homepage=_HOMEPAGE,
             # License for the dataset if available
@@ -162,16 +136,13 @@ class WikiLingua(datasets.GeneratorBasedBuilder):
 
     def _split_generators(self, dl_manager):
         """Returns SplitGenerators."""
-        my_urls = _URLs[self.config.name]
-        # See create_dummy.py to create new dummy data
-        train_fname = dl_manager.download_and_extract(my_urls)
+        filepath = dl_manager.download_and_extract(_URL.format(language=self.config.name))
         return [
             datasets.SplitGenerator(
                 name=datasets.Split.TRAIN,
                 # These kwargs will be passed to _generate_examples
                 gen_kwargs={
-                    "filepath": train_fname,
-                    "split": "train",
+                    "filepath": filepath,
                 },
             ),
         ]
@@ -189,9 +160,9 @@ class WikiLingua(datasets.GeneratorBasedBuilder):
 
         return processed_article
 
-    def _generate_examples(self, filepath, split):
+    def _generate_examples(self, filepath):
         """Yields examples."""
         with open(filepath, "rb") as f:
-            data = pickle.load(f)
-            for id_, row in enumerate(data.items()):
-                yield id_, {"url": row[0], "article": self._process_article(row[1])}
+            for id_, line in enumerate(f):
+                row = json.loads(line)
+                yield id_, {"url": row["url"], "article": self._process_article(row["article"])}
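
Since the rewritten `_generate_examples` simply iterates over JSON lines opened from the hosted files, the dataset should also be usable in streaming mode, without a full download and extraction step. A sketch of that usage, with the same `wiki_lingua` dataset-id assumption as above:

```python
from datasets import load_dataset

# Stream the Arabic configuration; examples are yielded as data/arabic.jsonl.gz is read.
streamed = load_dataset("wiki_lingua", "arabic", split="train", streaming=True)

for example in streamed.take(3):
    print(example["url"])
```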