Commit 8e7f0d8, committed by system (HF staff)
0 Parent(s):

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,249 @@
+ ---
+ annotations_creators:
+ - crowdsourced
+ language_creators:
+ - crowdsourced
+ languages:
+   arabic:
+   - ar
+   chinese:
+   - zh
+   czech:
+   - cs
+   dutch:
+   - nl
+   english:
+   - en
+   french:
+   - fr
+   german:
+   - de
+   hindi:
+   - hi
+   indonesian:
+   - id
+   italian:
+   - it
+   japanese:
+   - ja
+   korean:
+   - ko
+   portuguese:
+   - pt
+   russian:
+   - ru
+   spanish:
+   - es
+   thai:
+   - th
+   turkish:
+   - tr
+   vietnamese:
+   - vi
+ licenses:
+ - cc-by-3-0
+ multilinguality:
+ - multilingual
+ size_categories:
+   arabic:
+   - 10K<n<100K
+   chinese:
+   - 10K<n<100K
+   czech:
+   - 1K<n<10K
+   dutch:
+   - 10K<n<100K
+   english:
+   - 100K<n<500K
+   french:
+   - 10K<n<100K
+   german:
+   - 10K<n<100K
+   hindi:
+   - 1K<n<10K
+   indonesian:
+   - 10K<n<100K
+   italian:
+   - 10K<n<100K
+   japanese:
+   - 10K<n<100K
+   korean:
+   - 10K<n<100K
+   portuguese:
+   - 10K<n<100K
+   russian:
+   - 10K<n<100K
+   spanish:
+   - 100K<n<500K
+   thai:
+   - 10K<n<100K
+   turkish:
+   - 1K<n<10K
+   vietnamese:
+   - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - conditional-text-generation
+ task_ids:
+ - summarization
+ ---
+ # Dataset Card for WikiLingua
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Repository:** [URL](https://github.com/esdurmus/Wikilingua)
+ - **Paper:** [WikiLingua: A Multilingual Abstractive Summarization Dataset](https://arxiv.org/abs/2010.03093)
+
+ ### Dataset Summary
+
+ We introduce WikiLingua, a large-scale, multilingual dataset for the evaluation of crosslingual abstractive summarization systems. We extract article and summary pairs in 18 languages from WikiHow, a high quality, collaborative resource of how-to guides on a diverse set of topics written by human authors. We create gold-standard article-summary alignments across languages by aligning the images that are used to describe each how-to step in an article.
+
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ The table below shows the number of article-summary pairs in each language that have a parallel article-summary pair in English.
+
+ | Language    | Num. parallel |
+ | ----------- | --------------|
+ | English     | 141,457       |
+ | Spanish     | 113,215       |
+ | Portuguese  | 81,695        |
+ | French      | 63,692        |
+ | German      | 58,375        |
+ | Russian     | 52,928        |
+ | Italian     | 50,968        |
+ | Indonesian  | 47,511        |
+ | Dutch       | 31,270        |
+ | Arabic      | 29,229        |
+ | Vietnamese  | 19,600        |
+ | Chinese     | 18,887        |
+ | Thai        | 14,770        |
+ | Japanese    | 12,669        |
+ | Korean      | 12,189        |
+ | Hindi       | 9,929         |
+ | Czech       | 7,200         |
+ | Turkish     | 4,503         |
+
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ ```
+ {
+ 'article': {
+ 'document': ['make sure that the area is a safe place, especially if you plan on walking home at night. It’s always a good idea to practice the buddy system. Have a friend meet up and walk with you. Research the bus, train, or streetcar routes available in your area to find safe and affordable travel to your destination. Make sure you check the schedule for your outgoing and return travel. Some public transportation will cease to run late at night. Be sure if you take public transportation to the venue that you will also be able to get home late at night. Check the routes. Even if some public transit is still running late at night, the routing may change. Some may run express past many of the stops, or not travel all the way to the ends. Be sure that your stop will still be available when you need it for your return trip. If you are taking public transit in a vulnerable state after drinking, it is always a good idea to travel in groups. Having friends available is a good way to stay safe and make sure that you reach your destination. This is more expensive option than a taxi or ride share service, but could be a fun and fancy way to stay safe and ensure that you will have a ride home. Plan this service in advance with a scheduled time to pick you up from your home and the venue. You want to be sure that the service will still be available when you need to get home. This may be easy in a large city, but taxis may be less frequent in smaller towns. This is especially true late at night, so this is a less reliable option than scheduling a ride in advance. Have a friend accompany you and help you flag a cab to make sure you are able to get one. Set up a plan to call a friend when you get home to make sure that you made it safely to your destination. If there are no taxis readily available call a local service to send a car to pick you up. You can share a ride with your friends, or other people using the app at the same moment. If you are in a vulnerable state it is best to share the ride with your friends to make sure you get home safe. You can request the car to yourself rather than sharing rides with strangers. If you travel home on your own or are the last of your group to be dropped off, make plans to call a friend when you get home so they know you made it safely to your destination. There may be a designated driver service in your area which can chauffeur your group. Make reservations with them in advance and keep their contact information handy while you are drinking.',
+ "Designating a driver is a very popular tactic to avoid drinking and driving. It is important to plan in advance, because your brain function will slow down and your decision making skills will be impaired once you start drinking. Decide before you begin drinking that you will not drive. Figure out who will be getting you home before you leave. Make sure this person is responsible and keep them in your sight while you are drinking. Have their contact information handy in case you can’t find them when you are ready to leave. Choose a friend who doesn’t drink alcohol. You likely have someone in your friend group who doesn’t drink. This person is the most likely to remain sober. Decide on one person who will remain sober. You can take turns within your friend group, alternating who will be the designated driver on each occasion. Be sure that the designated driver actually remains sober. The person who has drank the least is still not sober. If you don’t have your car with you, you can guarantee that you won’t make the choice to drive it home. If you are drinking at your home. Give your keys to a responsible friend to ensure that you don't choose to drive somewhere after you have been drinking. It may be tempting to stay longer or leave with someone else. Stick to the plan you made in advance and only leave with your sober, designated driver. Keep the phone number of your driver handy in case you can't find them when you are ready to leave. If your designated driver drinks alcohol, find alternate transportation to get home.",
+ 'If you have been drinking at all you are at least on the spectrum of drunkenness. You could be showing signs of impairment and slower brain function including lack of motor skills and slower reaction time, leading to the inability to operate a motor vehicle. Some of these signs could be: Poor balance or stumbling. Difficulty speaking clearly and slurred words. Abnormal behavior leading to you doing things you wouldn’t normally do if you were sober. As soon as you notice that you are showing signs of impairment, give your keys to a friend, the host or the bartender to ensure that you won’t drive until you are sober. Make sure to only give them your car key. Hold onto your house keys. If your friend, the host or the bartender are advising you not to drive, you are likely too drunk. Listen to their advice and acknowledge that they are trying to help you. Bystander intervention is common when it comes to drinking and driving. Many people will be willing to step in, take your keys and help you get home safely. If no one if offering to help, you may need to ask. Take a ride from a sober friend. It is best to get in a car with someone you trust when you are in this vulnerable state. Allow the host or bartender to call a cab or car service to take you home. If you are having a difficult time finding a safe way to get home, find a place to stay which does not involve you driving. Ask the host of the party if there is a place you can sleep. Give them your keys and ask that they keep them in a safe place until the morning. Stay with a friend if they live nearby and are on their way home. Find a hotel within walking distance. Call them to book a room, or have a friend help you secure one. Ask the friend if they will walk you to the hotel and make sure you get checked in safely. There are people in your life who care about you and want to be sure that you are safe. It may seem scary or embarrassing to call your parents or your siblings if you are too drunk to drive, but they will be glad you did. Your safety is the most important. You may need your phone to call someone for a ride or get help from a friend. Be sure to charge your phone before you leave the house. It is also a good idea to bring a charger with you in case your battery dies before the end of the night or you end up staying where you are and need to get home the next morning. You may also want to invest in a portable battery charger for your phone should there not be a power outlet available. Make sure it is fully charged before you leave your house. Keep it handy in your pocket or your bag throughout the night.'
+ ],
+ 'section_name': ['Finding Other Transportation',
+ 'Designating a Driver',
+ 'Staying Safe'
+ ],
+ 'summary': ['Walk to the venue where you will be drinking if it is close enough. Take public transit. Show up in style by hiring a limo or black car service. Flag a taxi cab for a convenient option to get where you’re going. Request a rideshare service like Uber or Lyft using an app on your phone. Reserve a designated driver service.',
+ 'Plan in advance. Assign a designated driver. Leave your car at home. Leave the venue with your designated driver.',
+ 'Pay attention to your body. Give up your keys. Listen to other people. Accept help. Stay where you are. Have an emergency back-up plan. Make sure that your phone is charged.'
+ ]
+ },
+ 'url': 'https://www.wikihow.com/Avoid-Drinking-and-Driving'
+ }
+ ```
+ ### Data Fields
+
+ - `url`: WikiHow URL of the article
+ - `article`: A dictionary containing `section_name`, `document` and `summary`
+   - `section_name`: List of section headings in an article
+   - `document`: List of documents, one for each section in the `section_name` list
+   - `summary`: List of summaries, one for each document in the `document` list
+
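For reference, a minimal loading sketch, assuming the Hugging Face `datasets` library is installed; configuration names follow the language names used above (e.g. `english`), and each configuration exposes a single `train` split:

```python
from datasets import load_dataset

# Load one language configuration; every configuration ships only a "train" split.
data = load_dataset("wiki_lingua", "english", split="train")

example = data[0]
print(example["url"])                      # WikiHow URL of the article
print(example["article"]["section_name"])  # list of section headings
print(example["article"]["summary"][0])    # summary of the first section
```

Because `article` is a sequence feature, each of its sub-fields comes back as a parallel list indexed by section.
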
+ ### Data Splits
+
+ [More Information Needed]
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ [More Information Needed]
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ [More Information Needed]
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ [More Information Needed]
+
+ ### Citation Information
+
+ [More Information Needed]
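
While the Citation Information section above is still marked as needing information, the loading script shipped alongside this card (`wiki_lingua.py`) carries the following BibTeX entry, reproduced here for convenience:

```bibtex
@article{ladhak-wiki-2020,
  title = {WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},
  authors = {Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},
  journal = {arXiv preprint arXiv:2010.03093},
  year = {2020},
  url = {https://arxiv.org/abs/2010.03093}
}
```
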
create_dummy.py ADDED
@@ -0,0 +1,83 @@
+ import itertools
+ import os
+ import pickle
+ import shutil
+ from glob import glob
+ from os.path import join as pjoin
+
+
+ _URLs = {
+ "arabic": "https://drive.google.com/uc?export=download&id=1__EjA6oZsgXQpggPm-h54jZu3kP6Y6zu",
+ "chinese": "https://drive.google.com/uc?export=download&id=1TuWH7uwu6V90QWmZn25qhou1rm97Egmn",
+ "czech": "https://drive.google.com/uc?export=download&id=1GcUN6mytEcOMBBOvjJOQzBmEkc-LdgQg",
+ "dutch": "https://drive.google.com/uc?export=download&id=1-w-0uqaC6hnRn1F_3XqJEvi09zlcTIhX",
+ "english": "https://drive.google.com/uc?export=download&id=11wMGqNVSwwk6zUnDaJEgm3qT71kAHeff",
+ "french": "https://drive.google.com/uc?export=download&id=1Uit4Og1pk-br_0UJIO5sdhApyhTuHzqo",
+ "german": "https://drive.google.com/uc?export=download&id=1meSNZHxd_0TZLKCRCYGN-Ke3IA5c1qOE",
+ "hindi": "https://drive.google.com/uc?export=download&id=1ZyFGufe4puX3vjGPbp4xg9Hca3Gwq22g",
+ "indonesian": "https://drive.google.com/uc?export=download&id=1PGa8j1_IqxiGTc3SU6NMB38sAzxCPS34",
+ "italian": "https://drive.google.com/uc?export=download&id=1okwGJiOZmTpNRNgJLCnjFF4Q0H1z4l6_",
+ "japanese": "https://drive.google.com/uc?export=download&id=1Z2ty5hU0tIGRZRDlFQZLO7b5vijRfvo0",
+ "korean": "https://drive.google.com/uc?export=download&id=1cqu_YAgvlyVSzzjcUyP1Cz7q0k8Pw7vN",
+ "portuguese": "https://drive.google.com/uc?export=download&id=1GTHUJxxmjLmG2lnF9dwRgIDRFZaOY3-F",
+ "russian": "https://drive.google.com/uc?export=download&id=1fUR3MqJ8jTMka6owA0S-Fe6aHmiophc_",
+ "spanish": "https://drive.google.com/uc?export=download&id=17FGi8KI9N9SuGe7elM8qU8_3fx4sfgTr",
+ "thai": "https://drive.google.com/uc?export=download&id=1QsV8C5EPJrQl37mwva_5-IJOrCaOi2tH",
+ "turkish": "https://drive.google.com/uc?export=download&id=1M1M5yIOyjKWGprc3LUeVVwxgKXxgpqxm",
+ "vietnamese": "https://drive.google.com/uc?export=download&id=17FGi8KI9N9SuGe7elM8qU8_3fx4sfgTr",
+ }
+
+
+ def sanitize_url(url):
+     """Convert the url into correct format"""
+     url = url.replace("https://drive.google.com/", "")
+     url = url.replace("?", "%3F")
+     url = url.replace("=", "%3D")
+     url = url.replace("&", "%26")
+     return url
+
+
+ def create():
+     """Creates the dummy pickle file with a subset of data"""
+     # 1. Download the google drive folder : https://drive.google.com/drive/folders/1PFvXUOsW_KSEzFm5ixB8J8BDB8zRRfHW
+     # and specify the decompressed folder location
+     downloaded_data_path = "/Users/katnoria/Downloads/WikiLingua"
+     files = glob(f"{downloaded_data_path}/*.pkl")
+     base_path = "/Users/katnoria/dev/projects/workspaces/python/datasets"
+     for key in _URLs.keys():
+         # data = load_dataset('./datasets/wiki_lingua', key)
+         print(f"Finding {key}.pkl")
+         filepath = [name for name in files if name.endswith(f"{key}.pkl")][0]
+         with open(filepath, "rb") as f:
+             data = pickle.load(f)
+
+         data_subset = dict(itertools.islice(data.items(), 3))
+         fname = sanitize_url(_URLs[key])
+         dirname = pjoin(base_path, f"datasets/wiki_lingua/dummy/{key}/1.1.0/dummy_data")
+         if not os.path.exists(dirname):
+             print(f"created folder {dirname}")
+             os.makedirs(dirname)
+         fname = pjoin(dirname, fname)
+         print(f"creating for {key}:{fname}")
+         with open(fname, "wb") as f:
+             pickle.dump(data_subset, f)
+         print("SUCCESS")
+
+
+ def zip():
+     """Zip the file"""
+     base_path = "/Users/katnoria/dev/projects/workspaces/python/datasets"
+     for key in _URLs.keys():
+         # dirname = pjoin(base_path, f"datasets/wiki_lingua/dummy/{key}/1.1.0/dummy_data")
+         dirname = pjoin(base_path, f"datasets/wiki_lingua/dummy/{key}/1.1.0")
+         print(f"Zipping {dirname}")
+         shutil.make_archive(f"{dirname}/dummy_data", "zip", dirname, "dummy_data")
+         shutil.rmtree(f"{dirname}/dummy_data")
+         print(f"Deleted folder {dirname}/dummy_data")
+
+
+ # Utility script to create the dummy data and zip the contents
+ # 1. Create data
+ create()
+ # 2. Zip contents
+ zip()
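
A quick illustration of what the `sanitize_url` helper above produces; the Drive ID below is a hypothetical placeholder, the point is only that the query string gets percent-encoded into a usable dummy-data file name:

```python
# Illustration only: "XYZ" is a made-up ID, not a real download link.
print(sanitize_url("https://drive.google.com/uc?export=download&id=XYZ"))
# -> uc%3Fexport%3Ddownload%26id%3DXYZ
```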
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"arabic": {"description": "WikiLingua is a large-scale multilingual dataset for the evaluation of\ncrosslingual abstractive summarization systems. The dataset includes ~770k\narticle and summary pairs in 18 languages from WikiHow. The gold-standard\narticle-summary alignments across languages was done by aligning the images\nthat are used to describe each how-to step in an article.\n", "citation": "@article{ladhak-wiki-2020,\n title = {WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},\n authors = {Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},\n journal = {arXiv preprint arXiv:2010.03093},\n year = {2020},\n url = {https://arxiv.org/abs/2010.03093}\n}\n", "homepage": "https://github.com/esdurmus/Wikilingua", "license": "CC BY-NC-SA 3.0", "features": {"url": {"dtype": "string", "id": null, "_type": "Value"}, "article": {"feature": {"section_name": {"dtype": "string", "id": null, "_type": "Value"}, "document": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}, "english_url": {"dtype": "string", "id": null, "_type": "Value"}, "english_section_name": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "wiki_lingua", "config_name": "arabic", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 119116119, "num_examples": 9995, "dataset_name": "wiki_lingua"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=1__EjA6oZsgXQpggPm-h54jZu3kP6Y6zu": {"num_bytes": 119358890, "checksum": "25fc655eb53227acf5dbe4de09732dedee6cbd83b4c1e8c3bb018eada79555d1"}}, "download_size": 119358890, "post_processing_size": null, "dataset_size": 119116119, "size_in_bytes": 238475009}, "chinese": {"description": "WikiLingua is a large-scale multilingual dataset for the evaluation of\ncrosslingual abstractive summarization systems. The dataset includes ~770k\narticle and summary pairs in 18 languages from WikiHow. 
The gold-standard\narticle-summary alignments across languages was done by aligning the images\nthat are used to describe each how-to step in an article.\n", "citation": "@article{ladhak-wiki-2020,\n title = {WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},\n authors = {Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},\n journal = {arXiv preprint arXiv:2010.03093},\n year = {2020},\n url = {https://arxiv.org/abs/2010.03093}\n}\n", "homepage": "https://github.com/esdurmus/Wikilingua", "license": "CC BY-NC-SA 3.0", "features": {"url": {"dtype": "string", "id": null, "_type": "Value"}, "article": {"feature": {"section_name": {"dtype": "string", "id": null, "_type": "Value"}, "document": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}, "english_url": {"dtype": "string", "id": null, "_type": "Value"}, "english_section_name": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "wiki_lingua", "config_name": "chinese", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 41170689, "num_examples": 6541, "dataset_name": "wiki_lingua"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=1TuWH7uwu6V90QWmZn25qhou1rm97Egmn": {"num_bytes": 41345464, "checksum": "be54a90ec9ac9baa2fb006c11363d44b9475c1fb8ac2aa84beeea1e065c58972"}}, "download_size": 41345464, "post_processing_size": null, "dataset_size": 41170689, "size_in_bytes": 82516153}, "czech": {"description": "WikiLingua is a large-scale multilingual dataset for the evaluation of\ncrosslingual abstractive summarization systems. The dataset includes ~770k\narticle and summary pairs in 18 languages from WikiHow. 
The gold-standard\narticle-summary alignments across languages was done by aligning the images\nthat are used to describe each how-to step in an article.\n", "citation": "@article{ladhak-wiki-2020,\n title = {WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},\n authors = {Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},\n journal = {arXiv preprint arXiv:2010.03093},\n year = {2020},\n url = {https://arxiv.org/abs/2010.03093}\n}\n", "homepage": "https://github.com/esdurmus/Wikilingua", "license": "CC BY-NC-SA 3.0", "features": {"url": {"dtype": "string", "id": null, "_type": "Value"}, "article": {"feature": {"section_name": {"dtype": "string", "id": null, "_type": "Value"}, "document": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}, "english_url": {"dtype": "string", "id": null, "_type": "Value"}, "english_section_name": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "wiki_lingua", "config_name": "czech", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 20816390, "num_examples": 2520, "dataset_name": "wiki_lingua"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=1GcUN6mytEcOMBBOvjJOQzBmEkc-LdgQg": {"num_bytes": 20894511, "checksum": "bb3f9300b8631667d25b9e2b73c98ad90e0b5a3203bba21ed896f12b4a4e39a1"}}, "download_size": 20894511, "post_processing_size": null, "dataset_size": 20816390, "size_in_bytes": 41710901}, "dutch": {"description": "WikiLingua is a large-scale multilingual dataset for the evaluation of\ncrosslingual abstractive summarization systems. The dataset includes ~770k\narticle and summary pairs in 18 languages from WikiHow. 
The gold-standard\narticle-summary alignments across languages was done by aligning the images\nthat are used to describe each how-to step in an article.\n", "citation": "@article{ladhak-wiki-2020,\n title = {WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},\n authors = {Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},\n journal = {arXiv preprint arXiv:2010.03093},\n year = {2020},\n url = {https://arxiv.org/abs/2010.03093}\n}\n", "homepage": "https://github.com/esdurmus/Wikilingua", "license": "CC BY-NC-SA 3.0", "features": {"url": {"dtype": "string", "id": null, "_type": "Value"}, "article": {"feature": {"section_name": {"dtype": "string", "id": null, "_type": "Value"}, "document": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}, "english_url": {"dtype": "string", "id": null, "_type": "Value"}, "english_section_name": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "wiki_lingua", "config_name": "dutch", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 87258040, "num_examples": 10862, "dataset_name": "wiki_lingua"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=1-w-0uqaC6hnRn1F_3XqJEvi09zlcTIhX": {"num_bytes": 87533442, "checksum": "1282abaa1f70e0d46db2f199a8e0bacd5c06a97220cf874854c41e12c072f10a"}}, "download_size": 87533442, "post_processing_size": null, "dataset_size": 87258040, "size_in_bytes": 174791482}, "english": {"description": "WikiLingua is a large-scale multilingual dataset for the evaluation of\ncrosslingual abstractive summarization systems. The dataset includes ~770k\narticle and summary pairs in 18 languages from WikiHow. 
The gold-standard\narticle-summary alignments across languages was done by aligning the images\nthat are used to describe each how-to step in an article.\n", "citation": "@article{ladhak-wiki-2020,\n title = {WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},\n authors = {Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},\n journal = {arXiv preprint arXiv:2010.03093},\n year = {2020},\n url = {https://arxiv.org/abs/2010.03093}\n}\n", "homepage": "https://github.com/esdurmus/Wikilingua", "license": "CC BY-NC-SA 3.0", "features": {"url": {"dtype": "string", "id": null, "_type": "Value"}, "article": {"feature": {"section_name": {"dtype": "string", "id": null, "_type": "Value"}, "document": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "wiki_lingua", "config_name": "english", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 333700114, "num_examples": 57945, "dataset_name": "wiki_lingua"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=11wMGqNVSwwk6zUnDaJEgm3qT71kAHeff": {"num_bytes": 338036185, "checksum": "1f0b51ac4b733e06a067826d9e137ee300d751f12f240e95be4b258f7bb5191d"}}, "download_size": 338036185, "post_processing_size": null, "dataset_size": 333700114, "size_in_bytes": 671736299}, "french": {"description": "WikiLingua is a large-scale multilingual dataset for the evaluation of\ncrosslingual abstractive summarization systems. The dataset includes ~770k\narticle and summary pairs in 18 languages from WikiHow. 
The gold-standard\narticle-summary alignments across languages was done by aligning the images\nthat are used to describe each how-to step in an article.\n", "citation": "@article{ladhak-wiki-2020,\n title = {WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},\n authors = {Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},\n journal = {arXiv preprint arXiv:2010.03093},\n year = {2020},\n url = {https://arxiv.org/abs/2010.03093}\n}\n", "homepage": "https://github.com/esdurmus/Wikilingua", "license": "CC BY-NC-SA 3.0", "features": {"url": {"dtype": "string", "id": null, "_type": "Value"}, "article": {"feature": {"section_name": {"dtype": "string", "id": null, "_type": "Value"}, "document": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}, "english_url": {"dtype": "string", "id": null, "_type": "Value"}, "english_section_name": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "wiki_lingua", "config_name": "french", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 197550376, "num_examples": 21690, "dataset_name": "wiki_lingua"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=1Uit4Og1pk-br_0UJIO5sdhApyhTuHzqo": {"num_bytes": 198114157, "checksum": "e7e71d214142d06ddfd00411c2ceb3f1abee44eef9f6dbdd61ea5c5b30521230"}}, "download_size": 198114157, "post_processing_size": null, "dataset_size": 197550376, "size_in_bytes": 395664533}, "german": {"description": "WikiLingua is a large-scale multilingual dataset for the evaluation of\ncrosslingual abstractive summarization systems. The dataset includes ~770k\narticle and summary pairs in 18 languages from WikiHow. 
The gold-standard\narticle-summary alignments across languages was done by aligning the images\nthat are used to describe each how-to step in an article.\n", "citation": "@article{ladhak-wiki-2020,\n title = {WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},\n authors = {Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},\n journal = {arXiv preprint arXiv:2010.03093},\n year = {2020},\n url = {https://arxiv.org/abs/2010.03093}\n}\n", "homepage": "https://github.com/esdurmus/Wikilingua", "license": "CC BY-NC-SA 3.0", "features": {"url": {"dtype": "string", "id": null, "_type": "Value"}, "article": {"feature": {"section_name": {"dtype": "string", "id": null, "_type": "Value"}, "document": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}, "english_url": {"dtype": "string", "id": null, "_type": "Value"}, "english_section_name": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "wiki_lingua", "config_name": "german", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 168674340, "num_examples": 20103, "dataset_name": "wiki_lingua"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=1meSNZHxd_0TZLKCRCYGN-Ke3IA5c1qOE": {"num_bytes": 169195050, "checksum": "88ee4628700c0e58b529a75e3f9f27022be3e7a591a8981f503b078a7116c4eb"}}, "download_size": 169195050, "post_processing_size": null, "dataset_size": 168674340, "size_in_bytes": 337869390}, "hindi": {"description": "WikiLingua is a large-scale multilingual dataset for the evaluation of\ncrosslingual abstractive summarization systems. The dataset includes ~770k\narticle and summary pairs in 18 languages from WikiHow. 
The gold-standard\narticle-summary alignments across languages was done by aligning the images\nthat are used to describe each how-to step in an article.\n", "citation": "@article{ladhak-wiki-2020,\n title = {WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},\n authors = {Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},\n journal = {arXiv preprint arXiv:2010.03093},\n year = {2020},\n url = {https://arxiv.org/abs/2010.03093}\n}\n", "homepage": "https://github.com/esdurmus/Wikilingua", "license": "CC BY-NC-SA 3.0", "features": {"url": {"dtype": "string", "id": null, "_type": "Value"}, "article": {"feature": {"section_name": {"dtype": "string", "id": null, "_type": "Value"}, "document": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}, "english_url": {"dtype": "string", "id": null, "_type": "Value"}, "english_section_name": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "wiki_lingua", "config_name": "hindi", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 63785051, "num_examples": 3402, "dataset_name": "wiki_lingua"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=1ZyFGufe4puX3vjGPbp4xg9Hca3Gwq22g": {"num_bytes": 63874759, "checksum": "a6a9b0cb313ecad82985269153e03e4c02376f0e52e53168100eacafc1c55037"}}, "download_size": 63874759, "post_processing_size": null, "dataset_size": 63785051, "size_in_bytes": 127659810}, "indonesian": {"description": "WikiLingua is a large-scale multilingual dataset for the evaluation of\ncrosslingual abstractive summarization systems. The dataset includes ~770k\narticle and summary pairs in 18 languages from WikiHow. 
The gold-standard\narticle-summary alignments across languages was done by aligning the images\nthat are used to describe each how-to step in an article.\n", "citation": "@article{ladhak-wiki-2020,\n title = {WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},\n authors = {Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},\n journal = {arXiv preprint arXiv:2010.03093},\n year = {2020},\n url = {https://arxiv.org/abs/2010.03093}\n}\n", "homepage": "https://github.com/esdurmus/Wikilingua", "license": "CC BY-NC-SA 3.0", "features": {"url": {"dtype": "string", "id": null, "_type": "Value"}, "article": {"feature": {"section_name": {"dtype": "string", "id": null, "_type": "Value"}, "document": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}, "english_url": {"dtype": "string", "id": null, "_type": "Value"}, "english_section_name": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "wiki_lingua", "config_name": "indonesian", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 136408861, "num_examples": 16308, "dataset_name": "wiki_lingua"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=1PGa8j1_IqxiGTc3SU6NMB38sAzxCPS34": {"num_bytes": 136833587, "checksum": "cfa0b6eeb590e0db212b616d455fa00ed376186638c7c4b2771986fb4bd4b7e6"}}, "download_size": 136833587, "post_processing_size": null, "dataset_size": 136408861, "size_in_bytes": 273242448}, "italian": {"description": "WikiLingua is a large-scale multilingual dataset for the evaluation of\ncrosslingual abstractive summarization systems. The dataset includes ~770k\narticle and summary pairs in 18 languages from WikiHow. 
The gold-standard\narticle-summary alignments across languages was done by aligning the images\nthat are used to describe each how-to step in an article.\n", "citation": "@article{ladhak-wiki-2020,\n title = {WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},\n authors = {Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},\n journal = {arXiv preprint arXiv:2010.03093},\n year = {2020},\n url = {https://arxiv.org/abs/2010.03093}\n}\n", "homepage": "https://github.com/esdurmus/Wikilingua", "license": "CC BY-NC-SA 3.0", "features": {"url": {"dtype": "string", "id": null, "_type": "Value"}, "article": {"feature": {"section_name": {"dtype": "string", "id": null, "_type": "Value"}, "document": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}, "english_url": {"dtype": "string", "id": null, "_type": "Value"}, "english_section_name": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "wiki_lingua", "config_name": "italian", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 138119527, "num_examples": 17673, "dataset_name": "wiki_lingua"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=1okwGJiOZmTpNRNgJLCnjFF4Q0H1z4l6_": {"num_bytes": 138578956, "checksum": "f6960f3d025f65452d3a536065925e86c425f7f559f574ed078172aa30d6a6ae"}}, "download_size": 138578956, "post_processing_size": null, "dataset_size": 138119527, "size_in_bytes": 276698483}, "japanese": {"description": "WikiLingua is a large-scale multilingual dataset for the evaluation of\ncrosslingual abstractive summarization systems. The dataset includes ~770k\narticle and summary pairs in 18 languages from WikiHow. 
The gold-standard\narticle-summary alignments across languages was done by aligning the images\nthat are used to describe each how-to step in an article.\n", "citation": "@article{ladhak-wiki-2020,\n title = {WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},\n authors = {Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},\n journal = {arXiv preprint arXiv:2010.03093},\n year = {2020},\n url = {https://arxiv.org/abs/2010.03093}\n}\n", "homepage": "https://github.com/esdurmus/Wikilingua", "license": "CC BY-NC-SA 3.0", "features": {"url": {"dtype": "string", "id": null, "_type": "Value"}, "article": {"feature": {"section_name": {"dtype": "string", "id": null, "_type": "Value"}, "document": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}, "english_url": {"dtype": "string", "id": null, "_type": "Value"}, "english_section_name": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "wiki_lingua", "config_name": "japanese", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 40145031, "num_examples": 4372, "dataset_name": "wiki_lingua"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=1Z2ty5hU0tIGRZRDlFQZLO7b5vijRfvo0": {"num_bytes": 40259570, "checksum": "dc080f6db644261e31b0d9564eec0c07f87e939cd4af535ad239ee8813c92a33"}}, "download_size": 40259570, "post_processing_size": null, "dataset_size": 40145031, "size_in_bytes": 80404601}, "korean": {"description": "WikiLingua is a large-scale multilingual dataset for the evaluation of\ncrosslingual abstractive summarization systems. The dataset includes ~770k\narticle and summary pairs in 18 languages from WikiHow. 
The gold-standard\narticle-summary alignments across languages was done by aligning the images\nthat are used to describe each how-to step in an article.\n", "citation": "@article{ladhak-wiki-2020,\n title = {WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},\n authors = {Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},\n journal = {arXiv preprint arXiv:2010.03093},\n year = {2020},\n url = {https://arxiv.org/abs/2010.03093}\n}\n", "homepage": "https://github.com/esdurmus/Wikilingua", "license": "CC BY-NC-SA 3.0", "features": {"url": {"dtype": "string", "id": null, "_type": "Value"}, "article": {"feature": {"section_name": {"dtype": "string", "id": null, "_type": "Value"}, "document": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}, "english_url": {"dtype": "string", "id": null, "_type": "Value"}, "english_section_name": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "wiki_lingua", "config_name": "korean", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 38647614, "num_examples": 4111, "dataset_name": "wiki_lingua"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=1cqu_YAgvlyVSzzjcUyP1Cz7q0k8Pw7vN": {"num_bytes": 38748961, "checksum": "b6f97c124033c99034696034a19b4e32d0573281281fe2655f7d70032dc65d01"}}, "download_size": 38748961, "post_processing_size": null, "dataset_size": 38647614, "size_in_bytes": 77396575}, "portuguese": {"description": "WikiLingua is a large-scale multilingual dataset for the evaluation of\ncrosslingual abstractive summarization systems. The dataset includes ~770k\narticle and summary pairs in 18 languages from WikiHow. 
The gold-standard\narticle-summary alignments across languages was done by aligning the images\nthat are used to describe each how-to step in an article.\n", "citation": "@article{ladhak-wiki-2020,\n title = {WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},\n authors = {Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},\n journal = {arXiv preprint arXiv:2010.03093},\n year = {2020},\n url = {https://arxiv.org/abs/2010.03093}\n}\n", "homepage": "https://github.com/esdurmus/Wikilingua", "license": "CC BY-NC-SA 3.0", "features": {"url": {"dtype": "string", "id": null, "_type": "Value"}, "article": {"feature": {"section_name": {"dtype": "string", "id": null, "_type": "Value"}, "document": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}, "english_url": {"dtype": "string", "id": null, "_type": "Value"}, "english_section_name": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "wiki_lingua", "config_name": "portuguese", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 204270845, "num_examples": 28143, "dataset_name": "wiki_lingua"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=1GTHUJxxmjLmG2lnF9dwRgIDRFZaOY3-F": {"num_bytes": 204997686, "checksum": "c5f912b3b00e11f02a9ddd2b879b605f3fd2354eb0b5f8acac13e01e49ea1e59"}}, "download_size": 204997686, "post_processing_size": null, "dataset_size": 204270845, "size_in_bytes": 409268531}, "russian": {"description": "WikiLingua is a large-scale multilingual dataset for the evaluation of\ncrosslingual abstractive summarization systems. The dataset includes ~770k\narticle and summary pairs in 18 languages from WikiHow. 
The gold-standard\narticle-summary alignments across languages was done by aligning the images\nthat are used to describe each how-to step in an article.\n", "citation": "@article{ladhak-wiki-2020,\n title = {WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},\n authors = {Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},\n journal = {arXiv preprint arXiv:2010.03093},\n year = {2020},\n url = {https://arxiv.org/abs/2010.03093}\n}\n", "homepage": "https://github.com/esdurmus/Wikilingua", "license": "CC BY-NC-SA 3.0", "features": {"url": {"dtype": "string", "id": null, "_type": "Value"}, "article": {"feature": {"section_name": {"dtype": "string", "id": null, "_type": "Value"}, "document": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}, "english_url": {"dtype": "string", "id": null, "_type": "Value"}, "english_section_name": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "wiki_lingua", "config_name": "russian", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 241924032, "num_examples": 18143, "dataset_name": "wiki_lingua"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=1fUR3MqJ8jTMka6owA0S-Fe6aHmiophc_": {"num_bytes": 242377242, "checksum": "246647637d6de8bb84e26f68546c5a5ba04e196d1769716975e52447d43e4d71"}}, "download_size": 242377242, "post_processing_size": null, "dataset_size": 241924032, "size_in_bytes": 484301274}, "spanish": {"description": "WikiLingua is a large-scale multilingual dataset for the evaluation of\ncrosslingual abstractive summarization systems. The dataset includes ~770k\narticle and summary pairs in 18 languages from WikiHow. 
The gold-standard\narticle-summary alignments across languages was done by aligning the images\nthat are used to describe each how-to step in an article.\n", "citation": "@article{ladhak-wiki-2020,\n title = {WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},\n authors = {Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},\n journal = {arXiv preprint arXiv:2010.03093},\n year = {2020},\n url = {https://arxiv.org/abs/2010.03093}\n}\n", "homepage": "https://github.com/esdurmus/Wikilingua", "license": "CC BY-NC-SA 3.0", "features": {"url": {"dtype": "string", "id": null, "_type": "Value"}, "article": {"feature": {"section_name": {"dtype": "string", "id": null, "_type": "Value"}, "document": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}, "english_url": {"dtype": "string", "id": null, "_type": "Value"}, "english_section_name": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "wiki_lingua", "config_name": "spanish", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 69868788, "num_examples": 6616, "dataset_name": "wiki_lingua"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=17FGi8KI9N9SuGe7elM8qU8_3fx4sfgTr": {"num_bytes": 70024093, "checksum": "590e51dbef3cd17ef271088778289596d1363d72708e7f7d625d28a837e395a5"}}, "download_size": 70024093, "post_processing_size": null, "dataset_size": 69868788, "size_in_bytes": 139892881}, "thai": {"description": "WikiLingua is a large-scale multilingual dataset for the evaluation of\ncrosslingual abstractive summarization systems. The dataset includes ~770k\narticle and summary pairs in 18 languages from WikiHow. 
The gold-standard\narticle-summary alignments across languages was done by aligning the images\nthat are used to describe each how-to step in an article.\n", "citation": "@article{ladhak-wiki-2020,\n title = {WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},\n authors = {Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},\n journal = {arXiv preprint arXiv:2010.03093},\n year = {2020},\n url = {https://arxiv.org/abs/2010.03093}\n}\n", "homepage": "https://github.com/esdurmus/Wikilingua", "license": "CC BY-NC-SA 3.0", "features": {"url": {"dtype": "string", "id": null, "_type": "Value"}, "article": {"feature": {"section_name": {"dtype": "string", "id": null, "_type": "Value"}, "document": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}, "english_url": {"dtype": "string", "id": null, "_type": "Value"}, "english_section_name": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "wiki_lingua", "config_name": "thai", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 86982851, "num_examples": 5093, "dataset_name": "wiki_lingua"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=1QsV8C5EPJrQl37mwva_5-IJOrCaOi2tH": {"num_bytes": 87104200, "checksum": "464a35114cb35792f0a875ebf653c60be8b83e6eb5baa458dce2629c3b798161"}}, "download_size": 87104200, "post_processing_size": null, "dataset_size": 86982851, "size_in_bytes": 174087051}, "turkish": {"description": "WikiLingua is a large-scale multilingual dataset for the evaluation of\ncrosslingual abstractive summarization systems. The dataset includes ~770k\narticle and summary pairs in 18 languages from WikiHow. 
The gold-standard\narticle-summary alignments across languages was done by aligning the images\nthat are used to describe each how-to step in an article.\n", "citation": "@article{ladhak-wiki-2020,\n title = {WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},\n authors = {Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},\n journal = {arXiv preprint arXiv:2010.03093},\n year = {2020},\n url = {https://arxiv.org/abs/2010.03093}\n}\n", "homepage": "https://github.com/esdurmus/Wikilingua", "license": "CC BY-NC-SA 3.0", "features": {"url": {"dtype": "string", "id": null, "_type": "Value"}, "article": {"feature": {"section_name": {"dtype": "string", "id": null, "_type": "Value"}, "document": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}, "english_url": {"dtype": "string", "id": null, "_type": "Value"}, "english_section_name": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "wiki_lingua", "config_name": "turkish", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 11371821, "num_examples": 1512, "dataset_name": "wiki_lingua"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=1M1M5yIOyjKWGprc3LUeVVwxgKXxgpqxm": {"num_bytes": 11405793, "checksum": "858406c011fc2c1ef0c8bf3acb77edcf1d05c5189e61be54e1655d6e8a98076d"}}, "download_size": 11405793, "post_processing_size": null, "dataset_size": 11371821, "size_in_bytes": 22777614}, "vietnamese": {"description": "WikiLingua is a large-scale multilingual dataset for the evaluation of\ncrosslingual abstractive summarization systems. The dataset includes ~770k\narticle and summary pairs in 18 languages from WikiHow. 
The gold-standard\narticle-summary alignments across languages was done by aligning the images\nthat are used to describe each how-to step in an article.\n", "citation": "@article{ladhak-wiki-2020,\n title = {WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},\n authors = {Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},\n journal = {arXiv preprint arXiv:2010.03093},\n year = {2020},\n url = {https://arxiv.org/abs/2010.03093}\n}\n", "homepage": "https://github.com/esdurmus/Wikilingua", "license": "CC BY-NC-SA 3.0", "features": {"url": {"dtype": "string", "id": null, "_type": "Value"}, "article": {"feature": {"section_name": {"dtype": "string", "id": null, "_type": "Value"}, "document": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}, "english_url": {"dtype": "string", "id": null, "_type": "Value"}, "english_section_name": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "wiki_lingua", "config_name": "vietnamese", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 69868788, "num_examples": 6616, "dataset_name": "wiki_lingua"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=17FGi8KI9N9SuGe7elM8qU8_3fx4sfgTr": {"num_bytes": 70024093, "checksum": "590e51dbef3cd17ef271088778289596d1363d72708e7f7d625d28a837e395a5"}}, "download_size": 70024093, "post_processing_size": null, "dataset_size": 69868788, "size_in_bytes": 139892881}}
dummy/arabic/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ceb1a188b749e27e8956876a3643ffb4b4542a2ead993ea44f41f1c60fa7dfd4
+ size 17002
dummy/chinese/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:89f7dd1c212075cfc448b8d6f1829c3942531ab6965b496ef3b451c273dd57c9
+ size 10649
dummy/czech/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5e804df6259af37c14918873bbba59481bd8e3a187f970e7246fed61afb8e768
+ size 7070
dummy/dutch/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d92c6077d0e802cc807e6fd98a01c7238dbbf2cd5a2521959e563d199bb45517
+ size 8787
dummy/english/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:191bb0faebf3ef43eb10a76d23fff14e0c90b844500b67107c504c3e68344ed4
+ size 8226
dummy/french/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:27482c13432f912ca387575e70a69b742fca5d722d4f7e296cffaf7481862368
+ size 11795
dummy/german/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f07f070993595fb44c3ef0de335a11485910e96a2b319813dee26962f0dca6a3
+ size 11144
dummy/hindi/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:651ce4525141276ec82c34c3345900004758b9cf43939ddf0cf9ed044adbfbaf
+ size 11852
dummy/indonesian/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d2e0a54e352d11d909c4cbfa05e6e384c0a501c56457d9df9e4648e575163e2a
+ size 10379
dummy/italian/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:11766a846d7f300cf8feb04cc1bc81187c09424766badff2af683b05200fa34a
+ size 9429
dummy/japanese/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3a1423a25de8ff94524a4c1fa42ca04bbf35d7b212e4d3a09e586063bfb4e449
+ size 9285
dummy/korean/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:683a0fe5207eca1cad952c3e6df562f3ecba3b0473d368dffd37c84f2462f708
+ size 7372
dummy/portuguese/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f6fa21b274463f4bce68b61e44948506b2c35392fcd07c317732bd094464a2a3
+ size 6552
dummy/russian/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:132c5ca0ddb46ca6416422c57bf95d5e16facb189f76cbb87834a2009c828e26
+ size 11645
dummy/spanish/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b0cb319cba863fe3e35eeb4bbf51bae2f2c4f44ea10fd7fe5e82d4d0ae2553ba
+ size 6866
dummy/thai/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1903f18f23ae9e9c31fd0da3d6101ec4c0305a50d87dc41a9d74195179de746f
+ size 11098
dummy/turkish/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:18d5fb0a6774bd8ec967d8f1f9450db741b65a48a4a7c4503429d2dd17a41dcd
+ size 6864
dummy/vietnamese/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:aa881999cd83a41ea4e996533fc9f43ae44937a46a6b5c4c521bfe90f472ead0
+ size 10333
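The `dummy_data.zip` files above are stored via Git LFS, so the diff shows only the pointer files (spec version, sha256 `oid`, and byte `size`), not the archives themselves. A minimal sketch of how such a pointer could be parsed and checked against a locally fetched blob — the helper names are illustrative, not part of this repo:

```python
import hashlib
from pathlib import Path


def parse_lfs_pointer(text: str) -> dict:
    """Parse the three key/value lines of a Git LFS pointer file (version, oid, size)."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)
    return {"version": fields["version"], "algo": algo, "digest": digest, "size": int(fields["size"])}


def matches_pointer(pointer: dict, blob_path: Path) -> bool:
    """True if the downloaded file has the byte size and sha256 digest recorded in the pointer."""
    blob = blob_path.read_bytes()
    return len(blob) == pointer["size"] and hashlib.sha256(blob).hexdigest() == pointer["digest"]
```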
wiki_lingua.py ADDED
@@ -0,0 +1,198 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """WikiLingua: a large-scale multilingual dataset for the evaluation of cross-lingual abstractive summarization systems."""
+
+ from __future__ import absolute_import, division, print_function
+
+ import pickle
+
+ import datasets
+
+
+ # Citation for the WikiLingua paper (arXiv:2010.03093)
+ _CITATION = """\
+ @article{ladhak-wiki-2020,
+ title = {WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},
+ authors = {Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},
+ journal = {arXiv preprint arXiv:2010.03093},
+ year = {2020},
+ url = {https://arxiv.org/abs/2010.03093}
+ }
+ """
+
+ _DESCRIPTION = """\
+ WikiLingua is a large-scale multilingual dataset for the evaluation of
+ crosslingual abstractive summarization systems. The dataset includes ~770k
+ article and summary pairs in 18 languages from WikiHow. The gold-standard
+ article-summary alignments across languages was done by aligning the images
+ that are used to describe each how-to step in an article.
+ """
+
+ _HOMEPAGE = "https://github.com/esdurmus/Wikilingua"
+
+ _LICENSE = "CC BY-NC-SA 3.0"
+
+ # Download links
+ _URLs = {
+     "arabic": "https://drive.google.com/uc?export=download&id=1__EjA6oZsgXQpggPm-h54jZu3kP6Y6zu",
+     "chinese": "https://drive.google.com/uc?export=download&id=1TuWH7uwu6V90QWmZn25qhou1rm97Egmn",
+     "czech": "https://drive.google.com/uc?export=download&id=1GcUN6mytEcOMBBOvjJOQzBmEkc-LdgQg",
+     "dutch": "https://drive.google.com/uc?export=download&id=1-w-0uqaC6hnRn1F_3XqJEvi09zlcTIhX",
+     "english": "https://drive.google.com/uc?export=download&id=11wMGqNVSwwk6zUnDaJEgm3qT71kAHeff",
+     "french": "https://drive.google.com/uc?export=download&id=1Uit4Og1pk-br_0UJIO5sdhApyhTuHzqo",
+     "german": "https://drive.google.com/uc?export=download&id=1meSNZHxd_0TZLKCRCYGN-Ke3IA5c1qOE",
+     "hindi": "https://drive.google.com/uc?export=download&id=1ZyFGufe4puX3vjGPbp4xg9Hca3Gwq22g",
+     "indonesian": "https://drive.google.com/uc?export=download&id=1PGa8j1_IqxiGTc3SU6NMB38sAzxCPS34",
+     "italian": "https://drive.google.com/uc?export=download&id=1okwGJiOZmTpNRNgJLCnjFF4Q0H1z4l6_",
+     "japanese": "https://drive.google.com/uc?export=download&id=1Z2ty5hU0tIGRZRDlFQZLO7b5vijRfvo0",
+     "korean": "https://drive.google.com/uc?export=download&id=1cqu_YAgvlyVSzzjcUyP1Cz7q0k8Pw7vN",
+     "portuguese": "https://drive.google.com/uc?export=download&id=1GTHUJxxmjLmG2lnF9dwRgIDRFZaOY3-F",
+     "russian": "https://drive.google.com/uc?export=download&id=1fUR3MqJ8jTMka6owA0S-Fe6aHmiophc_",
+     "spanish": "https://drive.google.com/uc?export=download&id=17FGi8KI9N9SuGe7elM8qU8_3fx4sfgTr",
+     "thai": "https://drive.google.com/uc?export=download&id=1QsV8C5EPJrQl37mwva_5-IJOrCaOi2tH",
+     "turkish": "https://drive.google.com/uc?export=download&id=1M1M5yIOyjKWGprc3LUeVVwxgKXxgpqxm",
+     "vietnamese": "https://drive.google.com/uc?export=download&id=17FGi8KI9N9SuGe7elM8qU8_3fx4sfgTr",
+ }
+
+
+ class WikiLingua(datasets.GeneratorBasedBuilder):
+     """WikiHow article/summary pairs aligned across 18 languages."""
+
+     VERSION = datasets.Version("1.1.0")
+
+     # This is an example of a dataset with multiple configurations.
+     # If you don't want/need to define several sub-sets in your dataset,
+     # just remove the BUILDER_CONFIG_CLASS and the BUILDER_CONFIGS attributes.
+
+     # If you need to make complex sub-parts in the datasets with configurable options
+     # You can create your own builder configuration class to store attributes, inheriting from datasets.BuilderConfig
+     # BUILDER_CONFIG_CLASS = MyBuilderConfig
+
+     # You will be able to load one or the other configurations in the following list with
+     # data = datasets.load_dataset('my_dataset', 'first_domain')
+     # data = datasets.load_dataset('my_dataset', 'second_domain')
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(name="arabic", version=VERSION, description="A subset of article-summary in Arabic"),
+         datasets.BuilderConfig(name="chinese", version=VERSION, description="A subset of article-summary in Chinese"),
+         datasets.BuilderConfig(name="czech", version=VERSION, description="A subset of article-summary in Czech"),
+         datasets.BuilderConfig(name="dutch", version=VERSION, description="A subset of article-summary in Dutch"),
+         datasets.BuilderConfig(name="english", version=VERSION, description="A subset of article-summary in English"),
+         datasets.BuilderConfig(name="french", version=VERSION, description="A subset of article-summary in French"),
+         datasets.BuilderConfig(name="german", version=VERSION, description="A subset of article-summary in German"),
+         datasets.BuilderConfig(name="hindi", version=VERSION, description="A subset of article-summary in Hindi"),
+         datasets.BuilderConfig(
+             name="indonesian", version=VERSION, description="A subset of article-summary in Indonesian"
+         ),
+         datasets.BuilderConfig(name="italian", version=VERSION, description="A subset of article-summary in Italian"),
+         datasets.BuilderConfig(
+             name="japanese", version=VERSION, description="A subset of article-summary in Japanese"
+         ),
+         datasets.BuilderConfig(name="korean", version=VERSION, description="A subset of article-summary in Korean"),
+         datasets.BuilderConfig(
+             name="portuguese", version=VERSION, description="A subset of article-summary in Portuguese"
+         ),
+         datasets.BuilderConfig(name="russian", version=VERSION, description="A subset of article-summary in Russian"),
+         datasets.BuilderConfig(name="spanish", version=VERSION, description="A subset of article-summary in Spanish"),
+         datasets.BuilderConfig(name="thai", version=VERSION, description="A subset of article-summary in Thai"),
+         datasets.BuilderConfig(name="turkish", version=VERSION, description="A subset of article-summary in Turkish"),
+         datasets.BuilderConfig(
+             name="vietnamese", version=VERSION, description="A subset of article-summary in Vietnamese"
+         ),
+     ]
+
+     DEFAULT_CONFIG_NAME = "english"
+
+     def _info(self):
+         if self.config.name == "english":
+             features = datasets.Features(
+                 {
+                     "url": datasets.Value("string"),
+                     "article": datasets.Sequence(
+                         {
+                             "section_name": datasets.Value("string"),
+                             "document": datasets.Value("string"),
+                             "summary": datasets.Value("string"),
+                         }
+                     ),
+                 }
+             )
+         else:
+             features = datasets.Features(
+                 {
+                     "url": datasets.Value("string"),
+                     "article": datasets.Sequence(
+                         {
+                             "section_name": datasets.Value("string"),
+                             "document": datasets.Value("string"),
+                             "summary": datasets.Value("string"),
+                             "english_url": datasets.Value("string"),
+                             "english_section_name": datasets.Value("string"),
+                         }
+                     ),
+                 }
+             )
+
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             # This defines the different columns of the dataset and their types
+             features=features,  # Here we define them above because they are different between the two configurations
+             # If there's a common (input, target) tuple from the features,
+             # specify them here. They'll be used if as_supervised=True in
+             # builder.as_dataset.
+             supervised_keys=None,
+             # Homepage of the dataset for documentation
+             homepage=_HOMEPAGE,
+             # License for the dataset if available
+             license=_LICENSE,
+             # Citation for the dataset
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         my_urls = _URLs[self.config.name]
+         # See create_dummy.py to create new dummy data
+         train_fname = dl_manager.download_and_extract(my_urls)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={
+                     "filepath": train_fname,
+                     "split": "train",
+                 },
+             ),
+         ]
+
+     def _process_article(self, article):
+         """Parse the article and convert into list of dict"""
+         processed_article = []
+         for key, value in article.items():
+             row = {"section_name": key, "document": value["document"], "summary": value["summary"]}
+
+             if self.config.name != "english":
+                 row["english_url"] = value["english_url"]
+                 row["english_section_name"] = value["english_section_name"]
+             processed_article.append(row)
+
+         return processed_article
+
+     def _generate_examples(self, filepath, split):
+         """Yields examples."""
+         with open(filepath, "rb") as f:
+             data = pickle.load(f)
+             for id_, row in enumerate(data.items()):
+                 yield id_, {"url": row[0], "article": self._process_article(row[1])}
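Once this script is available to the `datasets` library under the name `wiki_lingua`, the config names above become the second argument to `load_dataset`. A rough usage sketch (assuming `datasets` >= 1.2.0; the field access mirrors the `Sequence`-of-dict features defined in `_info`):

```python
from datasets import load_dataset

# Load one language config; "english" is the DEFAULT_CONFIG_NAME.
ds = load_dataset("wiki_lingua", "english")

example = ds["train"][0]
print(example["url"])

# `article` is a Sequence of dicts, so it is returned as a dict of aligned lists:
article = example["article"]
for name, summary in zip(article["section_name"], article["summary"]):
    print(name, "->", summary[:80])
```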