Commit dadefe4 (0 parents), committed by polinaeterna (HF staff) and parquet-converter

Duplicate from xsum

Co-authored-by: francky <francky@users.noreply.huggingface.co>
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,214 @@
+ ---
+ annotations_creators:
+ - found
+ language_creators:
+ - found
+ language:
+ - en
+ license:
+ - unknown
+ multilinguality:
+ - monolingual
+ pretty_name: Extreme Summarization (XSum)
+ paperswithcode_id: xsum
+ size_categories:
+ - 100K<n<1M
+ source_datasets:
+ - original
+ task_categories:
+ - summarization
+ task_ids:
+ - news-articles-summarization
+ train-eval-index:
+ - config: default
+   task: summarization
+   task_id: summarization
+   splits:
+     train_split: train
+     eval_split: test
+   col_mapping:
+     document: text
+     summary: target
+   metrics:
+   - type: rouge
+     name: Rouge
+ dataset_info:
+   features:
+   - name: document
+     dtype: string
+   - name: summary
+     dtype: string
+   - name: id
+     dtype: string
+   splits:
+   - name: train
+     num_bytes: 479206608
+     num_examples: 204045
+   - name: validation
+     num_bytes: 26292901
+     num_examples: 11332
+   - name: test
+     num_bytes: 26756165
+     num_examples: 11334
+   download_size: 257302866
+   dataset_size: 532255674
+ duplicated_from: xsum
+ ---
+
+ # Dataset Card for "xsum"
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:**
+ - **Repository:** https://github.com/EdinburghNLP/XSum
+ - **Paper:** [Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization](https://arxiv.org/abs/1808.08745)
+ - **Point of Contact:** [Shashi Narayan](mailto:shashi.narayan@ed.ac.uk)
+ - **Size of downloaded dataset files:** 245.38 MB
+ - **Size of the generated dataset:** 507.60 MB
+ - **Total amount of disk used:** 752.98 MB
+
+ ### Dataset Summary
+
+ Extreme Summarization (XSum) Dataset.
+
+ There are three features:
+ - document: Input news article.
+ - summary: One sentence summary of the article.
+ - id: BBC ID of the article.
+
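+ The splits can be loaded directly with the Hugging Face `datasets` library. A minimal sketch (assuming `datasets` is installed; field names and split sizes are as documented in this card):
+
+ ```python
+ from datasets import load_dataset
+
+ # Downloads the source archive on first use and builds the three splits.
+ xsum = load_dataset("xsum")
+
+ example = xsum["train"][0]
+ print(sorted(example.keys()))  # ['document', 'id', 'summary']
+ print(example["summary"])      # the one-sentence reference summary
+ ```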
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Languages
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ #### default
+
+ - **Size of downloaded dataset files:** 245.38 MB
+ - **Size of the generated dataset:** 507.60 MB
+ - **Total amount of disk used:** 752.98 MB
+
+ An example of 'validation' looks as follows.
+ ```
+ {
+     "document": "some-body",
+     "id": "29750031",
+     "summary": "some-sentence"
+ }
+ ```
+
+ ### Data Fields
+
+ The data fields are the same among all splits.
+
+ #### default
+ - `document`: a `string` feature.
+ - `summary`: a `string` feature.
+ - `id`: a `string` feature.
+
+ ### Data Splits
+
+ | name |train |validation|test |
+ |-------|-----:|---------:|----:|
+ |default|204045| 11332|11334|
+
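+ As a quick sanity check, the same counts can be read back from the loaded splits (a sketch, assuming the dataset loads as shown above):
+
+ ```python
+ from datasets import load_dataset
+
+ xsum = load_dataset("xsum")
+ print({split: xsum[split].num_rows for split in xsum})
+ # expected: {'train': 204045, 'validation': 11332, 'test': 11334}
+ ```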
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ #### Who are the source language producers?
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ #### Who are the annotators?
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Discussion of Biases
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Other Known Limitations
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Licensing Information
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Citation Information
+
+ ```
+ @article{Narayan2018DontGM,
+   title={Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization},
+   author={Shashi Narayan and Shay B. Cohen and Mirella Lapata},
+   journal={ArXiv},
+   year={2018},
+   volume={abs/1808.08745}
+ }
+ ```
+
+
+ ### Contributions
+
+ Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@mariamabarham](https://github.com/mariamabarham), [@jbragg](https://github.com/jbragg), [@lhoestq](https://github.com/lhoestq), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
data/XSUM-EMNLP18-Summary-Data-Original.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:10b48aa187fc9c904b30f76ca97e2da0de8d3a1238acc26acadef93e2001af90
+ size 254582292
xsum.py ADDED
@@ -0,0 +1,170 @@
+ # coding=utf-8
+ # Copyright 2020 The TensorFlow Datasets Authors and the HuggingFace Datasets Authors.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # Lint as: python3
+ """XSum dataset."""
+
+
+ import json
+ import os
+
+ import datasets
+
+
+ _CITATION = """
+ @article{Narayan2018DontGM,
+   title={Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization},
+   author={Shashi Narayan and Shay B. Cohen and Mirella Lapata},
+   journal={ArXiv},
+   year={2018},
+   volume={abs/1808.08745}
+ }
+ """
+
+ _DESCRIPTION = """
+ Extreme Summarization (XSum) Dataset.
+
+ There are three features:
+   - document: Input news article.
+   - summary: One sentence summary of the article.
+   - id: BBC ID of the article.
+
+ """
+
+ # From https://github.com/EdinburghNLP/XSum/issues/12
+ _URL_DATA = "data/XSUM-EMNLP18-Summary-Data-Original.tar.gz"
+ _URL_SPLITS = (
+     "https://raw.githubusercontent.com/EdinburghNLP/XSum/master/XSum-Dataset/XSum-TRAINING-DEV-TEST-SPLIT-90-5-5.json"
+ )
+
+ _DOCUMENT = "document"
+ _SUMMARY = "summary"
+ _ID = "id"
+
+ _REMOVE_LINES = set(
+     [
+         "Share this with\n",
+         "Email\n",
+         "Facebook\n",
+         "Messenger\n",
+         "Twitter\n",
+         "Pinterest\n",
+         "WhatsApp\n",
+         "Linkedin\n",
+         "LinkedIn\n",
+         "Copy this link\n",
+         "These are external links and will open in a new window\n",
+     ]
+ )
+
+
+ class Xsum(datasets.GeneratorBasedBuilder):
+     """Extreme Summarization (XSum) Dataset."""
+
+     # Version 1.2.0 expands coverage, includes ids, and removes web contents.
+     VERSION = datasets.Version("1.2.0")
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     _DOCUMENT: datasets.Value("string"),
+                     _SUMMARY: datasets.Value("string"),
+                     _ID: datasets.Value("string"),
+                 }
+             ),
+             supervised_keys=(_DOCUMENT, _SUMMARY),
+             homepage="https://github.com/EdinburghNLP/XSum/tree/master/XSum-Dataset",
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+
+         files_to_download = {"data": _URL_DATA, "splits": _URL_SPLITS}
+         downloaded_files = dl_manager.download(files_to_download)
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "split_path": downloaded_files["splits"],
+                     "split_name": "train",
+                     "data_dir": "bbc-summary-data",
+                     "files": dl_manager.iter_archive(downloaded_files["data"]),
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={
+                     "split_path": downloaded_files["splits"],
+                     "split_name": "validation",
+                     "data_dir": "bbc-summary-data",
+                     "files": dl_manager.iter_archive(downloaded_files["data"]),
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "split_path": downloaded_files["splits"],
+                     "split_name": "test",
+                     "data_dir": "bbc-summary-data",
+                     "files": dl_manager.iter_archive(downloaded_files["data"]),
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, split_path, split_name, data_dir, files):
+         """Yields examples."""
+
+         with open(split_path, "r", encoding="utf-8") as f:
+             split_ids = json.load(f)
+         split_ids = {k: set(v) for k, v in split_ids.items()}
+
+         for path, f in files:
+             if not split_ids[split_name]:
+                 break
+             elif path.startswith(data_dir) and path.endswith(".summary"):
+                 i = os.path.basename(path).split(".")[0]
+                 if i in split_ids[split_name]:
+                     split_ids[split_name].remove(i)
+                     text = "".join(
+                         [
+                             line.decode("utf-8")
+                             for line in f.readlines()
+                             if line.decode("utf-8") not in _REMOVE_LINES and line.strip()
+                         ]
+                     )
+                     # Each file follows the format below:
+                     # [SN]URL[SN]
+                     # http://somelink
+                     #
+                     # [SN]TITLE[SN]
+                     # some intro
+                     #
+                     # [SN]FIRST-SENTENCE[SN]
+                     # some intro
+                     #
+                     # [SN]RESTBODY[SN]
+                     # text line.
+                     # another text line.
+                     # "another text line."
+
+                     # According to the following issue, FIRST-SENTENCE
+                     # is the reference summary and TITLE is unused:
+                     # https://github.com/EdinburghNLP/XSum/issues/22
+                     segs = text.split("[SN]")
+                     yield i, {_DOCUMENT: segs[8].strip(), _SUMMARY: segs[6].strip(), _ID: i}
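
For illustration only (not part of the committed file): a minimal, self-contained sketch of the `[SN]` segment parsing that `_generate_examples` relies on, using a made-up article in the layout described in the comments above. Splitting on `[SN]` leaves the FIRST-SENTENCE body at index 6 and the RESTBODY at index 8.

```python
# Hypothetical sample text in the BBC .summary layout (not real data).
text = (
    "[SN]URL[SN]\n"
    "http://somelink\n"
    "[SN]TITLE[SN]\n"
    "Some title\n"
    "[SN]FIRST-SENTENCE[SN]\n"
    "A one-sentence summary.\n"
    "[SN]RESTBODY[SN]\n"
    "First body line.\n"
    "Second body line.\n"
)

segs = text.split("[SN]")
summary = segs[6].strip()   # "A one-sentence summary."
document = segs[8].strip()  # "First body line.\nSecond body line."
print(summary)
print(document)
```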