Dataset: xsum
Tasks: Summarization
Modalities: Text
Sub-tasks: news-articles-summarization
Languages: English
Size: 100K - 1M
ArXiv: arxiv.org/abs/1808.08745
License: unknown
Commit 2147021 by parquet-converter
Parent(s): 0f3ea2f

Update parquet files
Changed files:
- .gitattributes +0 -27
- README.md +0 -213
- data/XSUM-EMNLP18-Summary-Data-Original.tar.gz → default/xsum-test.parquet +2 -2
- default/xsum-train.parquet +3 -0
- default/xsum-validation.parquet +3 -0
- xsum.py +0 -170
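Since this commit replaces the loader script with pre-converted Parquet files, each split can now be read directly from its Parquet file. A minimal sketch with pandas (run from a checkout of this repo; column names and the row count are taken from the dataset card below):

```python
import pandas as pd

# Read the converted validation split straight from its Parquet file.
val = pd.read_parquet("default/xsum-validation.parquet")

print(val.columns.tolist())  # ['document', 'summary', 'id'] per the card's features
print(len(val))              # 11332 examples per the card's split sizes
```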
.gitattributes
DELETED
@@ -1,27 +0,0 @@
-*.7z filter=lfs diff=lfs merge=lfs -text
-*.arrow filter=lfs diff=lfs merge=lfs -text
-*.bin filter=lfs diff=lfs merge=lfs -text
-*.bin.* filter=lfs diff=lfs merge=lfs -text
-*.bz2 filter=lfs diff=lfs merge=lfs -text
-*.ftz filter=lfs diff=lfs merge=lfs -text
-*.gz filter=lfs diff=lfs merge=lfs -text
-*.h5 filter=lfs diff=lfs merge=lfs -text
-*.joblib filter=lfs diff=lfs merge=lfs -text
-*.lfs.* filter=lfs diff=lfs merge=lfs -text
-*.model filter=lfs diff=lfs merge=lfs -text
-*.msgpack filter=lfs diff=lfs merge=lfs -text
-*.onnx filter=lfs diff=lfs merge=lfs -text
-*.ot filter=lfs diff=lfs merge=lfs -text
-*.parquet filter=lfs diff=lfs merge=lfs -text
-*.pb filter=lfs diff=lfs merge=lfs -text
-*.pt filter=lfs diff=lfs merge=lfs -text
-*.pth filter=lfs diff=lfs merge=lfs -text
-*.rar filter=lfs diff=lfs merge=lfs -text
-saved_model/**/* filter=lfs diff=lfs merge=lfs -text
-*.tar.* filter=lfs diff=lfs merge=lfs -text
-*.tflite filter=lfs diff=lfs merge=lfs -text
-*.tgz filter=lfs diff=lfs merge=lfs -text
-*.xz filter=lfs diff=lfs merge=lfs -text
-*.zip filter=lfs diff=lfs merge=lfs -text
-*.zstandard filter=lfs diff=lfs merge=lfs -text
-*tfevents* filter=lfs diff=lfs merge=lfs -text
README.md
DELETED
@@ -1,213 +0,0 @@
----
-annotations_creators:
-- found
-language_creators:
-- found
-language:
-- en
-license:
-- unknown
-multilinguality:
-- monolingual
-pretty_name: Extreme Summarization (XSum)
-paperswithcode_id: xsum
-size_categories:
-- 100K<n<1M
-source_datasets:
-- original
-task_categories:
-- summarization
-task_ids:
-- news-articles-summarization
-train-eval-index:
-- config: default
-  task: summarization
-  task_id: summarization
-  splits:
-    train_split: train
-    eval_split: test
-  col_mapping:
-    document: text
-    summary: target
-  metrics:
-  - type: rouge
-    name: Rouge
-dataset_info:
-  features:
-  - name: document
-    dtype: string
-  - name: summary
-    dtype: string
-  - name: id
-    dtype: string
-  splits:
-  - name: train
-    num_bytes: 479206608
-    num_examples: 204045
-  - name: validation
-    num_bytes: 26292901
-    num_examples: 11332
-  - name: test
-    num_bytes: 26756165
-    num_examples: 11334
-  download_size: 257302866
-  dataset_size: 532255674
----
-
-# Dataset Card for "xsum"
-
-## Table of Contents
-- [Dataset Description](#dataset-description)
-  - [Dataset Summary](#dataset-summary)
-  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-  - [Languages](#languages)
-- [Dataset Structure](#dataset-structure)
-  - [Data Instances](#data-instances)
-  - [Data Fields](#data-fields)
-  - [Data Splits](#data-splits)
-- [Dataset Creation](#dataset-creation)
-  - [Curation Rationale](#curation-rationale)
-  - [Source Data](#source-data)
-  - [Annotations](#annotations)
-  - [Personal and Sensitive Information](#personal-and-sensitive-information)
-- [Considerations for Using the Data](#considerations-for-using-the-data)
-  - [Social Impact of Dataset](#social-impact-of-dataset)
-  - [Discussion of Biases](#discussion-of-biases)
-  - [Other Known Limitations](#other-known-limitations)
-- [Additional Information](#additional-information)
-  - [Dataset Curators](#dataset-curators)
-  - [Licensing Information](#licensing-information)
-  - [Citation Information](#citation-information)
-  - [Contributions](#contributions)
-
-## Dataset Description
-
-- **Homepage:**
-- **Repository:** https://github.com/EdinburghNLP/XSum
-- **Paper:** [Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization](https://arxiv.org/abs/1808.08745)
-- **Point of Contact:** [Shashi Narayan](mailto:shashi.narayan@ed.ac.uk)
-- **Size of downloaded dataset files:** 245.38 MB
-- **Size of the generated dataset:** 507.60 MB
-- **Total amount of disk used:** 752.98 MB
-
-### Dataset Summary
-
-Extreme Summarization (XSum) Dataset.
-
-There are three features:
-- document: Input news article.
-- summary: One sentence summary of the article.
-- id: BBC ID of the article.
-
-### Supported Tasks and Leaderboards
-
-[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
-### Languages
-
-[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
-## Dataset Structure
-
-### Data Instances
-
-#### default
-
-- **Size of downloaded dataset files:** 245.38 MB
-- **Size of the generated dataset:** 507.60 MB
-- **Total amount of disk used:** 752.98 MB
-
-An example of 'validation' looks as follows.
-```
-{
-    "document": "some-body",
-    "id": "29750031",
-    "summary": "some-sentence"
-}
-```
-
-### Data Fields
-
-The data fields are the same among all splits.
-
-#### default
-- `document`: a `string` feature.
-- `summary`: a `string` feature.
-- `id`: a `string` feature.
-
-### Data Splits
-
-| name  |train |validation|test |
-|-------|-----:|---------:|----:|
-|default|204045|     11332|11334|
-
-## Dataset Creation
-
-### Curation Rationale
-
-[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
-### Source Data
-
-#### Initial Data Collection and Normalization
-
-[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
-#### Who are the source language producers?
-
-[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
-### Annotations
-
-#### Annotation process
-
-[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
-#### Who are the annotators?
-
-[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
-### Personal and Sensitive Information
-
-[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
-## Considerations for Using the Data
-
-### Social Impact of Dataset
-
-[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
-### Discussion of Biases
-
-[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
-### Other Known Limitations
-
-[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
-## Additional Information
-
-### Dataset Curators
-
-[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
-### Licensing Information
-
-[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
-### Citation Information
-
-```
-@article{Narayan2018DontGM,
-  title={Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization},
-  author={Shashi Narayan and Shay B. Cohen and Mirella Lapata},
-  journal={ArXiv},
-  year={2018},
-  volume={abs/1808.08745}
-}
-```
-
-
-### Contributions
-
-Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@mariamabarham](https://github.com/mariamabarham), [@jbragg](https://github.com/jbragg), [@lhoestq](https://github.com/lhoestq), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
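The deleted card's train-eval-index declares the column mapping document → text and summary → target for evaluation harnesses. A minimal sketch of applying that mapping with 🤗 Datasets (assuming the dataset remains loadable under the hub id "xsum"):

```python
from datasets import load_dataset

# Assumes the hub id "xsum"; after this commit the repo is backed by Parquet files.
ds = load_dataset("xsum")

# Apply the card's col_mapping: document -> text, summary -> target.
ds = ds.rename_columns({"document": "text", "summary": "target"})

print(ds)  # train/validation/test splits with columns text, target, id
```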
data/XSUM-EMNLP18-Summary-Data-Original.tar.gz → default/xsum-test.parquet
RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:60b69b52ffb59ef07dd7125b52a489fab8ba164042fe23e8f79cb21b7b3f5ecc
+size 16996188
default/xsum-train.parquet
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1342ba349a33c983ed4029d9ebb34b23b330848a30f7fee2bc513ea3c5505e09
+size 304352685
default/xsum-validation.parquet
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3cd4812a898075f931fb0f2c69d0c34968d1a074229a0786e67bcabf4068f9fc
+size 16700165
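The three-line files added above are Git LFS pointers: the Parquet blobs themselves live in LFS storage, and each pointer records the blob's sha256 oid and byte size. A minimal sketch for checking a downloaded blob against its pointer (local file path assumed):

```python
import hashlib

def matches_lfs_pointer(path: str, oid: str, size: int) -> bool:
    """Compare a file's sha256 digest and byte count to a Git LFS pointer."""
    digest = hashlib.sha256()
    total = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
            total += len(chunk)
    return digest.hexdigest() == oid and total == size

# Values copied from the xsum-validation.parquet pointer above.
print(matches_lfs_pointer(
    "default/xsum-validation.parquet",
    "3cd4812a898075f931fb0f2c69d0c34968d1a074229a0786e67bcabf4068f9fc",
    16700165,
))
```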
xsum.py
DELETED
@@ -1,170 +0,0 @@
-# coding=utf-8
-# Copyright 2020 The TensorFlow Datasets Authors and the HuggingFace Datasets Authors.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# Lint as: python3
-"""XSum dataset."""
-
-
-import json
-import os
-
-import datasets
-
-
-_CITATION = """
-@article{Narayan2018DontGM,
-  title={Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization},
-  author={Shashi Narayan and Shay B. Cohen and Mirella Lapata},
-  journal={ArXiv},
-  year={2018},
-  volume={abs/1808.08745}
-}
-"""
-
-_DESCRIPTION = """
-Extreme Summarization (XSum) Dataset.
-
-There are three features:
-  - document: Input news article.
-  - summary: One sentence summary of the article.
-  - id: BBC ID of the article.
-
-"""
-
-# From https://github.com/EdinburghNLP/XSum/issues/12
-_URL_DATA = "data/XSUM-EMNLP18-Summary-Data-Original.tar.gz"
-_URL_SPLITS = (
-    "https://raw.githubusercontent.com/EdinburghNLP/XSum/master/XSum-Dataset/XSum-TRAINING-DEV-TEST-SPLIT-90-5-5.json"
-)
-
-_DOCUMENT = "document"
-_SUMMARY = "summary"
-_ID = "id"
-
-_REMOVE_LINES = set(
-    [
-        "Share this with\n",
-        "Email\n",
-        "Facebook\n",
-        "Messenger\n",
-        "Twitter\n",
-        "Pinterest\n",
-        "WhatsApp\n",
-        "Linkedin\n",
-        "LinkedIn\n",
-        "Copy this link\n",
-        "These are external links and will open in a new window\n",
-    ]
-)
-
-
-class Xsum(datasets.GeneratorBasedBuilder):
-    """Extreme Summarization (XSum) Dataset."""
-
-    # Version 1.2.0 expands coverage, includes ids, and removes web contents.
-    VERSION = datasets.Version("1.2.0")
-
-    def _info(self):
-        return datasets.DatasetInfo(
-            description=_DESCRIPTION,
-            features=datasets.Features(
-                {
-                    _DOCUMENT: datasets.Value("string"),
-                    _SUMMARY: datasets.Value("string"),
-                    _ID: datasets.Value("string"),
-                }
-            ),
-            supervised_keys=(_DOCUMENT, _SUMMARY),
-            homepage="https://github.com/EdinburghNLP/XSum/tree/master/XSum-Dataset",
-            citation=_CITATION,
-        )
-
-    def _split_generators(self, dl_manager):
-        """Returns SplitGenerators."""
-
-        files_to_download = {"data": _URL_DATA, "splits": _URL_SPLITS}
-        downloaded_files = dl_manager.download(files_to_download)
-
-        return [
-            datasets.SplitGenerator(
-                name=datasets.Split.TRAIN,
-                gen_kwargs={
-                    "split_path": downloaded_files["splits"],
-                    "split_name": "train",
-                    "data_dir": "bbc-summary-data",
-                    "files": dl_manager.iter_archive(downloaded_files["data"]),
-                },
-            ),
-            datasets.SplitGenerator(
-                name=datasets.Split.VALIDATION,
-                gen_kwargs={
-                    "split_path": downloaded_files["splits"],
-                    "split_name": "validation",
-                    "data_dir": "bbc-summary-data",
-                    "files": dl_manager.iter_archive(downloaded_files["data"]),
-                },
-            ),
-            datasets.SplitGenerator(
-                name=datasets.Split.TEST,
-                gen_kwargs={
-                    "split_path": downloaded_files["splits"],
-                    "split_name": "test",
-                    "data_dir": "bbc-summary-data",
-                    "files": dl_manager.iter_archive(downloaded_files["data"]),
-                },
-            ),
-        ]
-
-    def _generate_examples(self, split_path, split_name, data_dir, files):
-        """Yields examples."""
-
-        with open(split_path, "r", encoding="utf-8") as f:
-            split_ids = json.load(f)
-        split_ids = {k: set(v) for k, v in split_ids.items()}
-
-        for path, f in files:
-            if not split_ids[split_name]:
-                break
-            elif path.startswith(data_dir) and path.endswith(".summary"):
-                i = os.path.basename(path).split(".")[0]
-                if i in split_ids[split_name]:
-                    split_ids[split_name].remove(i)
-                    text = "".join(
-                        [
-                            line.decode("utf-8")
-                            for line in f.readlines()
-                            if line.decode("utf-8") not in _REMOVE_LINES and line.strip()
-                        ]
-                    )
-                    # Each file follows below format:
-                    # [SN]URL[SN]
-                    # http://somelink
-                    #
-                    # [SN]TITLE[SN]
-                    # some intro
-                    #
-                    # [SN]FIRST-SENTENCE[SN]
-                    # some intro
-                    #
-                    # [SN]RESTBODY[SN]
-                    # text line.
-                    # another text line.
-                    # "another text line."
-
-                    # According to the following issue, FIRST-SENTENCE
-                    # is the reference summary and TITLE is unused:
-                    # https://github.com/EdinburghNLP/XSum/issues/22
-                    segs = text.split("[SN]")
-                    yield i, {_DOCUMENT: segs[8].strip(), _SUMMARY: segs[6].strip(), _ID: i}
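For reference, the deleted loader's key trick is positional: after dropping boilerplate lines, splitting a .summary file on the [SN] sentinel leaves the FIRST-SENTENCE text (the reference summary) at index 6 and the RESTBODY text (the document) at index 8. A standalone sketch on a toy payload matching the format in the loader's comments:

```python
# Toy .summary content in the format described by the loader's comments.
text = (
    "[SN]URL[SN]\nhttp://somelink\n"
    "[SN]TITLE[SN]\nsome intro\n"
    "[SN]FIRST-SENTENCE[SN]\nsome intro\n"
    "[SN]RESTBODY[SN]\ntext line.\nanother text line.\n"
)

# split("[SN]") alternates header names and payloads:
# ['', 'URL', <url>, 'TITLE', <title>, 'FIRST-SENTENCE', <summary>, 'RESTBODY', <body>]
segs = text.split("[SN]")
summary = segs[6].strip()   # 'some intro'
document = segs[8].strip()  # 'text line.\nanother text line.'
```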