parquet-converter committed
Commit 6d85f73
1 Parent(s): eb23888

Update parquet files

README.md DELETED
@@ -1,252 +0,0 @@
- ---
- annotations_creators:
- - found
- language:
- - en
- language_creators:
- - found
- license:
- - unknown
- multilinguality:
- - monolingual
- pretty_name: ScientificPapers
- size_categories:
- - 100K<n<1M
- source_datasets:
- - original
- task_categories:
- - summarization
- task_ids: []
- paperswithcode_id: null
- tags:
- - abstractive-summarization
- dataset_info:
- - config_name: arxiv
-   features:
-   - name: article
-     dtype: string
-   - name: abstract
-     dtype: string
-   - name: section_names
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 7148341992
-     num_examples: 203037
-   - name: validation
-     num_bytes: 217125524
-     num_examples: 6436
-   - name: test
-     num_bytes: 217514961
-     num_examples: 6440
-   download_size: 4504646347
-   dataset_size: 7582982477
- - config_name: pubmed
-   features:
-   - name: article
-     dtype: string
-   - name: abstract
-     dtype: string
-   - name: section_names
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 2252027383
-     num_examples: 119924
-   - name: validation
-     num_bytes: 127403398
-     num_examples: 6633
-   - name: test
-     num_bytes: 127184448
-     num_examples: 6658
-   download_size: 4504646347
-   dataset_size: 2506615229
- ---
-
- # Dataset Card for "scientific_papers"
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:**
- - **Repository:** https://github.com/armancohan/long-summarization
- - **Paper:** [A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents](https://arxiv.org/abs/1804.05685)
- - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Size of downloaded dataset files:** 8591.93 MB
- - **Size of the generated dataset:** 9622.19 MB
- - **Total amount of disk used:** 18214.12 MB
-
- ### Dataset Summary
-
- Scientific papers datasets contains two sets of long and structured documents.
- The datasets are obtained from ArXiv and PubMed OpenAccess repositories.
-
- Both "arxiv" and "pubmed" have two features:
- - article: the body of the document, paragraphs separated by "/n".
- - abstract: the abstract of the document, paragraphs separated by "/n".
- - section_names: titles of sections, separated by "/n".
-
- ### Supported Tasks and Leaderboards
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Languages
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Dataset Structure
-
- ### Data Instances
-
- #### arxiv
-
- - **Size of downloaded dataset files:** 4295.97 MB
- - **Size of the generated dataset:** 7231.70 MB
- - **Total amount of disk used:** 11527.66 MB
-
- An example of 'train' looks as follows.
- ```
- This example was too long and was cropped:
-
- {
-     "abstract": "\" we have studied the leptonic decay @xmath0 , via the decay channel @xmath1 , using a sample of tagged @xmath2 decays collected...",
-     "article": "\"the leptonic decays of a charged pseudoscalar meson @xmath7 are processes of the type @xmath8 , where @xmath9 , @xmath10 , or @...",
-     "section_names": "[sec:introduction]introduction\n[sec:detector]data and the cleo- detector\n[sec:analysys]analysis method\n[sec:conclusion]summary"
- }
- ```
-
- #### pubmed
-
- - **Size of downloaded dataset files:** 4295.97 MB
- - **Size of the generated dataset:** 2390.49 MB
- - **Total amount of disk used:** 6686.46 MB
-
- An example of 'validation' looks as follows.
- ```
- This example was too long and was cropped:
-
- {
-     "abstract": "\" background and aim : there is lack of substantial indian data on venous thromboembolism ( vte ) . \\n the aim of this study was...",
-     "article": "\"approximately , one - third of patients with symptomatic vte manifests pe , whereas two - thirds manifest dvt alone .\\nboth dvt...",
-     "section_names": "\"Introduction\\nSubjects and Methods\\nResults\\nDemographics and characteristics of venous thromboembolism patients\\nRisk factors ..."
- }
- ```
-
- ### Data Fields
-
- The data fields are the same among all splits.
-
- #### arxiv
- - `article`: a `string` feature.
- - `abstract`: a `string` feature.
- - `section_names`: a `string` feature.
-
- #### pubmed
- - `article`: a `string` feature.
- - `abstract`: a `string` feature.
- - `section_names`: a `string` feature.
-
- ### Data Splits
-
- | name |train |validation|test|
- |------|-----:|---------:|---:|
- |arxiv |203037|      6436|6440|
- |pubmed|119924|      6633|6658|
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the source language producers?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the annotators?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Personal and Sensitive Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Discussion of Biases
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Other Known Limitations
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Licensing Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Citation Information
-
- ```
- @article{Cohan_2018,
-    title={A Discourse-Aware Attention Model for Abstractive Summarization of
-          Long Documents},
-    url={http://dx.doi.org/10.18653/v1/n18-2097},
-    DOI={10.18653/v1/n18-2097},
-    journal={Proceedings of the 2018 Conference of the North American Chapter of
-           the Association for Computational Linguistics: Human Language
-           Technologies, Volume 2 (Short Papers)},
-    publisher={Association for Computational Linguistics},
-    author={Cohan, Arman and Dernoncourt, Franck and Kim, Doo Soon and Bui, Trung and Kim, Seokhwan and Chang, Walter and Goharian, Nazli},
-    year={2018}
- }
- ```
-
- ### Contributions
-
- Thanks to [@thomwolf](https://github.com/thomwolf), [@jplu](https://github.com/jplu), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
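Note: with the card removed here, the features and splits it documented (`article`, `abstract`, `section_names`; `train`/`validation`/`test` for both `arxiv` and `pubmed`) remain the schema of the converted data. A minimal sketch of peeking at them via the `datasets` library, assuming the dataset stays loadable under the `scientific_papers` name:

```python
# Minimal sketch: load one config described in the deleted card and inspect
# its three string features. Assumes the "scientific_papers" dataset name
# still resolves and that network access is available.
from datasets import load_dataset

ds = load_dataset("scientific_papers", "pubmed", split="validation")
print(ds.column_names)         # ['article', 'abstract', 'section_names']
print(ds[0]["section_names"])  # newline-separated section titles
```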
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"arxiv": {"description": "\nScientific papers datasets contains two sets of long and structured documents.\nThe datasets are obtained from ArXiv and PubMed OpenAccess repositories.\n\nBoth \"arxiv\" and \"pubmed\" have two features:\n - article: the body of the document, pagragraphs seperated by \"/n\".\n - abstract: the abstract of the document, pagragraphs seperated by \"/n\".\n - section_names: titles of sections, seperated by \"/n\".\n\n", "citation": "\n@article{Cohan_2018,\n title={A Discourse-Aware Attention Model for Abstractive Summarization of\n Long Documents},\n url={http://dx.doi.org/10.18653/v1/n18-2097},\n DOI={10.18653/v1/n18-2097},\n journal={Proceedings of the 2018 Conference of the North American Chapter of\n the Association for Computational Linguistics: Human Language\n Technologies, Volume 2 (Short Papers)},\n publisher={Association for Computational Linguistics},\n author={Cohan, Arman and Dernoncourt, Franck and Kim, Doo Soon and Bui, Trung and Kim, Seokhwan and Chang, Walter and Goharian, Nazli},\n year={2018}\n}\n", "homepage": "https://github.com/armancohan/long-summarization", "license": "", "features": {"article": {"dtype": "string", "id": null, "_type": "Value"}, "abstract": {"dtype": "string", "id": null, "_type": "Value"}, "section_names": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "scientific_papers", "config_name": "arxiv", "version": {"version_str": "1.1.1", "description": null, "major": 1, "minor": 1, "patch": 1}, "splits": {"train": {"name": "train", "num_bytes": 7148341992, "num_examples": 203037, "dataset_name": "scientific_papers"}, "validation": {"name": "validation", "num_bytes": 217125524, "num_examples": 6436, "dataset_name": "scientific_papers"}, "test": {"name": "test", "num_bytes": 217514961, "num_examples": 6440, "dataset_name": "scientific_papers"}}, "download_checksums": {"https://s3.amazonaws.com/datasets.huggingface.co/scientific_papers/1.1.1/arxiv-dataset.zip": {"num_bytes": 3624420843, "checksum": "82ed30dd7c66a6497eeb3d7c3090c274e9e32c012438f8e0bb3cce3e6c1fcada"}, "https://s3.amazonaws.com/datasets.huggingface.co/scientific_papers/1.1.1/pubmed-dataset.zip": {"num_bytes": 880225504, "checksum": "d424074726a5e29e20bf834055fe7efe90f8a37bce0a2b512e4ab7e487013c04"}}, "download_size": 4504646347, "post_processing_size": null, "dataset_size": 7582982477, "size_in_bytes": 12087628824}, "pubmed": {"description": "\nScientific papers datasets contains two sets of long and structured documents.\nThe datasets are obtained from ArXiv and PubMed OpenAccess repositories.\n\nBoth \"arxiv\" and \"pubmed\" have two features:\n - article: the body of the document, pagragraphs seperated by \"/n\".\n - abstract: the abstract of the document, pagragraphs seperated by \"/n\".\n - section_names: titles of sections, seperated by \"/n\".\n\n", "citation": "\n@article{Cohan_2018,\n title={A Discourse-Aware Attention Model for Abstractive Summarization of\n Long Documents},\n url={http://dx.doi.org/10.18653/v1/n18-2097},\n DOI={10.18653/v1/n18-2097},\n journal={Proceedings of the 2018 Conference of the North American Chapter of\n the Association for Computational Linguistics: Human Language\n Technologies, Volume 2 (Short Papers)},\n publisher={Association for Computational Linguistics},\n author={Cohan, Arman and Dernoncourt, Franck and Kim, Doo Soon and Bui, Trung and Kim, Seokhwan and Chang, Walter and Goharian, Nazli},\n year={2018}\n}\n", "homepage": 
"https://github.com/armancohan/long-summarization", "license": "", "features": {"article": {"dtype": "string", "id": null, "_type": "Value"}, "abstract": {"dtype": "string", "id": null, "_type": "Value"}, "section_names": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "scientific_papers", "config_name": "pubmed", "version": {"version_str": "1.1.1", "description": null, "major": 1, "minor": 1, "patch": 1}, "splits": {"train": {"name": "train", "num_bytes": 2252027383, "num_examples": 119924, "dataset_name": "scientific_papers"}, "validation": {"name": "validation", "num_bytes": 127403398, "num_examples": 6633, "dataset_name": "scientific_papers"}, "test": {"name": "test", "num_bytes": 127184448, "num_examples": 6658, "dataset_name": "scientific_papers"}}, "download_checksums": {"https://s3.amazonaws.com/datasets.huggingface.co/scientific_papers/1.1.1/arxiv-dataset.zip": {"num_bytes": 3624420843, "checksum": "82ed30dd7c66a6497eeb3d7c3090c274e9e32c012438f8e0bb3cce3e6c1fcada"}, "https://s3.amazonaws.com/datasets.huggingface.co/scientific_papers/1.1.1/pubmed-dataset.zip": {"num_bytes": 880225504, "checksum": "d424074726a5e29e20bf834055fe7efe90f8a37bce0a2b512e4ab7e487013c04"}}, "download_size": 4504646347, "post_processing_size": null, "dataset_size": 2506615229, "size_in_bytes": 7011261576}}
 
 
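Note: the deleted `dataset_infos.json` recorded sha256 checksums and byte sizes for the two source archives. A short sketch of verifying a locally downloaded copy against the recorded digest (the local filename is an assumption):

```python
# Sketch: recompute a sha256 digest and compare it with the checksum that
# dataset_infos.json recorded for pubmed-dataset.zip. The local path below
# is illustrative.
import hashlib

EXPECTED = "d424074726a5e29e20bf834055fe7efe90f8a37bce0a2b512e4ab7e487013c04"

def sha256_of(path, chunk_size=1 << 20):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

assert sha256_of("pubmed-dataset.zip") == EXPECTED, "checksum mismatch"
```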
pubmed/scientific_papers-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1b4a1ec245d45a518ad518db525272705c7160f5a1804bef46f5a52774a3d402
+ size 59127441
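Note: the added `.parquet` entries are Git LFS pointers, so the three lines above describe the stored object (spec version, sha256 oid, byte size) rather than containing the data itself. A sketch of reading such a pointer from a checkout where the LFS object has not been fetched (the path is illustrative):

```python
# Sketch: parse a Git LFS pointer file into its key/value fields
# ("version", "oid", "size"). Only meaningful when the file on disk is the
# pointer itself, i.e. the LFS object has not been smudged/downloaded.
def parse_lfs_pointer(path):
    fields = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            key, _, value = line.strip().partition(" ")
            if key:
                fields[key] = value
    return fields

pointer = parse_lfs_pointer("pubmed/scientific_papers-test.parquet")
print(pointer["oid"], pointer["size"])
```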
pubmed/scientific_papers-train-00000-of-00005.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e4e65d88462aac3179361340381f643cac7902b4ae6ec0bcea142824ae37f3db
+ size 236959981
pubmed/scientific_papers-train-00001-of-00005.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ce6b7f0b4f1f66b4843d338185f44e76fffef80fbff64e465a603e6c9875b7ef
+ size 235898578
pubmed/scientific_papers-train-00002-of-00005.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9a45b7b6ab54e28617918c6750a9af769427dc1296ee4b4ff037c9ca8efa55fb
+ size 235130891
pubmed/scientific_papers-train-00003-of-00005.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2f08e5743f4f56c4a2bf7a8c92b40be9446e25cd770cfd6455490f8be13af35c
+ size 236131025
pubmed/scientific_papers-train-00004-of-00005.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6527c67b0e80fc0c71e596399048d02a5aadcc6fd84d3ff91b4c81f26bff6dfd
+ size 105944033
pubmed/scientific_papers-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a186870f3269f07f48ab3a1cb2dea24797646442b5326882922bde64072f9a46
+ size 59264542
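Note: once the LFS objects are fetched, each shard is an ordinary parquet file and can be read directly. A sketch using pandas, assuming the shard has been downloaded to a matching local path:

```python
# Sketch: read the pubmed validation shard with pandas (pyarrow backend) and
# check that it matches the splits documented above: 6633 validation rows
# with three string columns.
import pandas as pd

df = pd.read_parquet("pubmed/scientific_papers-validation.parquet")
print(df.columns.tolist())  # ['article', 'abstract', 'section_names']
print(len(df))              # 6633
```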
scientific_papers.py DELETED
@@ -1,138 +0,0 @@
- # coding=utf-8
- # Copyright 2020 The TensorFlow Datasets Authors and the HuggingFace Datasets Authors.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
-
- # Lint as: python3
- """Scientific Papers Dataset."""
-
-
- import json
- import os
-
- import datasets
-
-
- _CITATION = """
- @article{Cohan_2018,
-    title={A Discourse-Aware Attention Model for Abstractive Summarization of
-          Long Documents},
-    url={http://dx.doi.org/10.18653/v1/n18-2097},
-    DOI={10.18653/v1/n18-2097},
-    journal={Proceedings of the 2018 Conference of the North American Chapter of
-           the Association for Computational Linguistics: Human Language
-           Technologies, Volume 2 (Short Papers)},
-    publisher={Association for Computational Linguistics},
-    author={Cohan, Arman and Dernoncourt, Franck and Kim, Doo Soon and Bui, Trung and Kim, Seokhwan and Chang, Walter and Goharian, Nazli},
-    year={2018}
- }
- """
-
- _DESCRIPTION = """
- Scientific papers datasets contains two sets of long and structured documents.
- The datasets are obtained from ArXiv and PubMed OpenAccess repositories.
-
- Both "arxiv" and "pubmed" have two features:
-   - article: the body of the document, pagragraphs seperated by "/n".
-   - abstract: the abstract of the document, pagragraphs seperated by "/n".
-   - section_names: titles of sections, seperated by "/n".
-
- """
-
- _DOCUMENT = "article"
- _SUMMARY = "abstract"
-
- _URLS = {
-     "arxiv": "https://s3.amazonaws.com/datasets.huggingface.co/scientific_papers/1.1.1/arxiv-dataset.zip",
-     "pubmed": "https://s3.amazonaws.com/datasets.huggingface.co/scientific_papers/1.1.1/pubmed-dataset.zip",
- }
-
-
- class ScientificPapersConfig(datasets.BuilderConfig):
-     """BuilderConfig for Scientific Papers."""
-
-     def __init__(self, filename=None, **kwargs):
-         """BuilderConfig for ScientificPapers
-
-         Args:
-           filename: filename of different configs for the dataset.
-           **kwargs: keyword arguments forwarded to super.
-         """
-         # 1.1.0 remove sentence breaker <S> and </S> in summary.
-         super(ScientificPapersConfig, self).__init__(version=datasets.Version("1.1.1"), **kwargs)
-         self.filename = filename
-
-
- class ScientificPapers(datasets.GeneratorBasedBuilder):
-     """Scientific Papers."""
-
-     BUILDER_CONFIGS = [
-         ScientificPapersConfig(name="pubmed", description="Documents from PubMed repository."),
-         ScientificPapersConfig(name="arxiv", description="Documents from ArXiv repository."),
-     ]
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {
-                     _DOCUMENT: datasets.Value("string"),
-                     _SUMMARY: datasets.Value("string"),
-                     "section_names": datasets.Value("string"),
-                 }
-             ),
-             supervised_keys=None,
-             homepage="https://github.com/armancohan/long-summarization",
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         dl_paths = dl_manager.download_and_extract(_URLS)
-         path = os.path.join(dl_paths[self.config.name], self.config.name + "-dataset")
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 gen_kwargs={"path": os.path.join(path, "train.txt")},
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION,
-                 gen_kwargs={"path": os.path.join(path, "val.txt")},
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST,
-                 gen_kwargs={"path": os.path.join(path, "test.txt")},
-             ),
-         ]
-
-     def _generate_examples(self, path=None):
-         """Yields examples."""
-         with open(path, encoding="utf-8") as f:
-             for line in f:
-                 # Possible keys are:
-                 #   "article_id": str
-                 #   "article_text": list[str] article (list of paragraphs).
-                 #   "abstract_text": list[str], abstract (list of paragraphs).
-                 #   "section_names": list[str], list of section names.
-                 #   "sections": list[list[str]], list of sections (list of paragraphs)
-                 d = json.loads(line)
-                 summary = "\n".join(d["abstract_text"])
-                 # In original paper, <S> and </S> are not used in vocab during training
-                 # or during decoding.
-                 # https://github.com/armancohan/long-summarization/blob/master/data.py#L27
-                 summary = summary.replace("<S>", "").replace("</S>", "")
-                 yield d["article_id"], {
-                     _DOCUMENT: "\n".join(d["article_text"]),
-                     _SUMMARY: summary,
-                     "section_names": "\n".join(d["section_names"]),
-                 }
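Note: the deleted loading script parsed the original JSON-lines files (`train.txt`, `val.txt`, `test.txt`) and stripped the `<S>`/`</S>` sentence markers from abstracts. For anyone still working from those archives, the same per-line logic can be reproduced standalone; a sketch, with the input path as an assumption:

```python
# Sketch: standalone version of the deleted _generate_examples loop for the
# original JSON-lines files. The path below is illustrative.
import json

def iter_examples(path):
    with open(path, encoding="utf-8") as f:
        for line in f:
            d = json.loads(line)
            # Drop the <S>/</S> sentence markers, as the deleted script did.
            summary = "\n".join(d["abstract_text"]).replace("<S>", "").replace("</S>", "")
            yield d["article_id"], {
                "article": "\n".join(d["article_text"]),
                "abstract": summary,
                "section_names": "\n".join(d["section_names"]),
            }

for article_id, example in iter_examples("pubmed-dataset/val.txt"):
    print(article_id, example["section_names"].splitlines()[:3])
    break
```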