system HF staff committed on
Commit aa5356f
1 Parent(s): e2e3760

Update files from the datasets library (from 1.4.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.4.0

Files changed (2)
  1. README.md +143 -63
  2. cnn_dailymail.py +6 -4
README.md CHANGED
@@ -21,115 +21,195 @@ task_ids:
  # Dataset Card for CNN Dailymail Dataset

  ## Table of Contents
- - [Tasks Supported](#tasks-supported)
- - [Purpose](#purpose)
- - [Languages](#languages)
- - [People Involved](#who-iswas-involved-in-the-dataset-use-and-creation)
- - [Data Characteristics](#data-characteristics)
- - [Dataset Structure](#dataset-structure)
- - [Known Limitations](#known-limitations)
- - [Licensing information](#licensing-information)

- ## Tasks supported:
- ### Task categorization / tags

- [Versions 2.0.0 and 3.0.0 of the CNN / DailyMail Dataset](https://www.aclweb.org/anthology/K16-1028.pdf) were developed for abstractive and extractive summarization. [Version 1.0.0](https://papers.nips.cc/paper/5945-teaching-machines-to-read-and-comprehend.pdf) was developed for machine reading and comprehension and abstractive question answering.

- ## Purpose

- Version 1.0.0 aimed to support supervised neural methodologies for machine reading and question answering with a large amount of real natural language training data and released about 313k unique articles and nearly 1M Cloze style questions to go with the articles. Versions 2.0.0 and 3.0.0 changed the structure of the dataset to support summarization rather than question answering. Version 3.0.0 provided a non-anonymized version of the data, whereas both the previous versions were preprocessed to replace named entities with unique identifier labels.

- ## Languages
- ### Per language:

- The BCP-47 code for English as generally spoken in the United States is en-US and the BCP-47 code for English as generally spoken in the United Kingdom is en-GB. It is unknown if other varieties of English are represented in the data.
- ## Who is/was involved in the dataset use and creation?
- ### Who are the dataset curators?

- The data was originally collected by Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom of Google DeepMind. Tomáš Kočiský and Phil Blunsom are also affiliated with the University of Oxford. They released scripts to collect and process the data into the question answering format.

- Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, and Bing Xiang of IBM Watson and Çağlar Gülçehre of Université de Montréal modified Hermann et al.'s collection scripts to restore the data to a summary format. They also produced both anonymized and non-anonymized versions.

- The code for the non-anonymized version is made publicly available by Abigail See of Stanford University, Peter J. Liu of Google Brain and Christopher D. Manning of Stanford University at <https://github.com/abisee/cnn-dailymail>. The work at Stanford University was supported by the DARPA DEFT Program AFRL contract no. FA8750-13-2-0040.

- ### Who are the language producers (who wrote the text / created the base content)?

- The text was written by journalists at CNN and the Daily Mail.

- ### Who are the annotators?

- No annotation was provided with the dataset.

- ## Data characteristics

  The data consists of news articles and highlight sentences. In the question answering setting of the data, the articles are used as the context and entities are hidden one at a time in the highlight sentences, producing Cloze style questions where the goal of the model is to correctly guess which entity in the context has been hidden in the highlight. In the summarization setting, the highlight sentences are concatenated to form a summary of the article. The CNN articles were written between April 2007 and April 2015. The Daily Mail articles were written between June 2010 and April 2015.
- ### How was the data collected?
-
  The code for the original data collection is available at <https://github.com/deepmind/rc-data>. The articles were downloaded using archives of <www.cnn.com> and <www.dailymail.co.uk> on the Wayback Machine. Articles were not included in the Version 1.0.0 collection if they exceeded 2000 tokens. Due to accessibility issues with the Wayback Machine, Kyunghyun Cho has made the datasets available at <https://cs.nyu.edu/~kcho/DMQA/>. An updated version of the code that does not anonymize the data is available at <https://github.com/abisee/cnn-dailymail>.

- ### Normalization information

- Hermann et al. provided their own tokenization script. The script provided by See uses the PTBTokenizer. It also lowercases the text and adds periods to lines missing them.

- ### Annotation process

- No annotation was provided with the dataset.

- ## Dataset Structure
- ### Splits, features, and labels

- The CNN/DailyMail dataset has 3 splits: _train_, _validation_, and _test_. Below are the statistics for Version 3.0.0 of the dataset.

- Dataset Split | Number of Instances in Split
- --------------|--------------------------------------------
- Train | 287,113
- Validation | 13,368
- Test | 11,490

- Each data instance contains the following features: _article_, _highlights_, _id_.

- Feature | Mean Token Count
- --------|-----------------
- Article | 781
- Highlights | 56
- ### Span indices

- No span indices are included in this dataset.

- ### Example ID

- An example ID is '0001d1afc246a7964130f43ae940af6bc6c57f01'. These are hexadecimal formatted SHA-1 hashes of the URLs where the stories were retrieved from.

- ### Free text description for context (e.g. describe difference between title / selftext / body in Reddit data) and example

- For each ID, there is a string for the article, a string for the highlights, and a string for the id. See the [CNN / Daily Mail dataset viewer](https://huggingface.co/datasets/viewer/?dataset=cnn_dailymail&config=3.0.0) to explore more examples.

- ID | Article | Highlights
- ---|---------|------------
- 0054d6d30dbcad772e20b22771153a2a9cbeaf62 | (CNN) -- An American woman died aboard a cruise ship that docked at Rio de Janeiro on Tuesday, the same ship on which 86 passengers previously fell ill, according to the state-run Brazilian news agency, Agencia Brasil. The American tourist died aboard the MS Veendam, owned by cruise operator Holland America. Federal Police told Agencia Brasil that forensic doctors were investigating her death. The ship's doctors told police that the woman was elderly and suffered from diabetes and hypertension, according the agency. The other passengers came down with diarrhea prior to her death during an earlier part of the trip, the ship's doctors said. The Veendam left New York 36 days ago for a South America tour. | The elderly woman suffered from diabetes and hypertension, ship's doctors say .\nPreviously, 86 passengers had fallen ill on the ship, Agencia Brasil says .

- ### Suggested metrics / models:

- [Zhong et al. (2020)](https://www.aclweb.org/anthology/2020.acl-main.552.pdf) report a ROUGE-1 score of 44.41. See the [Papers With Code leaderboard](https://paperswithcode.com/sota/document-summarization-on-cnn-daily-mail) for more models.
- ## Known Limitations
- ### Known social biases

- [Bordia and Bowman (2019)](https://www.aclweb.org/anthology/N19-3002.pdf) explore measuring gender bias and debiasing techniques in the CNN / Dailymail dataset, the Penn Treebank, and WikiText-2. They find the CNN / Dailymail dataset to have a slightly lower gender bias based on their metric compared to the other datasets, but still show evidence of gender bias when looking at words such as 'fragile'.

- ### Other known limitations

- News articles have been shown to conform to writing conventions in which important information is primarily presented in the first third of the article [(Kryściński et al., 2019)](https://www.aclweb.org/anthology/D19-1051.pdf). [Chen et al. (2016)](https://www.aclweb.org/anthology/P16-1223.pdf) conducted a manual study of 100 random instances of the first version of the dataset and found 25% of the samples to be difficult even for humans to answer correctly due to ambiguity and coreference errors.

- ## Licensing information

  The CNN / Daily Mail dataset version 1.0.0 is released under the [Apache-2.0 License](http://www.apache.org/licenses/LICENSE-2.0).

  ### Contributions

- Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@jplu](https://github.com/jplu), [@jbragg](https://github.com/jbragg), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
  # Dataset Card for CNN Dailymail Dataset

  ## Table of Contents
+ - [Dataset Card for CNN Dailymail Dataset](#dataset-card-for-cnn-dailymail-dataset)
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+ - [Dataset Summary](#dataset-summary)
+ - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+ - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+ - [Data Instances](#data-instances)
+ - [Data Fields](#data-fields)
+ - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+ - [Curation Rationale](#curation-rationale)
+ - [Source Data](#source-data)
+ - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
+ - [Who are the source language producers?](#who-are-the-source-language-producers)
+ - [Annotations](#annotations)
+ - [Annotation process](#annotation-process)
+ - [Who are the annotators?](#who-are-the-annotators)
+ - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+ - [Social Impact of Dataset](#social-impact-of-dataset)
+ - [Discussion of Biases](#discussion-of-biases)
+ - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+ - [Dataset Curators](#dataset-curators)
+ - [Licensing Information](#licensing-information)
+ - [Citation Information](#citation-information)
+ - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:**
+ - **Repository:** [CNN / DailyMail Dataset repository](https://github.com/abisee/cnn-dailymail)
+ - **Paper:** [Teaching Machines to Read and Comprehend](https://papers.nips.cc/paper/5945-teaching-machines-to-read-and-comprehend.pdf), [Abstractive Text Summarization Using Sequence-to-Sequence RNNs and Beyond](https://www.aclweb.org/anthology/K16-1028.pdf), [Get To The Point: Summarization with Pointer-Generator Networks](https://www.aclweb.org/anthology/P17-1099)
+ - **Leaderboard:** [Papers with Code leaderboard for CNN / Dailymail Dataset](https://paperswithcode.com/sota/document-summarization-on-cnn-daily-mail)
+ - **Point of Contact:** [Abigail See](mailto:abisee@stanford.edu)
+
+ ### Dataset Summary
+
+ The CNN / DailyMail Dataset is an English-language dataset containing just over 300k unique news articles as written by journalists at CNN and the Daily Mail. The current version supports both extractive and abstractive summarization, though the original version was created for machine reading and comprehension and abstractive question answering.
+
+ ### Supported Tasks and Leaderboards
+
+ - 'summarization': [Versions 2.0.0 and 3.0.0 of the CNN / DailyMail Dataset](https://www.aclweb.org/anthology/K16-1028.pdf) can be used to train a model for abstractive and extractive summarization ([Version 1.0.0](https://papers.nips.cc/paper/5945-teaching-machines-to-read-and-comprehend.pdf) was developed for machine reading and comprehension and abstractive question answering). The model performance is measured by how high the output summary's [ROUGE](https://huggingface.co/metrics/rouge) score for a given article is when compared to the highlight as written by the original article author. [Zhong et al. (2020)](https://www.aclweb.org/anthology/2020.acl-main.552.pdf) report a ROUGE-1 score of 44.41 when testing a model trained for extractive summarization. See the [Papers With Code leaderboard](https://paperswithcode.com/sota/document-summarization-on-cnn-daily-mail) for more models.
+
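As a rough illustration of the evaluation described above (a sketch only: the candidate summary string is invented, and the metric is loaded through the metric API shipped with this version of the `datasets` library, which additionally requires the `rouge_score` package):

```
# Sketch: score a hypothetical generated summary against the reference highlights.
from datasets import load_metric

rouge = load_metric("rouge")

predictions = ["An American tourist died aboard the MS Veendam after it docked in Rio de Janeiro."]
references = ["The elderly woman suffered from diabetes and hypertension, ship's doctors say .\nPreviously, 86 passengers had fallen ill on the ship, Agencia Brasil says ."]

scores = rouge.compute(predictions=predictions, references=references)
print(scores["rouge1"].mid.fmeasure)  # ROUGE-1 F1 of the candidate summary
```
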
+ ### Languages
+
+ The BCP-47 code for English as generally spoken in the United States is en-US and the BCP-47 code for English as generally spoken in the United Kingdom is en-GB. It is unknown if other varieties of English are represented in the data.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ For each instance, there is a string for the article, a string for the highlights, and a string for the id. See the [CNN / Daily Mail dataset viewer](https://huggingface.co/datasets/viewer/?dataset=cnn_dailymail&config=3.0.0) to explore more examples.
+
+ ```
+ {'id': '0054d6d30dbcad772e20b22771153a2a9cbeaf62',
+  'article': '(CNN) -- An American woman died aboard a cruise ship that docked at Rio de Janeiro on Tuesday, the same ship on which 86 passengers previously fell ill, according to the state-run Brazilian news agency, Agencia Brasil. The American tourist died aboard the MS Veendam, owned by cruise operator Holland America. Federal Police told Agencia Brasil that forensic doctors were investigating her death. The ship's doctors told police that the woman was elderly and suffered from diabetes and hypertension, according the agency. The other passengers came down with diarrhea prior to her death during an earlier part of the trip, the ship's doctors said. The Veendam left New York 36 days ago for a South America tour.',
+  'highlights': 'The elderly woman suffered from diabetes and hypertension, ship's doctors say .\nPreviously, 86 passengers had fallen ill on the ship, Agencia Brasil says .'}
+ ```
+
+ The average token counts for the articles and the highlights are provided below:
+
+ | Feature    | Mean Token Count |
+ | ---------- | ---------------- |
+ | Article    | 781              |
+ | Highlights | 56               |
+
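A minimal sketch of loading the `3.0.0` configuration with the `datasets` library and inspecting one instance like the record above:

```
# Sketch: load version 3.0.0 of the dataset and look at a single training example.
from datasets import load_dataset

dataset = load_dataset("cnn_dailymail", "3.0.0")
example = dataset["train"][0]

print(example["id"])                    # SHA-1 hash of the source URL
print(example["highlights"])            # reference summary written by the article author
print(len(example["article"].split()))  # rough whitespace token count of the article
```

Note that the token counts in the table above come from the dataset authors' own tokenization, so a raw whitespace count will only approximate them.
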
+ ### Data Fields
+
+ - `id`: a string containing the hexadecimal formatted SHA-1 hash of the URL where the story was retrieved from
+ - `article`: a string containing the body of the news article
+ - `highlights`: a string containing the highlight of the article as written by the article author
+
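For illustration, the relationship between a story URL and its `id` can be sketched as follows (the URL below is a hypothetical placeholder; the same hashing appears in the `_get_url_hashes` helper of the loading script further down):

```
# Sketch: an instance `id` is the SHA-1 hex digest of the URL the story was retrieved from.
import hashlib

url = "https://www.dailymail.co.uk/news/article-0000000/example-story.html"  # hypothetical URL
story_id = hashlib.sha1(url.encode("utf-8")).hexdigest()
print(story_id)  # 40-character hexadecimal string, like the ids shown above
```
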
+ ### Data Splits
+
+ The CNN/DailyMail dataset has 3 splits: _train_, _validation_, and _test_. Below are the statistics for Version 3.0.0 of the dataset.
+
+ | Dataset Split | Number of Instances in Split |
+ | ------------- | ---------------------------- |
+ | Train         | 287,113                      |
+ | Validation    | 13,368                       |
+ | Test          | 11,490                       |
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ Version 1.0.0 aimed to support supervised neural methodologies for machine reading and question answering with a large amount of real natural language training data and released about 313k unique articles and nearly 1M Cloze style questions to go with the articles. Versions 2.0.0 and 3.0.0 changed the structure of the dataset to support summarization rather than question answering. Version 3.0.0 provided a non-anonymized version of the data, whereas both the previous versions were preprocessed to replace named entities with unique identifier labels.
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
  The data consists of news articles and highlight sentences. In the question answering setting of the data, the articles are used as the context and entities are hidden one at a time in the highlight sentences, producing Cloze style questions where the goal of the model is to correctly guess which entity in the context has been hidden in the highlight. In the summarization setting, the highlight sentences are concatenated to form a summary of the article. The CNN articles were written between April 2007 and April 2015. The Daily Mail articles were written between June 2010 and April 2015.

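As a sketch of the question answering setting just described (the entity markers and strings are invented for illustration; the original anonymized releases used their own entity identifiers):

```
# Sketch: turn a highlight sentence into a Cloze-style question by hiding one entity.
highlight = "@entity1 died aboard the @entity2 , owned by cruise operator @entity3 ."
answer = "@entity3"

question = highlight.replace(answer, "@placeholder", 1)
print(question)  # the model must recover `answer` from the accompanying article
```
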
  The code for the original data collection is available at <https://github.com/deepmind/rc-data>. The articles were downloaded using archives of <www.cnn.com> and <www.dailymail.co.uk> on the Wayback Machine. Articles were not included in the Version 1.0.0 collection if they exceeded 2000 tokens. Due to accessibility issues with the Wayback Machine, Kyunghyun Cho has made the datasets available at <https://cs.nyu.edu/~kcho/DMQA/>. An updated version of the code that does not anonymize the data is available at <https://github.com/abisee/cnn-dailymail>.

+ Hermann et al. provided their own tokenization script. The script provided by See uses the PTBTokenizer. It also lowercases the text and adds periods to lines missing them.
+
+ #### Who are the source language producers?
+
+ The text was written by journalists at CNN and the Daily Mail.
+
+ ### Annotations
+
+ The dataset does not contain any additional annotations.
+
+ #### Annotation process
+
+ [N/A]
+
+ #### Who are the annotators?
+
+ [N/A]
+
+ ### Personal and Sensitive Information
+
+ Version 3.0.0 is not anonymized, so individuals' names can be found in the dataset. Information about the original author is not included in the dataset.
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ The purpose of this dataset is to help develop models that can summarize long paragraphs of text in one or two sentences.
+
+ This task is useful for efficiently presenting information given a large quantity of text. It should be made clear that any summarizations produced by models trained on this dataset are reflective of the language used in the articles, but are in fact automatically generated.
+
+ ### Discussion of Biases
+
+ [Bordia and Bowman (2019)](https://www.aclweb.org/anthology/N19-3002.pdf) explore measuring gender bias and debiasing techniques in the CNN / Dailymail dataset, the Penn Treebank, and WikiText-2. They find the CNN / Dailymail dataset to have a slightly lower gender bias based on their metric compared to the other datasets, but still show evidence of gender bias when looking at words such as 'fragile'.
+
+ Because the articles were written by and for people in the US and the UK, they will likely present specifically US and UK perspectives and feature events that are considered relevant to those populations during the time that the articles were published.
+
+ ### Other Known Limitations
+
+ News articles have been shown to conform to writing conventions in which important information is primarily presented in the first third of the article [(Kryściński et al., 2019)](https://www.aclweb.org/anthology/D19-1051.pdf). [Chen et al. (2016)](https://www.aclweb.org/anthology/P16-1223.pdf) conducted a manual study of 100 random instances of the first version of the dataset and found 25% of the samples to be difficult even for humans to answer correctly due to ambiguity and coreference errors.
+
+ It should also be noted that machine-generated summarizations, even when extractive, may differ in truth values when compared to the original articles.
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ The data was originally collected by Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom of Google DeepMind. Tomáš Kočiský and Phil Blunsom are also affiliated with the University of Oxford. They released scripts to collect and process the data into the question answering format.
+
+ Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, and Bing Xiang of IBM Watson and Çağlar Gülçehre of Université de Montréal modified Hermann et al.'s collection scripts to restore the data to a summary format. They also produced both anonymized and non-anonymized versions.
+
+ The code for the non-anonymized version is made publicly available by Abigail See of Stanford University, Peter J. Liu of Google Brain and Christopher D. Manning of Stanford University at <https://github.com/abisee/cnn-dailymail>. The work at Stanford University was supported by the DARPA DEFT Program AFRL contract no. FA8750-13-2-0040.
+
+ ### Licensing Information
+
  The CNN / Daily Mail dataset version 1.0.0 is released under the [Apache-2.0 License](http://www.apache.org/licenses/LICENSE-2.0).
+
+ ### Citation Information
+
+ ```
+ @inproceedings{see-etal-2017-get,
+ title = "Get To The Point: Summarization with Pointer-Generator Networks",
+ author = "See, Abigail and
+ Liu, Peter J. and
+ Manning, Christopher D.",
+ booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
+ month = jul,
+ year = "2017",
+ address = "Vancouver, Canada",
+ publisher = "Association for Computational Linguistics",
+ url = "https://www.aclweb.org/anthology/P17-1099",
+ doi = "10.18653/v1/P17-1099",
+ pages = "1073--1083",
+ abstract = "Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.",
+ }
+ ```
+
+ ```
+ @inproceedings{DBLP:conf/nips/HermannKGEKSB15,
+ author={Karl Moritz Hermann and Tomás Kociský and Edward Grefenstette and Lasse Espeholt and Will Kay and Mustafa Suleyman and Phil Blunsom},
+ title={Teaching Machines to Read and Comprehend},
+ year={2015},
+ cdate={1420070400000},
+ pages={1693-1701},
+ url={http://papers.nips.cc/paper/5945-teaching-machines-to-read-and-comprehend},
+ booktitle={NIPS},
+ crossref={conf/nips/2015}
+ }
+ ```
+
  ### Contributions

+ Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@jplu](https://github.com/jplu), [@jbragg](https://github.com/jbragg), [@patrickvonplaten](https://github.com/patrickvonplaten) and [@mcmillanmajora](https://github.com/mcmillanmajora) for adding this dataset.
cnn_dailymail.py CHANGED
@@ -18,12 +18,14 @@
  from __future__ import absolute_import, division, print_function

  import hashlib
- import logging
  import os

  import datasets


  _DESCRIPTION = """\
  CNN/DailyMail non-anonymized summarization dataset.
@@ -110,7 +112,7 @@ def _get_url_hashes(path):
  try:
  u = u.encode("utf-8")
  except UnicodeDecodeError:
- logging.error("Cannot hash url: %s", u)
  h.update(u)
  return h.hexdigest()

@@ -130,7 +132,7 @@ def _find_files(dl_paths, publisher, url_dict):
  elif publisher == "dm":
  top_dir = os.path.join(dl_paths["dm_stories"], "dailymail", "stories")
  else:
- logging.fatal("Unsupported publisher: %s", publisher)
  files = sorted(os.listdir(top_dir))

  ret_files = []

@@ -151,7 +153,7 @@ def _subset_filenames(dl_paths, split):
  elif split == datasets.Split.TEST:
  urls = _get_url_hashes(dl_paths["test_urls"])
  else:
- logging.fatal("Unsupported split: %s", split)
  cnn = _find_files(dl_paths, "cnn", urls)
  dm = _find_files(dl_paths, "dm", urls)
  return cnn + dm
  from __future__ import absolute_import, division, print_function

  import hashlib
  import os

  import datasets

+ logger = datasets.logging.get_logger(__name__)
+
+
  _DESCRIPTION = """\
  CNN/DailyMail non-anonymized summarization dataset.

  try:
  u = u.encode("utf-8")
  except UnicodeDecodeError:
+ logger.error("Cannot hash url: %s", u)
  h.update(u)
  return h.hexdigest()

  elif publisher == "dm":
  top_dir = os.path.join(dl_paths["dm_stories"], "dailymail", "stories")
  else:
+ logger.fatal("Unsupported publisher: %s", publisher)
  files = sorted(os.listdir(top_dir))

  ret_files = []

  elif split == datasets.Split.TEST:
  urls = _get_url_hashes(dl_paths["test_urls"])
  else:
+ logger.fatal("Unsupported split: %s", split)
  cnn = _find_files(dl_paths, "cnn", urls)
  dm = _find_files(dl_paths, "dm", urls)
  return cnn + dm
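
The script changes above replace the standard-library `logging` calls with a module-level logger obtained from `datasets.logging`, so that the script's messages are controlled by the library's own logging utilities. A minimal sketch of the pattern (the URL string is a made-up example):

```
# Sketch of the logging pattern adopted in this commit.
import datasets

logger = datasets.logging.get_logger(__name__)
logger.error("Cannot hash url: %s", "https://example.com/story")  # hypothetical URL
```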