system (HF staff) committed
Commit bc5e00f
1 Parent(s): 34ca7b8

Update files from the datasets library (from 1.9.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.9.0

README.md CHANGED
@@ -1,11 +1,38 @@
1
  ---
2
  paperswithcode_id: c4
3
  ---
4
 
5
  # Dataset Card for C4
6
 
7
  ## Table of Contents
8
- - [Dataset Card for [Dataset Name]](#dataset-card-for-dataset-name)
 
9
  - [Table of Contents](#table-of-contents)
10
  - [Dataset Description](#dataset-description)
11
  - [Dataset Summary](#dataset-summary)
@@ -36,38 +63,62 @@ paperswithcode_id: c4
36
 
37
  ## Dataset Description
38
 
39
- - **Homepage:**
40
- - **Repository:**
41
- - **Paper:**
42
- - **Leaderboard:**
43
- - **Point of Contact:**
44
 
45
  ### Dataset Summary
46
 
47
- A colossal, cleaned version of Common Crawl's web crawl corpus. Based on Common Crawl dataset: "https://commoncrawl.org"
48
- Due to the overhead of cleaning the dataset, it is recommend you prepare it with a distributed service like Cloud Dataflow. More info at https://www.tensorflow.org/datasets/beam_datasets.
49
 
50
  ### Supported Tasks and Leaderboards
51
 
52
- [More Information Needed]
53
 
54
  ### Languages
55
 
56
- [More Information Needed]
57
 
58
  ## Dataset Structure
59
 
60
  ### Data Instances
61
 
62
- [More Information Needed]
63
 
64
  ### Data Fields
65
 
66
- [More Information Needed]
67
 
68
  ### Data Splits
69
 
70
- [More Information Needed]
71
 
72
  ## Dataset Creation
73
 
@@ -79,7 +130,9 @@ Due to the overhead of cleaning the dataset, it is recommend you prepare it with
79
 
80
  #### Initial Data Collection and Normalization
81
 
82
- [More Information Needed]
83
 
84
  #### Who are the source language producers?
85
 
@@ -121,7 +174,7 @@ Due to the overhead of cleaning the dataset, it is recommend you prepare it with
121
 
122
  ### Licensing Information
123
 
124
- [More Information Needed]
125
 
126
  ### Citation Information
127
 
@@ -138,6 +191,4 @@ Due to the overhead of cleaning the dataset, it is recommend you prepare it with
138
 
139
  ### Contributions
140
 
141
- Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
142
-
143
- Thanks to @thomwolf, @Narsil, @patrickvonplaten, @lhoestq, @lewtun for adding this dataset.
1
  ---
2
+ pretty_name: C4
3
+ annotations_creators:
4
+ - no-annotation
5
+ language_creators:
6
+ - found
7
+ languages:
8
+ - en
9
+ licenses:
10
+ - odc-by-1-0
11
+ multilinguality:
12
+ - multilingual
13
+ size_categories:
14
+ en:
15
+ - 100M<n<1B
16
+ en-noblocklist:
17
+ - 100M<n<1B
18
+ en-noclean:
19
+ - 1B<n<10B
20
+ realnewslike:
21
+ - 10M<n<100M
22
+ source_datasets:
23
+ - original
24
+ task_categories:
25
+ - sequence-modeling
26
+ task_ids:
27
+ - language-modeling
28
  paperswithcode_id: c4
29
  ---
30
 
31
  # Dataset Card for C4
32
 
33
  ## Table of Contents
34
+
35
+ - [Dataset Card for C4](#dataset-card-for-c4)
36
  - [Table of Contents](#table-of-contents)
37
  - [Dataset Description](#dataset-description)
38
  - [Dataset Summary](#dataset-summary)
63
 
64
  ## Dataset Description
65
 
66
+ - **Homepage:** https://huggingface.co/datasets/allenai/c4
67
+ - **Paper:** https://arxiv.org/abs/1910.10683
68
 
69
  ### Dataset Summary
70
 
71
+ A colossal, cleaned version of Common Crawl's web crawl corpus. Based on Common Crawl dataset: "https://commoncrawl.org".
72
+
73
+ This is the version prepared by AllenAI, hosted at this address: https://huggingface.co/datasets/allenai/c4
74
+
75
+ It comes in four variants:
76
+
77
+ - `en`: 305GB in JSON format
78
+ - `en.noblocklist`: 380GB in JSON format
79
+ - `en.noclean`: 2.3TB in JSON format
80
+ - `realnewslike`: 15GB in JSON format
81
+
82
+ The `en.noblocklist` variant is exactly the same as the `en` variant, except we turned off the so-called "badwords filter", which removes all documents that contain words from the lists at https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words.
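As a minimal sketch (assuming the `datasets` library, version 1.9.0 or later for streaming support), a variant is selected by passing its name as the configuration of this loading script:

```python
# Sketch: pick a C4 variant by config name; streaming avoids downloading
# the full variant up front (the plain "en" variant alone is ~305GB of JSON).
from datasets import load_dataset

realnews = load_dataset("c4", "realnewslike", split="train", streaming=True)

for example in realnews:
    print(example["url"])
    break
```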
83
 
84
  ### Supported Tasks and Leaderboards
85
 
86
+ C4 is mainly intended to pretrain language models and word representations.
87
 
88
  ### Languages
89
 
90
+ The dataset is in English.
91
 
92
  ## Dataset Structure
93
 
94
  ### Data Instances
95
 
96
+ An example from the `en` config is:
97
+
98
+ ```
99
+ {
100
+ 'url': 'https://klyq.com/beginners-bbq-class-taking-place-in-missoula/',
101
+ 'text': 'Beginners BBQ Class Taking Place in Missoula!\nDo you want to get better at making delicious BBQ? You will have the opportunity, put this on your calendar now. Thursday, September 22nd join World Class BBQ Champion, Tony Balay from Lonestar Smoke Rangers. He will be teaching a beginner level class for everyone who wants to get better with their culinary skills.\nHe will teach you everything you need to know to compete in a KCBS BBQ competition, including techniques, recipes, timelines, meat selection and trimming, plus smoker and fire information.\nThe cost to be in the class is $35 per person, and for spectators it is free. Included in the cost will be either a t-shirt or apron and you will be tasting samples of each meat that is prepared.',
102
+ 'timestamp': '2019-04-25T12:57:54Z'
103
+ }
104
+ ```
105
 
106
  ### Data Fields
107
 
108
+ The data has the following fields (see the short access sketch after this list):
109
+
110
+ - `url`: url of the source as a string
111
+ - `text`: text content as a string
112
+ - `timestamp`: timestamp as a string
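A short, illustrative access sketch (not part of the original card); each example behaves like a plain dict with three string values:

```python
# Sketch: inspect the fields of a single example from the validation split.
from datasets import load_dataset

ds = load_dataset("c4", "en", split="validation", streaming=True)
example = next(iter(ds))

assert set(example) == {"url", "text", "timestamp"}
print(example["timestamp"])   # e.g. '2019-04-25T12:57:54Z'
print(example["text"][:200])  # first 200 characters of the document
```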
113
 
114
  ### Data Splits
115
 
116
+ | name | train |validation|
117
+ |----------------|--------:|---------:|
118
+ | en |364868892| 364608|
119
+ | en.noblocklist |393391519| 393226|
120
+ | en.noclean | ?| ?|
121
+ | realnewslike | 13799838| 13863|
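The validation splits are small enough to materialize directly; a hedged sketch of checking one of the row counts above (the exact figure depends on the dataset revision pinned by the script):

```python
# Sketch: download only the realnewslike validation split and count its rows.
from datasets import load_dataset

val = load_dataset("c4", "realnewslike", split="validation")
print(len(val))  # expected to be in the region of 13,863 examples
```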
122
 
123
  ## Dataset Creation
124
 
130
 
131
  #### Initial Data Collection and Normalization
132
 
133
+ The C4 dataset is a collection of about 750GB of English-language text sourced from the public Common Crawl web scrape. It uses heuristics to extract only natural language (as opposed to boilerplate and other gibberish) and applies extensive deduplication. The code used to build this dataset is available in [c4.py](https://github.com/tensorflow/datasets/blob/5952d3d60d60e1727786fa7a9a23d24bb463d4d6/tensorflow_datasets/text/c4.py) in TensorFlow Datasets.
134
+
135
+ The dataset was explicitly designed to be English only: any page that was not given a probability of at least 99% of being English by [langdetect](https://github.com/Mimino666/langdetect) was discarded.
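As a rough sketch of the language filter described above (mirroring the `is_language` helper from the original pipeline; the function name here is illustrative):

```python
# Sketch of the langdetect-based English filter: keep a page only if the top
# prediction is English with probability >= 0.99.
import langdetect

# Seed the detector so predictions are deterministic, as the pipeline does.
langdetect.DetectorFactory.seed = 0


def is_english(text: str, min_probability: float = 0.99) -> bool:
    try:
        predictions = langdetect.detect_langs(text)
    except langdetect.lang_detect_exception.LangDetectException:
        return False
    if not predictions:
        return False
    best = predictions[0]
    return best.lang == "en" and best.prob >= min_probability
```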
136
 
137
  #### Who are the source language producers?
138
 
174
 
175
  ### Licensing Information
176
 
177
+ AllenAI is releasing this dataset under the terms of ODC-BY. By using it, you are also bound by the Common Crawl terms of use with respect to the content contained in the dataset.
178
 
179
  ### Citation Information
180
 
191
 
192
  ### Contributions
193
 
194
+ Thanks to [@dirkgr](https://github.com/dirkgr) and [@lhoestq](https://github.com/lhoestq) for adding this dataset.
 
 
c4.py CHANGED
@@ -1,41 +1,11 @@
1
- # coding=utf-8
2
- # Copyright 2020 The TensorFlow Datasets Authors and the HuggingFace Datasets Authors.
3
- #
4
- # Licensed under the Apache License, Version 2.0 (the "License");
5
- # you may not use this file except in compliance with the License.
6
- # You may obtain a copy of the License at
7
- #
8
- # http://www.apache.org/licenses/LICENSE-2.0
9
- #
10
- # Unless required by applicable law or agreed to in writing, software
11
- # distributed under the License is distributed on an "AS IS" BASIS,
12
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- # See the License for the specific language governing permissions and
14
- # limitations under the License.
15
-
16
- # Lint as: python3
17
  """C4 dataset based on Common Crawl."""
18
 
19
 
 
20
  import json
21
- import os
22
 
23
  import datasets
24
 
25
- from .c4_utils import (
26
- dedupe_urls,
27
- filter_by_webtextlike,
28
- get_clean_page_fn,
29
- get_counter_inc_fn,
30
- get_hashed_url_filter_fn,
31
- is_language,
32
- is_realnews_domain,
33
- is_valid_length,
34
- normalize_url,
35
- remove_duplicate_text,
36
- split_wet_file,
37
- )
38
-
39
 
40
  logger = datasets.logging.get_logger(__name__)
41
 
@@ -43,12 +13,11 @@ logger = datasets.logging.get_logger(__name__)
43
  _DESCRIPTION = """\
44
  A colossal, cleaned version of Common Crawl's web crawl corpus.
45
 
46
- Based on Common Crawl dataset: "https://commoncrawl.org"
47
 
48
- Due to the overhead of cleaning the dataset, it is recommend you prepare it with
49
- a distributed service like Cloud Dataflow. More info at
50
- https://www.tensorflow.org/datasets/beam_datasets.
51
  """
 
52
  _CITATION = """
53
  @article{2019t5,
54
  author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
@@ -59,275 +28,66 @@ _CITATION = """
59
  eprint = {1910.10683},
60
  }
61
  """
62
- _VERSION = datasets.Version("2.3.0", "Deduplicate lines within a page.")
63
-
64
- _DOWNLOAD_HOST = "https://commoncrawl.s3.amazonaws.com"
65
- _WET_PATH_URL = "https://commoncrawl.s3.amazonaws.com/crawl-data/CC-MAIN-{cc_version}/wet.paths.gz"
66
- _REALNEWS_DOMAINS_URL = "https://raw.githubusercontent.com/rowanz/grover/38f7184bd87237ae2d3bc330b99f1e2e246f6d51/realnews/domain_to_allowed_subdomains.json"
67
- _BADWORDS_URL = "https://raw.githubusercontent.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words/25e679f03d96baa721cde20db9944649e8d0a844/{lang}"
68
- _CHECKSUMS_URL = "https://storage.googleapis.com/tfds-data/manual_checksums/c4.txt"
69
- _OPENWEBTEXT_URLS_ZIP = "OpenWebText.zip"
70
- _OPENWEBTEXT_URLS_URL = "https://mega.nz/#F!EZZD0YwJ!9_PlEQzdMVLaNdKv_ICNVQ"
71
- _OPENWEBTEXT_URLS_FILE_PATTERN = "OpenWebText/Version 1/URLs/*.txt"
72
-
73
- _DEFAULT_CC_VERSIONS = ("2019-18",) # April 2019
74
- _DEFAULT_WEBTEXTLIKE_CC_VERSIONS = ( # August 2018 - July 2019
75
- "2018-34",
76
- "2018-39",
77
- "2018-43",
78
- "2018-47",
79
- "2018-51",
80
- "2019-04",
81
- "2019-09",
82
- "2019-13",
83
- "2019-18",
84
- "2019-22",
85
- "2019-26",
86
- "2019-30",
87
- )
88
 
 
89
 
90
- class C4Config(datasets.BuilderConfig):
91
- """BuilderConfig for C4 dataset."""
92
 
93
- def __init__(self, language, cc_versions=None, clean=True, realnewslike=False, webtextlike=False, **kwargs):
94
- """BuilderConfig for C4.
95
-
96
- Args:
97
- language: string, the language code, or "all" to disable language
98
- filtering.
99
- cc_versions: tuple(string), a collection of versions of Common Crawl to
100
- use as the raw source text. Set to None to use defaults.
101
- clean: bool, whether to clean the dataset for badwords, duplications, etc.
102
- realnewslike: bool, whether to limit to news domains as compiled by
103
- RealNews.
104
- webtextlike: bool, whether to limit to WebText-like URLs.
105
- **kwargs: keyword arguments forwarded to super.
106
- """
107
- name_parts = [language]
108
- if cc_versions:
109
- name_parts.append("_".join(cc_versions))
110
- if not clean:
111
- name_parts.append("noclean")
112
- if realnewslike:
113
- name_parts.append("realnewslike")
114
- if webtextlike:
115
- name_parts.append("webtextlike")
116
- name = ".".join(name_parts)
117
- super(C4Config, self).__init__(name=name, version=_VERSION, **kwargs)
118
- self.lang = language
119
- self.cc_versions = cc_versions or (_DEFAULT_WEBTEXTLIKE_CC_VERSIONS if webtextlike else _DEFAULT_CC_VERSIONS)
120
- self.clean = clean
121
- self.realnewslike = realnewslike
122
- self.webtextlike = webtextlike
123
 
 
124
 
125
- class C4(datasets.BeamBasedBuilder):
126
- """C4 dataset based on Common Crawl."""
127
 
128
- BUILDER_CONFIGS = [
129
- C4Config(language="en", description="English C4 dataset."),
130
- C4Config(
131
- language="en",
132
- clean=False,
133
- description="Disables all cleaning (deduplication, removal based on bad words, " "etc.)",
134
- ),
135
- C4Config(
136
- language="en",
137
- realnewslike=True,
138
- description="Filters from the default config to only include content from the "
139
- "domains used in the 'RealNews' dataset (Zellers et al., 2019).",
140
- ),
141
- C4Config(
142
- language="en",
143
- webtextlike=True,
144
- description="Filters from the default config to only include content from the "
145
- "URLs in OpenWebText (https://github.com/jcpeterson/openwebtext).",
146
- ),
147
- ]
148
 
149
- @property
150
- def manual_download_instructions(self):
151
- return """\
152
- For the WebText-like config, you must manually download 'OpenWebText.zip'
153
- (from https://mega.nz/#F!EZZD0YwJ!9_PlEQzdMVLaNdKv_ICNVQ) and the Common Crawl
154
- WET files from August 2018 to July 2019
155
- (https://commoncrawl.org/the-data/get-started/) and place them in the
156
- `data_dir`.
157
- """
158
 
159
  def _info(self):
160
- features = {
161
- "text": datasets.Value("string"),
162
- "url": datasets.Value("string"),
163
- "content-type": datasets.Value("string"),
164
- "content-length": datasets.Value("string"),
165
- "timestamp": datasets.Value("string"),
166
- }
167
  return datasets.DatasetInfo(
168
  description=_DESCRIPTION,
169
- features=datasets.Features(features),
170
  citation=_CITATION,
171
- homepage="https://github.com/google-research/text-to-text-transfer-transformer#datasets",
172
  )
173
 
174
- def _split_generators(self, dl_manager, pipeline):
175
- import apache_beam as beam
176
-
177
- # We will automatically down the default CC version(s), but others need to
178
- # be manually downloaded.
179
- cc_versions = set(self.config.cc_versions)
180
- auto_cc_versions = cc_versions & set(_DEFAULT_CC_VERSIONS)
181
- manual_cc_versions = cc_versions - set(_DEFAULT_CC_VERSIONS)
182
-
183
- files_to_download = {}
184
- files_to_download["wet_path_urls"] = [
185
- _WET_PATH_URL.format(cc_version=cc_version) for cc_version in auto_cc_versions
186
- ]
187
- if self.config.clean:
188
- files_to_download["badwords"] = _BADWORDS_URL.format(lang=self.config.lang)
189
- if self.config.realnewslike:
190
- files_to_download["realnews_domains"] = _REALNEWS_DOMAINS_URL
191
- file_paths = dl_manager.download_and_extract(files_to_download)
192
-
193
- if self.config.webtextlike:
194
- owt_path = os.path.join(dl_manager.manual_dir, _OPENWEBTEXT_URLS_ZIP)
195
- if not os.path.exists(owt_path):
196
- raise FileNotFoundError(
197
- "{} does not exist. Make sure you insert a manual dir via `datasets.load_dataset('c4', data_dir=...)` that includes a file name {}. Manual download instructions: {})".format(
198
- owt_path, _OPENWEBTEXT_URLS_ZIP, self.manual_download_instructions
199
- )
200
- )
201
- file_paths["openwebtext_urls_zip"] = dl_manager.extract(owt_path)
202
-
203
- wet_urls = []
204
- for wet_path_url in file_paths["wet_path_urls"]:
205
- with open(wet_path_url, "r", encoding="utf-8") as f:
206
- wet_urls.extend(["%s/%s" % (_DOWNLOAD_HOST, line.strip()) for line in f])
207
- file_paths["wet_urls"] = wet_urls
208
- file_paths["wet_files"] = []
209
-
210
- for cc_version in manual_cc_versions:
211
- cc_dir = os.path.join(dl_manager.manual_dir, cc_version)
212
- wet_files = beam.io.filesystems.FileSystems.match(os.path.join(cc_dir, "*.warc.wet.gz"))
213
- if not os.path.exists(cc_dir):
214
- raise FileNotFoundError(
215
- "{} does not exist. Make sure you insert a manual dir via `datasets.load_dataset('c4', data_dir=...)` that includes the files {}. Manual download instructions: {})".format(
216
- cc_dir, "*.warc.wet.gz", self.manual_download_instructions
217
- )
218
- )
219
- logger.info("Adding %d WET files for manually downloaded version %s.", len(wet_files), cc_version)
220
- file_paths["wet_files"].extend(wet_files)
221
-
222
- page_content_pcollection = self._get_page_content(pipeline, file_paths, dl_manager)
223
  return [
 
224
  datasets.SplitGenerator(
225
- name=datasets.Split.TRAIN,
226
- gen_kwargs=dict(
227
- split="train",
228
- page_content=page_content_pcollection,
229
- hashed_url_predicate=lambda x: x % 1000 != 0, # 99.9%
230
- ),
231
- ),
232
- datasets.SplitGenerator(
233
- name=datasets.Split.VALIDATION,
234
- gen_kwargs=dict(
235
- split="validation",
236
- page_content=page_content_pcollection,
237
- hashed_url_predicate=lambda x: x % 1000 == 0, # 0.01%
238
- ),
239
  ),
240
  ]
241
 
242
- def _get_page_content(self, pipeline, file_paths, dl_manager):
243
- """Build PCollection of un-split page content."""
244
- import apache_beam as beam
245
-
246
- wet_file_paths = pipeline | "create_wet_files" >> beam.Create(file_paths["wet_files"])
247
- if "wet_urls" in file_paths:
248
-
249
- def download_url(url, downloader, pipeline):
250
- path = downloader.download(url)
251
- if not pipeline.is_local():
252
- path = downloader.ship_files_with_pipeline(path, pipeline)
253
- return path
254
-
255
- dl_wet_file_paths = (
256
- pipeline
257
- | "create_wet_urls" >> beam.Create(file_paths["wet_urls"])
258
- | beam.Map(download_url, downloader=dl_manager, pipeline=pipeline)
259
- )
260
- wet_file_paths = (wet_file_paths, dl_wet_file_paths) | beam.Flatten()
261
-
262
- # Parse WET files and filter by length.
263
- # Output: url, text
264
- page_content = wet_file_paths | beam.FlatMap(split_wet_file) | beam.Filter(is_valid_length)
265
-
266
- # Optionally filter for RealNews domains.
267
- # Output: url, text
268
- if self.config.realnewslike:
269
- with open(file_paths["realnews_domains"], "r", encoding="utf-8") as f:
270
- realnews_domains = json.load(f)
271
- page_content = page_content | beam.Filter(is_realnews_domain, realnews_domains)
272
-
273
- # Normalize and deduplicate by URL.
274
- # Output: url, text
275
- page_content = (
276
- page_content
277
- | "normalize_url" >> beam.Map(normalize_url)
278
- | "group_url" >> beam.GroupByKey()
279
- | beam.Map(dedupe_urls)
280
- )
281
-
282
- # Optionally filter for WebText-like URLs.
283
- # Output: url, text
284
- if self.config.webtextlike:
285
- webtextlike_urls = (
286
- pipeline
287
- | "read_webtextlike_urls"
288
- >> beam.io.ReadFromText(
289
- os.path.join(file_paths["openwebtext_urls_zip"], _OPENWEBTEXT_URLS_FILE_PATTERN)
290
- )
291
- | "add_dummy_page" >> beam.Map(lambda x: (x, ""))
292
- | "normal_webtext_url" >> beam.Map(normalize_url)
293
- )
294
- page_content = (
295
- {"text": page_content, "webtextlike_urls": webtextlike_urls}
296
- | "group_webtextlike_urls" >> beam.CoGroupByKey()
297
- | beam.FlatMap(filter_by_webtextlike)
298
- )
299
-
300
- # Optionally clean pages of badwords, boilerpolate text, and duplicate
301
- # spans of sentences.
302
- # Output: url, text
303
- if self.config.clean:
304
- with open(file_paths["badwords"], "r", encoding="utf-8") as f:
305
- badwords = [line.strip() for line in f]
306
- page_content = page_content | "clean_pages" >> beam.FlatMap(get_clean_page_fn(badwords))
307
- page_content = remove_duplicate_text(page_content)
308
-
309
- # Optionally filter out non-`language` pages. We do this after cleaning
310
- # since it may change the predominate language.
311
- if self.config.lang != "all":
312
- page_content |= beam.Filter(is_language, language=self.config.lang)
313
-
314
- return page_content
315
-
316
- def _build_pcollection(self, unused_pipeline, split, page_content, hashed_url_predicate):
317
- import apache_beam as beam
318
-
319
- def _emit_examples(el):
320
- get_counter_inc_fn(split)("examples")
321
- _, features = el
322
- return (
323
- features["url"],
324
- {
325
- "url": features["url"],
326
- "text": features["text"],
327
- "content-type": features["content-type"],
328
- "content-length": features["content-length"],
329
- "timestamp": features["timestamp"],
330
- },
331
- )
332
-
333
- return page_content | beam.Filter(get_hashed_url_filter_fn(hashed_url_predicate)) | beam.Map(_emit_examples)
1
  """C4 dataset based on Common Crawl."""
2
 
3
 
4
+ import gzip
5
  import json
 
6
 
7
  import datasets
8
 
9
 
10
  logger = datasets.logging.get_logger(__name__)
11
 
13
  _DESCRIPTION = """\
14
  A colossal, cleaned version of Common Crawl's web crawl corpus.
15
 
16
+ Based on Common Crawl dataset: "https://commoncrawl.org".
17
 
18
+ This is the processed version of Google's C4 dataset by AllenAI.
 
 
19
  """
20
+
21
  _CITATION = """
22
  @article{2019t5,
23
  author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
28
  eprint = {1910.10683},
29
  }
30
  """
31
 
32
+ _URL = "https://github.com/allenai/allennlp/discussions/5056"
33
 
34
+ _VARIANTS = ["en", "realnewslike", "en.noblocklist", "en.noclean"]
 
35
 
36
+ _N_SHARDS_PER_SPLIT = {
37
+ "en": {"train": 1024, "validation": 8},
38
+ "realnewslike": {"train": 512, "validation": 1},
39
+ "en.noblocklist": {"train": 1024, "validation": 8},
40
+ "en.noclean": {"train": 7168, "validation": 64},
41
+ }
42
 
43
+ _DATA_URL = "https://huggingface.co/datasets/allenai/c4/resolve/1ddc917116b730e1859edef32896ec5c16be51d0/{name}/c4-{split}.{index:05d}-of-{n_shards:05d}.json.gz"
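For reference, a small sketch (not part of the script) of how `_DATA_URL` and `_N_SHARDS_PER_SPLIT`, defined just above, expand into the per-shard file URLs that `_split_generators` downloads:

```python
# Sketch: build the shard URLs for one config/split, as _split_generators does.
name, split = "realnewslike", "validation"
n_shards = _N_SHARDS_PER_SPLIT[name][split]  # 1 shard for this split
urls = [
    _DATA_URL.format(name=name, split=split, index=index, n_shards=n_shards)
    for index in range(n_shards)
]
# -> [".../realnewslike/c4-validation.00000-of-00001.json.gz"]
```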
44
 
 
 
45
 
46
+ class C4(datasets.GeneratorBasedBuilder):
47
+ """C4, a colossal, cleaned version of Common Crawl's web crawl corpus."""
48
 
49
+ BUILDER_CONFIGS = [datasets.BuilderConfig(name) for name in _VARIANTS]
50
 
51
  def _info(self):
52
  return datasets.DatasetInfo(
53
  description=_DESCRIPTION,
54
+ features=datasets.Features(
55
+ {
56
+ "text": datasets.Value("string"),
57
+ "timestamp": datasets.Value("string"),
58
+ "url": datasets.Value("string"),
59
+ }
60
+ ),
61
+ supervised_keys=None,
62
+ homepage=_URL,
63
  citation=_CITATION,
 
64
  )
65
 
66
+ def _split_generators(self, dl_manager):
67
+ data_urls = {}
68
+ for split in ["train", "validation"]:
69
+ n_shards = _N_SHARDS_PER_SPLIT[self.config.name][split]
70
+ data_urls[split] = [
71
+ _DATA_URL.format(name=self.config.name, split=split, index=index, n_shards=n_shards)
72
+ for index in range(n_shards)
73
+ ]
74
+ train_downloaded_files = dl_manager.download(data_urls["train"])
75
+ validation_downloaded_files = dl_manager.download(data_urls["validation"])
76
  return [
77
+ datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepaths": train_downloaded_files}),
78
  datasets.SplitGenerator(
79
+ name=datasets.Split.VALIDATION, gen_kwargs={"filepaths": validation_downloaded_files}
80
  ),
81
  ]
82
 
83
+ def _generate_examples(self, filepaths):
84
+ """This function returns the examples in the raw (text) form by iterating on all the files."""
85
+ id_ = 0
86
+ for filepath in filepaths:
87
+ logger.info("generating examples from = %s", filepath)
88
+ with gzip.open(open(filepath, "rb"), "rt", encoding="utf-8") as f:
89
+ for line in f:
90
+ if line:
91
+ example = json.loads(line)
92
+ yield id_, example
93
+ id_ += 1
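Outside the builder, a downloaded shard can be read the same way `_generate_examples` does; a minimal sketch (the local file name is hypothetical):

```python
# Sketch: iterate one local C4 shard as newline-delimited JSON inside gzip.
import gzip
import json

shard_path = "c4-validation.00000-of-00001.json.gz"  # hypothetical local path

with gzip.open(open(shard_path, "rb"), "rt", encoding="utf-8") as f:
    for line in f:
        if line:
            example = json.loads(line)
            print(example["url"], example["timestamp"])
            break
```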
 
 
c4_utils.py DELETED
@@ -1,488 +0,0 @@
1
- # coding=utf-8
2
- # Copyright 2020 The TensorFlow Datasets Authors and the HuggingFace Datasets Authors.
3
- #
4
- # Licensed under the Apache License, Version 2.0 (the "License");
5
- # you may not use this file except in compliance with the License.
6
- # You may obtain a copy of the License at
7
- #
8
- # http://www.apache.org/licenses/LICENSE-2.0
9
- #
10
- # Unless required by applicable law or agreed to in writing, software
11
- # distributed under the License is distributed on an "AS IS" BASIS,
12
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- # See the License for the specific language governing permissions and
14
- # limitations under the License.
15
-
16
- # Lint as: python3
17
- """Utilities for generating the C4 dataset."""
18
-
19
-
20
- import functools
21
- import gzip
22
- import hashlib
23
- import io
24
- import re
25
- import threading
26
-
27
-
28
- # WET file constants
29
- _PAGE_DELIMITER = "WARC/1.0"
30
- _URL_KEY = "WARC-Target-URI:"
31
- _URL_DATE = "WARC-Date:"
32
- _CONTENT_TYPE = "Content-Type:"
33
- _CONTENT_LEN = "Content-Length:"
34
- _METADATA_PREFIXES = ("WARC", "CONTENT-", "Content-")
35
-
36
- # Filters
37
- _MIN_WORDS_PER_LINE = 5
38
- _MIN_NUM_SENTENCES = 3
39
- _MAX_WORD_LENGTH = 1000
40
- _END_MARKS = (".", "?", "!", '"')
41
- _ELLIPSIS = "..."
42
- _POLICY_SUBSTRINGS = [
43
- "terms of use",
44
- "privacy policy",
45
- "cookie policy",
46
- "uses cookies",
47
- "use of cookies",
48
- "use cookies",
49
- ]
50
-
51
- # Memoized sentence tokenizer.
52
- _SENTENCE_TOKENIZER = None
53
-
54
-
55
- def get_counter_inc_fn(namespace):
56
- import apache_beam as beam
57
-
58
- def counter_inc_fn(counter, amt=1):
59
- beam.metrics.Metrics.counter(namespace, counter).inc(amt)
60
-
61
- return counter_inc_fn
62
-
63
-
64
- def get_hashed_url_filter_fn(predicate_fn):
65
- import tensorflow.compat.v2 as tf
66
-
67
- def filter_fn(el):
68
- url, _ = el
69
- val = int(hashlib.md5(tf.compat.as_text(url).encode("utf-8")).hexdigest(), 16)
70
- return predicate_fn(val)
71
-
72
- return filter_fn
73
-
74
-
75
- def _load_sentence_tokenizer():
76
- """Returns a sentence tokenization function."""
77
- # Lock to avoid a race-condition in the creation of the download directory.
78
- with threading.Lock():
79
- import nltk
80
-
81
- nltk.download("punkt")
82
- return nltk.data.load("nltk:tokenizers/punkt/english.pickle")
83
-
84
-
85
- def _get_sentences(text):
86
- import tensorflow.compat.v2 as tf
87
-
88
- global _SENTENCE_TOKENIZER
89
- if not _SENTENCE_TOKENIZER:
90
- _SENTENCE_TOKENIZER = _load_sentence_tokenizer()
91
- return list(_SENTENCE_TOKENIZER.tokenize(tf.compat.as_text(text)))
92
-
93
-
94
- def _get_sentences_by_line(text, lower=False):
95
- sentences = []
96
- for line in text.splitlines():
97
- sentences.append([s.lower() if lower else s for s in _get_sentences(line)])
98
- return sentences
99
-
100
-
101
- def is_language(page, language, min_probability=0.99):
102
- """Returns True iff text is in `language` with at least `min_probability`."""
103
- unused_url, features = page
104
- text = features["text"]
105
-
106
- counter_inc_fn = get_counter_inc_fn("detected-lang")
107
-
108
- # Make langdetect predictions deterministic.
109
- import langdetect
110
-
111
- langdetect.DetectorFactory.seed = 0
112
- try:
113
- predictions = langdetect.detect_langs(text)
114
- except langdetect.lang_detect_exception.LangDetectException:
115
- counter_inc_fn("langdetect-exception")
116
- return False
117
- if not predictions:
118
- counter_inc_fn("page-filtered-nolangpredictions")
119
- return False
120
- best_prediction = predictions[0]
121
- if best_prediction.prob < min_probability:
122
- counter_inc_fn("page-filtered-lowlangdetectconf")
123
- return False
124
- if best_prediction.lang != language:
125
- counter_inc_fn("page-filtered-ignoredlang")
126
- counter_inc_fn("page-filtered-ignoredlang-%s" % (best_prediction.lang))
127
- return False
128
- counter_inc_fn("page-emited-%s" % best_prediction.lang)
129
- return True
130
-
131
-
132
- def get_clean_page_fn(badwords=None):
133
- """Returns `clean_page` with pre-compiled badword and citation regexes."""
134
- # Used to filter citation from Wikipedia pages (among others).
135
- citation_regex = re.compile(r"\[\d*\]|\[edit\]|\[citation needed\]")
136
- if badwords:
137
- badwords_regex = re.compile("[^a-z]({})[^a-z]".format("|".join(badwords or [])))
138
- else:
139
- badwords_regex = None
140
- return functools.partial(clean_page, citation_regex=citation_regex, badwords_regex=badwords_regex)
141
-
142
-
143
- def clean_page(
144
- url_and_features,
145
- citation_regex,
146
- badwords_regex=None,
147
- counter_inc_fn=None,
148
- min_words_per_line=_MIN_WORDS_PER_LINE,
149
- min_num_sentences=_MIN_NUM_SENTENCES,
150
- max_word_length=_MAX_WORD_LENGTH,
151
- ):
152
- """Cleans a CommonCrawl page, yielding nothing if it should be skipped.
153
-
154
- Cleaning removes lines with no end marks or with too few words. After line
155
- filtering, pages are filtered out if they have too few sentences based on a
156
- simple count of end marks.
157
-
158
- Args:
159
- url_and_features: tuple(string, dict), the url and features of the page.
160
- citation_regex: Regex to use for finding Wikipedia-like citations to filter.
161
- badwords_regex: Regex to use for finding badwords. Default None, which means
162
- don't apply badwords filtering.
163
- counter_inc_fn: function, a function taking the name of a counter to be
164
- incremented and the (optional) amount. Defaults to a beam Metric counter.
165
- min_words_per_line: int, the minimum number of words a line needs to not be
166
- removed.
167
- min_num_sentences: int, the minimum number of sentences a page needs to not
168
- be skipped.
169
- max_word_length: int, the maximum number of characters allowed in a word.
170
- Lines containing a word with too many characters are removed.
171
- Yields:
172
- The url and cleaned text for the page.
173
- """
174
- url, features = url_and_features
175
- text = features["text"]
176
-
177
- if not counter_inc_fn:
178
- counter_inc_fn = get_counter_inc_fn("clean-page")
179
-
180
- lines = text.splitlines()
181
- valid_lines = []
182
- num_sentences = 0
183
-
184
- def line_has_too_long_word(line):
185
- for word in line.split():
186
- if len(word) > max_word_length:
187
- return True
188
- return False
189
-
190
- for line in lines:
191
- line = line.strip()
192
- if line_has_too_long_word(line):
193
- counter_inc_fn("lines-with-too-long-word")
194
- continue
195
- line = citation_regex.sub("", line)
196
- if not line.endswith(_END_MARKS) or line.endswith(_ELLIPSIS):
197
- counter_inc_fn("lines-no-endmark")
198
- continue
199
- if len(line.split()) < min_words_per_line:
200
- counter_inc_fn("lines-too-short")
201
- continue
202
- line_lower = line.lower()
203
- # Remove documents which contain lorem ipsum
204
- if "lorem ipsum" in line_lower:
205
- counter_inc_fn("filtered-page-loremipsum")
206
- return
207
- # Remove "javascript must be enabled" notices
208
- if "javascript" in line_lower:
209
- counter_inc_fn("lines-javascript")
210
- continue
211
- # Remove docs which probably contain javascript code
212
- if "{" in line:
213
- counter_inc_fn("filtered-page-squigglybracket")
214
- return
215
- # Remove policy lines
216
- if any(p in line_lower for p in _POLICY_SUBSTRINGS):
217
- counter_inc_fn("lines-policy")
218
- continue
219
- # If any badword appears on its own in the line, skip this doc
220
- if badwords_regex:
221
- badwords_found = badwords_regex.search(line_lower)
222
- if badwords_found is not None:
223
- counter_inc_fn("filtered-page-badword")
224
- return
225
- num_sentences += len(_get_sentences(line))
226
- valid_lines.append(line)
227
- counter_inc_fn("lines-valid")
228
-
229
- if num_sentences < min_num_sentences:
230
- counter_inc_fn("filtered-page-toofewsentences")
231
- return
232
- counter_inc_fn("emitted-clean-pages")
233
- features["text"] = "\n".join(valid_lines).strip()
234
- yield url, features
235
-
236
-
237
- def _hash_line(line):
238
- import tensorflow.compat.v2 as tf
239
-
240
- m = hashlib.md5()
241
- m.update(tf.compat.as_text(line).encode("utf-8").strip().lower())
242
- return m.hexdigest()
243
-
244
-
245
- def _emit_url_to_lines(page):
246
- """Emits url to all (lower-cased, hashed) lines."""
247
- url, features = page
248
- text = features["text"]
249
- for line in text.split("\n"):
250
- yield _hash_line(line), url
251
-
252
-
253
- def _emit_line_to_urls(el, counter_inc_fn):
254
- """Emits (hashed) line to all but one url."""
255
- import tensorflow.compat.v2 as tf
256
-
257
- line, urls = el
258
- # Materialize urls as a list.
259
- urls = list(urls)
260
- # Hash urls and sort to have a consistent, but unbiased, selection when the
261
- # same urls exist for multiple lines.
262
- skip_url = min(urls, key=lambda x: hashlib.md5(tf.compat.as_text(x).encode("utf-8")).hexdigest())
263
- for url in urls:
264
- if url != skip_url:
265
- yield url, line
266
- counter_inc_fn("emitted-line-duplicate", amt=len(urls) - 1)
267
-
268
-
269
- def _remove_lines_from_text(el, counter_inc_fn, min_num_sentences=_MIN_NUM_SENTENCES):
270
- """Removes matching lines from the page.
271
-
272
- Process the result of a join containing a single value for 'features' and zero
273
- or more values for 'lines'. Each value in 'lines' is a lower-cased, hashed
274
- line.
275
-
276
- If a line has fewer sentences than `max_window_size`, the full line is
277
- compared for a match.
278
-
279
- Args:
280
- el: `(string, {'features': features_dict, 'lines': [string]})`,
281
- element containing the result of a join on key with both the page text
282
- and lower-cased, hashed lines to remove.
283
- counter_inc_fn: function, a function taking the name of a counter to be
284
- incremented and the (optional) amount.
285
- min_num_sentences: int, the minimum number of sentences a page needs to not
286
- be skipped.
287
-
288
- Yields:
289
- url: The URL of the page.
290
- features: The page features with lines removed from text.
291
- """
292
- url, join_values = el
293
- features = join_values["features"]
294
-
295
- assert len(features) == 1, "Invalid page count (%d) for %s" % (len(features), url)
296
- features = features[0]
297
- text = features["text"]
298
- lines_to_remove = set(join_values["lines"])
299
- new_lines = []
300
- hashed_lines = set()
301
- for line in text.split("\n"):
302
- hashed_line = _hash_line(line)
303
- if hashed_line in lines_to_remove:
304
- counter_inc_fn("filtered-lines-duplicate")
305
- elif hashed_line not in hashed_lines:
306
- new_lines.append(line)
307
- hashed_lines.add(hashed_line)
308
- new_text = "\n".join(new_lines)
309
- if len(_get_sentences(new_text)) < min_num_sentences:
310
- counter_inc_fn("filtered-doc-toofewsentences")
311
- return
312
- new_features = features.copy()
313
- new_features["text"] = new_text
314
- yield (url, new_features)
315
-
316
-
317
- def remove_duplicate_text(pages):
318
- """Utility to remove duplicate lines across text documents."""
319
- # Output: url, lines
320
- import apache_beam as beam
321
-
322
- counter_inc_fn = get_counter_inc_fn("dedupe-lines")
323
- lines_to_remove = (
324
- pages
325
- | beam.FlatMap(_emit_url_to_lines)
326
- | "group_sentences" >> beam.GroupByKey()
327
- | beam.FlatMap(_emit_line_to_urls, counter_inc_fn=counter_inc_fn)
328
- )
329
-
330
- # Output: url, text
331
- final_docs = (
332
- {"features": pages, "lines": lines_to_remove}
333
- | "group_features_and_lines_by_url" >> beam.CoGroupByKey()
334
- | beam.FlatMap(_remove_lines_from_text, counter_inc_fn=counter_inc_fn)
335
- )
336
-
337
- return final_docs
338
-
339
-
340
- def split_wet_file(wet_file_path, counter_inc_fn=None):
341
- """Split a WET file into separate pages."""
342
- from absl import logging
343
-
344
- logging.info("Splitting file: %s", wet_file_path)
345
- if not counter_inc_fn:
346
- counter_inc_fn = get_counter_inc_fn("split-wet-file")
347
- counter_inc_fn("wet-file")
348
-
349
- import apache_beam as beam
350
-
351
- with beam.io.filesystems.FileSystems.open(wet_file_path) as f, gzip.GzipFile(fileobj=f) as g:
352
- url = None
353
- content = None
354
- content_len = None
355
- content_type = None
356
- timestamp = None
357
-
358
- def _maybe_get_page():
359
- """Generate a (url, {features}) page."""
360
- if not url and url is not None:
361
- counter_inc_fn("page-filtered-nourl")
362
- if not content and content is not None:
363
- counter_inc_fn("page-filtered-nocontent")
364
- if not content_type and content_type is not None:
365
- counter_inc_fn("page-nocontenttype")
366
- if not content_len and content_len is not None:
367
- counter_inc_fn("page-nocontentlen")
368
- if not timestamp and timestamp is not None:
369
- counter_inc_fn("page-notimestamp")
370
- if content and url:
371
- counter_inc_fn("page-emitted")
372
- return (
373
- url,
374
- {
375
- "text": "\n".join(content),
376
- "content-type": content_type,
377
- "content-length": content_len,
378
- "timestamp": timestamp,
379
- "url": url,
380
- },
381
- )
382
- return None
383
-
384
- for line in io.TextIOWrapper(g, encoding="utf-8"):
385
- line = line.strip()
386
- if not line:
387
- continue
388
- if line == _PAGE_DELIMITER:
389
- page = _maybe_get_page()
390
- if page:
391
- yield page
392
- url = ""
393
- content = []
394
- content_len = ""
395
- content_type = ""
396
- timestamp = ""
397
-
398
- if line.startswith(_URL_KEY):
399
- url = line[len(_URL_KEY) :].strip()
400
-
401
- if line.startswith(_URL_DATE):
402
- timestamp = line[len(_URL_DATE) :].strip()
403
-
404
- if line.startswith(_CONTENT_TYPE):
405
- content_type = line[len(_CONTENT_TYPE) :].strip()
406
-
407
- if line.startswith(_CONTENT_LEN):
408
- content_len = line[len(_CONTENT_LEN) :].strip()
409
-
410
- if line.startswith(_METADATA_PREFIXES):
411
- continue
412
-
413
- content.append(line)
414
-
415
- page = _maybe_get_page()
416
- if page:
417
- yield page
418
-
419
-
420
- def dedupe_urls(el):
421
- """Returns the first value for a given URL."""
422
- counter_inc_fn = get_counter_inc_fn("dedupe-urls")
423
- url, vals = el
424
- cnt = 0
425
- v = None
426
- for v in vals:
427
- cnt += 1
428
- counter_inc_fn("filtered-url-duplicate", cnt - 1)
429
- counter_inc_fn("unique-url")
430
- return url, v
431
-
432
-
433
- def is_valid_length(el, max_length=1.9e5):
434
- """Returns False iff page's text is too long."""
435
- counter_inc_fn = get_counter_inc_fn("is-valid-length")
436
- _, page = el
437
- if len(page["text"]) > max_length:
438
- counter_inc_fn("filtered-page-contenttoolong")
439
- return False
440
- counter_inc_fn("valid-length")
441
- return True
442
-
443
-
444
- def is_realnews_domain(el, realnews_domains):
445
- """Returns False iff page's (sub)domain is not allowed."""
446
- import tldextract
447
-
448
- counter_inc_fn = get_counter_inc_fn("is-realnews-domain")
449
- url, _ = el
450
- ext = tldextract.extract(url)
451
- main_domain = ext.domain + "." + ext.suffix
452
- if main_domain not in realnews_domains:
453
- counter_inc_fn("filtered-url-invaliddomain")
454
- return False
455
- allowed_subdomains = realnews_domains[main_domain]
456
- if isinstance(allowed_subdomains, list) and ext.subdomain not in allowed_subdomains:
457
- counter_inc_fn("filtered-url-invalidsubdomain")
458
- return False
459
- counter_inc_fn("realnews-domain")
460
- return True
461
-
462
-
463
- def filter_by_webtextlike(el):
464
- """Yields only pages with a matching WebText-like URL."""
465
- counter_inc_fn = get_counter_inc_fn("filter-by-webtextlike")
466
- url, join_values = el
467
- text = join_values["text"]
468
- webtextlike = join_values["webtextlike_urls"]
469
- if not webtextlike:
470
- counter_inc_fn("filtered-url-notwebtextlike")
471
- return
472
- if not text:
473
- counter_inc_fn("missing-webtextlike")
474
- return
475
- assert len(text) == 1
476
- counter_inc_fn("found-webtextlike")
477
- yield url, text[0]
478
-
479
-
480
- def normalize_url(el):
481
- import tensorflow.compat.v2 as tf
482
-
483
- url, val = el
484
- url = tf.compat.as_text(url)
485
- url = re.sub(r"https?:\/\/(www\.)?", "", url)
486
- url = re.sub(r"\?(utm_|ref|feed).*", "", url)
487
- url = url.rstrip("/")
488
- return url, val
 
 
dataset_infos.json ADDED
The diff for this file is too large to render. See raw diff
dummy/en.noblocklist/0.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3f2dd736dbf68e9be548cfb6d09c6580d9f6cd442f456429db66dd640385604e
3
+ size 5689
dummy/en.noclean/0.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:74d7b9c2a2bb72b7391026de21e3667d915180106860fcc00d4f26ebf782e101
3
+ size 5689
dummy/en/0.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8bd9d00092e2938655d4d48cd1a33c2af7f22ac001da1665a686fefd9ef6069e
3
+ size 5689
dummy/realnewslike/0.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:45adef93b869be93504c6385bca72531dcb7ea03f6557b2f914c33a81bbfb732
3
+ size 5689