Yeb Havinga committed on
Commit 56b7cad • 1 Parent(s): 5b9af41

Renamed validation files back to '-validation'

README.md CHANGED
@@ -94,7 +94,7 @@ In summary, the preprocessing procedure includes:
  - Not identified as prevalently Dutch by the `LangDetect` package.
 
  Using parallel processing with 96 CPU cores on a TPUv3 via Google Cloud to perform the complete clean of all the original Dutch
- shards of mC4 (1024 of ~220Mb train, 8 of ~24Mb validation) required roughly 10 hours due to the demanding steps of sentence
+ shards of mC4 (1024 of ~220Mb train, 4 of ~24Mb validation) required roughly 10 hours due to the demanding steps of sentence
  tokenization and language detection. The total size of compressed `.json.gz` files is roughly halved after the procedure.
 
  ## Dataset Structure
@@ -121,13 +121,16 @@ The data contains the following fields:
 
  ### Data Splits
 
- To build mC4, the original authors used [CLD3](https://github.com/google/cld3) to identify over 100 languages. For Dutch, the whole corpus of scraped text was divided in `1032` jsonl files, `1024` for training following the naming style `c4-it.tfrecord-0XXXX-of-01024.json.gz` and 8 for validation following the naming style `c4-it-validation.tfrecord-0000X-of-00008.json.gz`. The full set of preprocessed files takes roughly 215GB of disk space to download with Git LFS.
+ To build mC4, the original authors used [CLD3](https://github.com/google/cld3) to identify over 100 languages.
+ For Dutch, the whole corpus of scraped text was divided into `1028` jsonl files, `1024` for training following
+ the naming style `c4-nl-cleaned.tfrecord-0XXXX-of-01024.json.gz` and 4 for validation following the
+ naming style `c4-nl-validation-cleaned.tfrecord-0000X-of-00004.json.gz`. The full set of preprocessed files takes roughly 215GB of disk space to download with Git LFS.
 
  For ease of use under different storage capacities, the following incremental splits are available (sizes are estimates). **Important**: The sizes in GB represent the estimated weight for:
 
  |split |train size (docs, words, download + preproc disk space)|validation size|
  |:-----|------------------------------------------------------:|--------------:|
- |tiny  | 10M docs, 4B words (9 GB + 27 GB)                      | 12k docs      |
+ |tiny  | 6M docs, 4B words (9 GB + 27 GB)                       | 16k docs      |
  |small | 20M docs, 8B words (18 GB + 54 GB)                     | 24k docs      |
  |medium| 50M docs, 20B words (47 GB + 135 GB)                   | 48k docs      |
  |large | 75M docs, 30B words (71 GB + 203 GB)                   | 72k docs      |
@@ -139,6 +142,22 @@ You can load any subset like this:
  from datasets import load_dataset
 
  datasets = load_dataset('yhavinga/mc4_nl_cleaned', 'tiny', streaming=True)
+ print(datasets)
+ ```
+
+ Yields output:
+
+ ```
+ DatasetDict({
+     train: Dataset({
+         features: ['text', 'timestamp', 'url'],
+         num_rows: 6303893
+     })
+     validation: Dataset({
+         features: ['text', 'timestamp', 'url'],
+         num_rows: 16189
+     })
+ })
  ```
 
  Since splits are quite large, you may want to traverse them using the streaming mode available starting from Datasets v1.9.0:
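As an illustration of that streaming mode, here is a minimal sketch (not part of the card's diff itself); the `islice` bound and the printed fields are only for demonstration, assuming the `text` and `url` fields shown in the output above:

```python
from itertools import islice

from datasets import load_dataset

# Stream the 'tiny' config instead of downloading all shards up front.
datasets = load_dataset('yhavinga/mc4_nl_cleaned', 'tiny', streaming=True)

# Iterate lazily over the train split; islice keeps the demo bounded to 3 docs.
for doc in islice(datasets['train'], 3):
    print(doc['url'])
    print(doc['text'][:200])
```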
mc4_nl_cleaned.py CHANGED
@@ -49,11 +49,11 @@ _HOMEPAGE = "https://github.com/allenai/allennlp/discussions/5056"
 
 _LICENSE = "Open Data Commons Attribution License (ODC-By) v1.0"
 
-_BASE_URL = "https://huggingface.co/datasets/yhavinga/mc4_nl_cleaned/resolve/main/mc4_nl_cleaned/{split}/c4-nl-cleaned.tfrecord-{index:05d}-of-{n_shards:05d}.json.gz"
+_BASE_URL = "https://huggingface.co/datasets/yhavinga/mc4_nl_cleaned/resolve/main/mc4_nl_cleaned/{split}/c4-nl{validation}-cleaned.tfrecord-{index:05d}-of-{n_shards:05d}.json.gz"
 
 _CONFIGS = dict(
     tiny={"train": 100, "validation": 1},
-    small={"train": 250, "validation": 2},
+    small={"train": 250, "validation": 1},
     medium={"train": 500, "validation": 2},
     large={"train": 750, "validation": 3},
     full={"train": 1024, "validation": 4},
@@ -150,6 +150,7 @@ class Mc4(datasets.GeneratorBasedBuilder):
                 _BASE_URL.format(
                     split=split,
                     index=index,
+                    validation="-validation" if split=="validation" else "",
                     n_shards=4 if split == "validation" else 1024,
                 )
                 for index in range(_CONFIGS[self.config.name][split])
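To make the effect of the new `{validation}` placeholder concrete, here is a small standalone sketch; the `shard_urls` helper is hypothetical and not part of the loader script, while the URL template and shard counts are copied from the diff above:

```python
# Shard-URL template after this commit: note the {validation} slot, which
# matches the renamed validation files listed at the bottom of this commit.
_BASE_URL = (
    "https://huggingface.co/datasets/yhavinga/mc4_nl_cleaned/resolve/main/"
    "mc4_nl_cleaned/{split}/c4-nl{validation}-cleaned.tfrecord-{index:05d}-of-{n_shards:05d}.json.gz"
)

# Shard counts per config, copied from the diff above.
_CONFIGS = dict(
    tiny={"train": 100, "validation": 1},
    small={"train": 250, "validation": 1},
    medium={"train": 500, "validation": 2},
    large={"train": 750, "validation": 3},
    full={"train": 1024, "validation": 4},
)


def shard_urls(config_name: str) -> list[str]:
    """Resolve the shard URLs a given config would download (hypothetical helper)."""
    urls = []
    for split, n_files in _CONFIGS[config_name].items():
        for index in range(n_files):
            urls.append(
                _BASE_URL.format(
                    split=split,
                    index=index,
                    validation="-validation" if split == "validation" else "",
                    n_shards=4 if split == "validation" else 1024,
                )
            )
    return urls


# The single validation shard of the 'tiny' config now resolves to the renamed file:
# .../validation/c4-nl-validation-cleaned.tfrecord-00000-of-00004.json.gz
print(shard_urls("tiny")[-1])
```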
mc4_nl_cleaned/validation/{c4-nl-cleaned.tfrecord-00000-of-00004.json.gz → c4-nl-validation-cleaned.tfrecord-00000-of-00004.json.gz} RENAMED
File without changes
mc4_nl_cleaned/validation/{c4-nl-cleaned.tfrecord-00001-of-00004.json.gz → c4-nl-validation-cleaned.tfrecord-00001-of-00004.json.gz} RENAMED
File without changes
mc4_nl_cleaned/validation/{c4-nl-cleaned.tfrecord-00002-of-00004.json.gz → c4-nl-validation-cleaned.tfrecord-00002-of-00004.json.gz} RENAMED
File without changes
mc4_nl_cleaned/validation/{c4-nl-cleaned.tfrecord-00003-of-00004.json.gz → c4-nl-validation-cleaned.tfrecord-00003-of-00004.json.gz} RENAMED
File without changes