Yeb Havinga committed
Commit: 04998b9
Parent(s): 8375e27

Update README.md

Files changed (1):
1. README.md (+12 -12)
README.md CHANGED
@@ -61,7 +61,7 @@ paperswithcode_id: mc4

  ### Dataset Summary

- A thoroughly cleaned version of the Dutch split of the multilingual colossal, cleaned version of Common Crawl's web crawl corpus (mC4).
+ A cleaned version (151GB) of the Dutch split (277GB) of the C4 multilingual dataset (mC4).
  Based on the [Common Crawl dataset](https://commoncrawl.org).
  The original version was prepared by [AllenAI](https://allenai.org/), hosted at the address [https://huggingface.co/datasets/allenai/c4](https://huggingface.co/datasets/allenai/c4).

@@ -78,7 +78,7 @@ In summary, the preprocessing procedure includes:

  - Less than 3 words.

- - A word longer than 1000 characters.
+ - A word longer than 250 characters.

  - An end symbol not matching end-of-sentence punctuation.

@@ -123,17 +123,17 @@ The data contains the following fields:
  To build mC4, the original authors used [CLD3](https://github.com/google/cld3) to identify over 100 languages.
  For Dutch, the whole corpus of scraped text was divided in `1032` jsonl files, `1024` for training following
  the naming style `c4-nl-cleaned.tfrecord-0XXXX-of-01024.json.gz` and 4 for validation following the
- naming style `c4-nl-cleaned.tfrecord-0000X-of-00004.json.gz`. The full set of preprocessed files takes roughly 215GB of disk space to download with Git LFS.
+ naming style `c4-nl-cleaned.tfrecord-0000X-of-00004.json.gz`. The full set of preprocessed files takes roughly 208GB of disk space to download with Git LFS.

- For ease of use under different storage capacities, the following incremental splits are available (sizes are estimates). **Important**: The sizes in GB represent the estimated weight for :
+ For ease of use under different storage capacities, the following incremental splits are available: (note: files on disk are compressed)

  |split |train size (docs, words, download + preproc disk space)|validation size|
  |:-----|------------------------------------------------------:|--------------:|
- |tiny  | 6M docs, 4B words (9 GB + 27 GB)                       |       16k docs|
- |small | 20M docs, 8B words (18 GB + 54 GB)                     |       24k docs|
- |medium| 50M docs, 20B words (47 GB + 135 GB)                   |       48k docs|
- |large | 75M docs, 30B words (71 GB + 203 GB)                   |       72k docs|
- |full  | 103M docs, 41B words (109 GB + 279 GB)                 |       96k docs|
+ |tiny  | 6M docs, 2B words (6 GB + 15 GB)                       |       16k docs|
+ |small | 15M docs, 6B words (14 GB + 36 GB)                     |       16k docs|
+ |medium| 31M docs, 12B words (28 GB + 72 GB)                    |       32k docs|
+ |large | 47M docs, 19B words (42 GB + 108 GB)                   |       48k docs|
+ |full  | 64M docs, 25B words (58 GB + 148 GB)                   |       64k docs|

  You can load any subset like this:

@@ -176,14 +176,14 @@ Refer to the original paper for more considerations regarding the choice of sour

  ### Social Impact of Dataset

- With more than 200GB of cleaned Dutch text and more than 41B estimated words, this is by far the largest available corpus for the Dutch language.
- The second largest dataset available is [OSCAR](https://oscar-corpus.com/), which is only 39GB in size for its deduplicated variant.
+ With more than 151GB (58GB compressed) of cleaned Dutch text and more than 23B estimated words, this is by far the largest available cleaned corpus for the Dutch language.
+ The second largest dataset available is [OSCAR](https://oscar-corpus.com/), which is only 39GB in size for its deduplicated variant, and contains vulgarity.
  Using this corpus for training language models with adequate computational resources will allow researchers to reach parity with the performances observed for the English language.
  This can in turn have important repercussions for the development of commercial language technology applications for the Dutch language.

  ### Discussion of Biases

- Despit the cleaning procedure aimed at removing vulgarity and profanity, it must be considered that model trained on this scraped corpus will
+ Despite the cleaning procedure aimed at removing vulgarity and profanity, it must be considered that model trained on this scraped corpus will
  inevitably reflect biases present in blog articles and comments on the Internet.
  This makes the corpus especially interesting in the context of studying data biases and how to limit their impacts.
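The filter rules touched by this commit (minimum word count, maximum word length, required end-of-sentence punctuation) are only listed in the README; the cleaning script itself is not part of this diff. The sketch below merely illustrates those rules as stated. The function name, the punctuation set, and the assumption that the check runs per text span are illustrative choices, not taken from the commit.

```python
# Illustrative sketch of the filters named in the README diff; NOT the actual
# cleaning script used for this dataset (that code is not part of this commit).
MAX_WORD_LENGTH = 250                     # this commit lowers the limit from 1000 to 250
SENTENCE_END = (".", "!", "?", '"', "'")  # assumed set of end-of-sentence symbols

def passes_filters(text: str) -> bool:
    """Return True if a text span survives the three rules shown in the diff."""
    words = text.split()
    if len(words) < 3:                                 # "Less than 3 words."
        return False
    if any(len(w) > MAX_WORD_LENGTH for w in words):   # "A word longer than 250 characters."
        return False
    if not text.rstrip().endswith(SENTENCE_END):       # "An end symbol not matching end-of-sentence punctuation."
        return False
    return True
```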
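The context line "You can load any subset like this:" precedes a code block that falls outside the hunks shown above. For reference, here is a minimal loading sketch with the Hugging Face `datasets` library; the repository id `yhavinga/mc4_nl_cleaned` and the `text` field name are assumptions based on this commit page, while the config names (tiny/small/medium/large/full) come from the splits table in the diff.

```python
from datasets import load_dataset

# Hypothetical repository id; adjust to the actual dataset repo.
REPO_ID = "yhavinga/mc4_nl_cleaned"

# Download and cache the smallest configuration.
tiny_train = load_dataset(REPO_ID, "tiny", split="train")
print(tiny_train[0]["text"][:200])

# Streaming avoids the full ~208GB Git LFS download for the larger configs.
full_stream = load_dataset(REPO_ID, "full", split="train", streaming=True)
for doc in full_stream.take(3):
    print(len(doc["text"].split()), "words")
```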