yhavinga committed
Commit bc9bc91 (1 parent: 500054b)

Update README.md

Files changed (1)
  1. README.md +8 -8
README.md CHANGED
@@ -11,7 +11,6 @@ license:
 - odc-by
 multilinguality:
 - monolingual
-- multilingual
 - en-nl
 size_categories:
 micro:
@@ -66,7 +65,8 @@ paperswithcode_id: mc4
 
 ### Dataset Summary
 
-A cleaned version (151GB) of the Dutch split (277GB) of the C4 multilingual dataset (mC4).
+A cleaned version (151GB) of the Dutch part (277GB) of the C4 multilingual dataset (mC4).
+While this dataset is monolingual, it is possible to download `en-nl` interleaved data; see the Data Configs section below.
 Based on the [Common Crawl dataset](https://commoncrawl.org).
 The original version was prepared by [AllenAI](https://allenai.org/), hosted at the address [https://huggingface.co/datasets/allenai/c4](https://huggingface.co/datasets/allenai/c4).
 
@@ -123,16 +123,16 @@ The data contains the following fields:
 - `text`: text content as a string
 - `timestamp`: timestamp of extraction as a string
 
-### Data Splits
+### Data Configs
 
 To build mC4, the original authors used [CLD3](https://github.com/google/cld3) to identify over 100 languages.
 For Dutch, the whole corpus of scraped text was divided into `1032` jsonl files, `1024` for training following
 the naming style `c4-nl-cleaned.tfrecord-0XXXX-of-01024.json.gz` and 4 for validation following the
-naming style `c4-nl-cleaned.tfrecord-0000X-of-00004.json.gz`. The full set of preprocessed files takes roughly 208GB of disk space to download with Git LFS.
+naming style `c4-nl-cleaned.tfrecord-0000X-of-00004.json.gz`. The full set of pre-processed files takes roughly 208GB of disk space to download with Git LFS.
 
 For ease of use under different storage capacities, the following incremental configs are available (note: files on disk are compressed):
 
-| subset | train size (docs, words, download + preproc disk space) | validation size |
+| config | train size (docs, words, download + preproc disk space) | validation size |
 |:-------|--------------------------------------------------------:|----------------:|
 | micro  | 125k docs, 23M words (<1GB)                              | 16k docs        |
 | tiny   | 6M docs, 2B words (6 GB + 15 GB)                         | 16k docs        |
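
> Editor's note: since the hunk above documents both the shard naming scheme and the per-record fields, here is a minimal sketch of peeking into a single downloaded training shard. The local file name merely instantiates the README's pattern and is a placeholder:

```python
import gzip
import json

# One training shard, named after the README's pattern
# c4-nl-cleaned.tfrecord-0XXXX-of-01024.json.gz (placeholder path).
shard = "c4-nl-cleaned.tfrecord-00000-of-01024.json.gz"

# Shards are gzipped JSON Lines: one document per line.
with gzip.open(shard, "rt", encoding="utf-8") as f:
    for i, line in enumerate(f):
        doc = json.loads(line)
        # `text` and `timestamp` are the fields documented above.
        print(doc["timestamp"], doc["text"][:80])
        if i == 2:  # peek at the first three documents only
            break
```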
@@ -141,10 +141,10 @@ For ease of use under different storage capacities, the following incremental co
 | large  | 47M docs, 19B words (42 GB + 108 GB)                     | 48k docs        |
 | full   | 64M docs, 25B words (58 GB + 148 GB)                     | 64k docs        |
 
-For each subset there also exists a config `<name>_en_nl` that interleaves `nl` and `en` examples from the cleaned
+For each config above there also exists a config `<name>_en_nl` that interleaves `nl` and `en` examples from the cleaned
 `en` variant of C4.
 
-You can load any subset like this:
+You can load any config like this:
 
 ```python
 from datasets import load_dataset
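
> Editor's note: the diff truncates the loading snippet at the hunk boundary. For context, loading one of the configs from the table, or its interleaved `_en_nl` variant, would look roughly like the sketch below. The repository id `yhavinga/mc4_nl_cleaned` is an assumption inferred from the committer's namespace, not something the diff shows:

```python
from datasets import load_dataset

# "micro" is the smallest config in the table above; the dataset
# path is assumed, not confirmed by the diff.
mc4_nl_micro = load_dataset("yhavinga/mc4_nl_cleaned", "micro")

# The Dutch/English interleaved variant described above.
mc4_en_nl_micro = load_dataset("yhavinga/mc4_nl_cleaned", "micro_en_nl")

print(mc4_nl_micro)  # DatasetDict with train and validation splits
```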
@@ -168,7 +168,7 @@ DatasetDict({
 })
 ```
 
-Since splits are quite large, you may want to traverse them using the streaming mode available starting from Datasets v1.9.0:
+Since the configs are quite large, you may want to traverse them using the streaming mode available starting from Datasets v1.9.0:
 
 ```python
 from datasets import load_dataset
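
> Editor's note: the streaming snippet is likewise cut off at the hunk boundary. A sketch under the same assumed repository id, using the `streaming=True` flag that the README says is available from Datasets v1.9.0:

```python
from datasets import load_dataset

# Streaming avoids materializing the ~150GB "full" config on disk;
# load_dataset returns an IterableDataset when streaming=True.
stream = load_dataset(
    "yhavinga/mc4_nl_cleaned", "full", split="train", streaming=True
)

for i, doc in enumerate(stream):
    print(doc["timestamp"], len(doc["text"]))
    if i == 4:  # stop after five documents
        break
```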
 