---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- nl
- en
license:
- odc-by
multilinguality:
- monolingual
- en-nl
size_categories:
  - n<1K
  - 1K<n<10K
  - 10K<n<100K
  - 100K<n<1M
  - 1M<n<10M
  - 10M<n<100M
  - 100M<n<1B
  - 1B<n<10B
source_datasets:
- extended
task_categories:
- text-generation
task_ids:
- language-modeling
paperswithcode_id: mc4
pretty_name: mC4_nl_cleaned
---

# Dataset Card for Clean Dutch mC4

## Table of Contents

- [Dataset Card for Clean Dutch mC4](#dataset-card-for-clean-dutch-mc4)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Preprocessing](#preprocessing)
    - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
    - [Data Fields](#data-fields)
    - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
  - [Considerations for Using the Data](#considerations-for-using-the-data)
    - [Social Impact of Dataset](#social-impact-of-dataset)
    - [Discussion of Biases](#discussion-of-biases)
    - [Other Known Limitations](#other-known-limitations)
  - [Additional Information](#additional-information)
    - [Licensing Information](#licensing-information)
    - [Citation Information](#citation-information)
    - [Contributions](#contributions)

## Dataset Description

- **Original Homepage:** [HF Hub](https://huggingface.co/datasets/allenai/c4)
- **Paper:** [ArXiv](https://arxiv.org/abs/1910.10683)

### Dataset Summary

A cleaned version (151GB) of the Dutch part (277GB) of the C4 multilingual dataset (mC4).
While this dataset is monolingual, it is possible to download `en-nl` interleaved data, see the Data Configs section below.
Based on the [Common Crawl dataset](https://commoncrawl.org).
The original version was prepared by [AllenAI](https://allenai.org/), hosted at the address [https://huggingface.co/datasets/allenai/c4](https://huggingface.co/datasets/allenai/c4).

### Preprocessing

The Dutch portion of mC4 was cleaned in a fashion similar to the cleaned English C4 version.
See [GitLab](https://gitlab.com/yhavinga/c4nlpreproc) for details.

In summary, the preprocessing procedure includes:

 - Removing documents containing words from a selection of the [Dutch and English List of Dirty Naughty Obscene and Otherwise Bad Words](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words).
 
 - Removing sentences containing:
 
   - Fewer than 3 words.
   
   - A word longer than 250 characters.
   
   - An end symbol not matching end-of-sentence punctuation.
   
   - Strings associated with JavaScript code (e.g. `{`), lorem ipsum, or policy information in Dutch or English.

 - Removing documents (after sentence filtering):
 
   - Containing fewer than 5 sentences.
   
   - Containing fewer than 500 or more than 50,000 characters.
   
   - Not identified as predominantly Dutch by the `langdetect` package.
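The sentence- and document-level filters above can be sketched as follows. This is an illustrative simplification, not the actual `c4nlpreproc` code (see the GitLab link above): the bad-word document filter and the language-detection check are omitted, and the real pipeline uses a proper sentence tokenizer rather than newline splitting.

```python
import re
from typing import Optional

MAX_WORD_LEN = 250           # longest allowed word within a sentence
MIN_WORDS_PER_SENT = 3       # sentences with fewer words are dropped
MIN_SENTS_PER_DOC = 5        # documents with fewer kept sentences are dropped
MIN_DOC_CHARS, MAX_DOC_CHARS = 500, 50_000

# End-of-sentence punctuation, including closing quote characters.
END_PUNCT = re.compile(r'[.!?"\u201d\u2019]$')


def keep_sentence(sent: str) -> bool:
    """Apply the sentence-level filters described above."""
    words = sent.split()
    if len(words) < MIN_WORDS_PER_SENT:
        return False
    if any(len(w) > MAX_WORD_LEN for w in words):
        return False
    if not END_PUNCT.search(sent.strip()):
        return False
    # Drop likely JavaScript fragments and lorem ipsum filler.
    if '{' in sent or 'lorem ipsum' in sent.lower():
        return False
    return True


def clean_document(text: str) -> Optional[str]:
    """Apply sentence filtering, then the document-level filters."""
    # Simplification: the real pipeline sentence-tokenizes the text.
    sentences = [s for s in text.split('\n') if keep_sentence(s)]
    cleaned = '\n'.join(sentences)
    if len(sentences) < MIN_SENTS_PER_DOC:
        return None
    if not (MIN_DOC_CHARS <= len(cleaned) <= MAX_DOC_CHARS):
        return None
    return cleaned
```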

Cleaning all the original Dutch shards of mC4 (1024 train shards of ~220MB each, 4 validation shards of ~24MB each) with parallel
processing on the 96 CPU cores of a Google Cloud TPUv3 host took roughly 10 hours, mostly due to the demanding steps of sentence
tokenization and language detection. The total size of the compressed `.json.gz` files is roughly halved by the procedure.

## Dataset Structure

### Data Instances

An example from the dataset:

```
{
  'timestamp': '2019-02-22T15:37:25Z',
  'url': 'https://ondernemingen.bnpparibasfortis.be/nl/artikel?n=vijf-gouden-tips-voor-succesvol-zaken-doen-met-japan',
  'text': 'Japanse bedrijven zijn niet alleen hondstrouw aan hun leveranciers , ze betalen ook nog eens erg stipt. Alleen is het niet zo makkelijk er een voet tussen de deur te krijgen. Met de volgende tips hebt u alvast een streepje voor.\nIn Japan draait alles om vertrouwen. Neem voldoende tijd om een relatie op te bouwen.Aarzel niet om tijdig een lokale vertrouwenspersoon in te schakelen.\nJapan is een erg competitieve markt.Kwaliteit en prijs zijn erg belangrijk, u zult dus het beste van uzelf moeten geven. Gelukkig is de beloning groot. Japanse zakenlui zijn loyaal en betalen stipt!\nJapanners houden er eigenzinnige eisen op na. Kom dus niet aanzetten met uw standaardproducten voor de Europese markt. Zo moet een producent van diepvriesfrieten bijvoorbeeld perfect identieke frietjes kunnen leveren in mini- verpakkingen. Het goede nieuws is dat Japanners voor kwaliteit graag diep in hun buidel tasten.\nEn u dacht dat Europa lijdt aan reglementitis? Japanners kennen er ook wat van. Tal van voorschriften zeggen wat je wel en niet mag doen. Gelukkig zijn de regels helder geformuleerd.\nHet gebruik van het Engels is niet echt ingeburgerd in Japan. Betrek een tolk bij uw onderhandelingen en zorg voor correcte vertalingen van handleidingen of softwareprogramma’s.'
}
```

### Data Fields

The data contains the following fields:

- `url`: url of the source as a string
- `text`: text content as a string
- `timestamp`: timestamp of extraction as a string

### Data Configs

To build mC4, the original authors used [CLD3](https://github.com/google/cld3) to identify over 100 languages.
For Dutch, the whole corpus of scraped text was divided into `1028` jsonl files: `1024` training files following
the naming style `c4-nl-cleaned.tfrecord-0XXXX-of-01024.json.gz` and 4 validation files following the
naming style `c4-nl-cleaned.tfrecord-0000X-of-00004.json.gz`. The full set of pre-processed files takes roughly 208GB of disk space to download with Git LFS.
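The naming scheme can be made concrete with a few lines of Python (illustrative only; the file names are generated here, not listed from the repository):

```python
# Reconstruct the shard file names following the naming style above.
train_files = [f"c4-nl-cleaned.tfrecord-{i:05d}-of-01024.json.gz" for i in range(1024)]
val_files = [f"c4-nl-cleaned.tfrecord-{i:05d}-of-00004.json.gz" for i in range(4)]

print(train_files[0])   # c4-nl-cleaned.tfrecord-00000-of-01024.json.gz
print(val_files[-1])    # c4-nl-cleaned.tfrecord-00003-of-00004.json.gz
```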

For ease of use under different storage capacities, the following incremental configs are available (note: files on disk are compressed):

| config | train size (docs, words, download + preproc disk space) | validation size |
|:-------|--------------------------------------------------------:|----------------:|
| micro  |                             125k docs, 23M words (<1GB) |        16k docs |
| tiny   |                        6M docs, 2B words (6 GB + 15 GB) |        16k docs |
| small  |                      15M docs, 6B words (14 GB + 36 GB) |        16k docs |
| medium |                     31M docs, 12B words (28 GB + 72 GB) |        32k docs |
| large  |                    47M docs, 19B words (42 GB + 108 GB) |        48k docs |
| full   |                    64M docs, 25B words (58 GB + 148 GB) |        64k docs |

For each config above there also exists a config `<name>_en_nl` that interleaves `nl` and `en` examples from the cleaned
`en` variant of C4.

You can load any config like this:

```python
from datasets import load_dataset

datasets = load_dataset('yhavinga/mc4_nl_cleaned', 'tiny')
print(datasets)
```

This will print

```
DatasetDict({
    train: Dataset({
        features: ['text', 'timestamp', 'url'],
        num_rows: 6303893
    })
    validation: Dataset({
        features: ['text', 'timestamp', 'url'],
        num_rows: 16189
    })
})
```

Since the configs are quite large, you may want to traverse them using the streaming mode available since Datasets v1.9.0:

```python
from datasets import load_dataset

mc4_nl_full_stream = load_dataset('yhavinga/mc4_nl_cleaned', "full", split='train', streaming=True)
print(next(iter(mc4_nl_full_stream))) # Prints the example presented above
```

## Dataset Creation

Refer to the original paper for more considerations regarding the choice of sources and the scraping process for creating `mC4`.

## Considerations for Using the Data

### Social Impact of Dataset

With more than 151GB (58GB compressed) of cleaned Dutch text and more than 23B estimated words, this is by far the largest cleaned corpus available for the Dutch language.
The second-largest available dataset is [OSCAR](https://oscar-corpus.com/), whose deduplicated variant is only 39GB in size and still contains vulgarity.
Training language models on this corpus with adequate computational resources will allow researchers to reach parity with the performance observed for the English language.
This can in turn have important repercussions for the development of commercial language-technology applications for the Dutch language.

### Discussion of Biases

Despite the cleaning procedure aimed at removing vulgarity and profanity, it must be considered that models trained on this scraped corpus will
inevitably reflect biases present in blog articles and comments on the Internet.
This makes the corpus especially interesting in the context of studying data biases and how to limit their impact.

## Additional Information

### Licensing Information

AllenAI is releasing this dataset under the terms of ODC-BY. By using this dataset, you are also bound by the Common Crawl terms of use with respect to the content it contains.

### Citation Information

```
@article{2019t5,
    author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
    title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
    journal = {arXiv e-prints},
    year = {2019},
    archivePrefix = {arXiv},
    eprint = {1910.10683},
}
```

### Contributions

Thanks to [gabriele.sarti996@gmail.com](mailto:gabriele.sarti996@gmail.com), [@dirkgr](https://github.com/dirkgr) and [@lhoestq](https://github.com/lhoestq) for
providing the `cleaned_it_mc4` example that shows how to upload a dataset to the Hugging Face Hub.