This is the processed version of [Google's C4 dataset](https://www.tensorflow.org/datasets/catalog/c4).

We prepared five variants of the data: `en`, `en.noclean`, `en.noblocklist`, `realnewslike`, and `multilingual`.

For reference, these are the sizes of the variants:

- `en`: 305 GB
- `en.noclean`: 2.3 TB
- `en.noblocklist`: 380 GB
- `realnewslike`: 15 GB
- `multilingual`: 14 TB

The `en.noblocklist` variant is exactly the same as the `en` variant, except we turned off the so-called "badwords filter", which removes all documents that contain words from the lists at [LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words).

# How do I download this?

First, make sure you have [Git Large File Storage](https://git-lfs.github.com) installed. Once that is done, downloading the whole dataset, all five variants, is easy:

```bash
git clone --depth 1 --branch mC4_3.1.0 https://huggingface.co/datasets/allenai/c4
```
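If the clone only produces small pointer files instead of the actual data, Git LFS is probably not hooked into your Git configuration yet. A quick check using the standard `git-lfs` CLI:

```bash
# One-time setup: install the Git LFS hooks into your Git configuration
git lfs install

# Confirm the client is on your PATH and print its version
git lfs version
```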

The full clone will download 18 TB to your local drive.
If you want to be more precise about what you download, clone the repo like this:

```bash
GIT_LFS_SKIP_SMUDGE=1 git clone --depth 1 --branch mC4_3.1.0 https://huggingface.co/datasets/allenai/c4
```

In this variant, `git clone` downloads only the small stub files that Git LFS uses in place of the real data, so you can see all the filenames that exist without downloading anything large.
You can then convert the stubs into their real files with `git lfs pull --include "..."`.
For example, if you wanted all the Dutch documents from the multilingual set, you would run

```bash
cd c4
git lfs pull --include "multilingual/c4-nl.*.json.gz"
```
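Before pulling, you can also check which files a pattern matches and how large they are; `git lfs ls-files --size` lists the LFS-tracked files without downloading them. A sketch for the Dutch files:

```bash
# List the Dutch multilingual files and their sizes, without downloading anything
git lfs ls-files --size | grep "multilingual/c4-nl\."
```

Once pulled, each file is gzipped JSON Lines, one document per line. A minimal way to peek at the first Dutch document (assuming `jq` is installed for pretty-printing):

```bash
# Decompress on the fly and pretty-print the first record
zcat multilingual/c4-nl.*.json.gz | head -n 1 | jq .
```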

# Acknowledgements

Big ups to the good folks at [Common Crawl](https://commoncrawl.org) whose data made this possible ([consider donating](http://commoncrawl.org/donate/)!), to Google for creating the code that curates and filters the data, and to Hugging Face, who had no issue with hosting these 18 TB of data for public download!

### License

We are releasing this dataset under the terms of [ODC-BY](https://opendatacommons.org/licenses/by/1-0/).
By using this, you are also bound by the [Common Crawl terms of use](https://commoncrawl.org/terms-of-use/) in respect of the content contained in the dataset.