---
multilinguality:
- multilingual
language_creators:
- found
annotations_creators:
- no-annotation
source_datasets:
- original
---
This is the processed version of [Google's C4 dataset](https://www.tensorflow.org/datasets/catalog/c4).

We prepared three variants of the data: `en`, `en.noclean`, and `realnewslike`. A fourth variant, `webtextlike`, was not ready at the time of writing, but we are working on it. If you are interested in the `multilingual` version, please get in touch.

For reference, these are the sizes of the variants:

- `en`: 300GB
- `en.noclean`: 2.3TB
- `realnewslike`: 15GB

# How do I download this?

Unfortunately, we ran out of time to make this a proper Hugging Face dataset that is accessible through the `datasets` Python package. Until we get that ready, please use git to download the data. First, make sure you have [Git Large File Storage](https://git-lfs.github.com) installed. Once that is done, downloading the whole dataset, all three variants, is easy:

```bash
git clone https://huggingface.co/datasets/allenai/c4
```
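
If you have only just installed Git LFS, you may also need to activate it once before cloning (a one-time setup per machine):

```bash
# one-time setup: registers the git-lfs filter so large files are fetched during clone
git lfs install
```

Note that this clones all three variants at once, so expect a download in the terabyte range (see the sizes above).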

If you want only one of the variants, you need some more commands:

```bash
git clone -n https://huggingface.co/datasets/allenai/c4
cd c4
git sparse-checkout init --cone
git sparse-checkout set en
# -n skips the initial checkout, so materialize the selected files explicitly:
git checkout
```

You can pass multiple directories to `git sparse-checkout set` (or use `git sparse-checkout add` afterwards) to select more than one variant, as shown below.
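
For example, to select both the `en` and `realnewslike` variants, replace the `git sparse-checkout set en` line above with the following (a sketch; any combination of the variant directories listed above works the same way):

```bash
# select two variant directories in one go
git sparse-checkout set en realnewslike
```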

# Acknowledgements

Big ups to the good folks at [Common Crawl](https://commoncrawl.org) whose data made this possible ([consider donating](http://commoncrawl.org/donate/)!), to Google for creating the code that curates and filters the data, and to Huggingface, who had no issue with hosting these 3TB of data for public download!