This is the processed version of Google's C4 dataset.

We prepared three variants of the data: en, en.noclean, and realnewslike. A fourth variant, webtextlike, was not ready as of this writing, but we are working on it. If you are interested in the multilingual version, please get in touch.

For reference, these are the sizes of the variants:

  • en: 300GB
  • en.noclean: 2.3TB
  • realnewslike: 15GB

How do I download this?

Unfortunately, we ran out of time to make this into a proper Huggingface dataset, accessible through the datasets Python package. Until we get that ready, please use git to download the data. First, make sure you have Git Large File Storage (git-lfs) installed. Once that is done, downloading the whole dataset, all three variants at once, is easy:

git clone https://huggingface.co/datasets/allenai/c4

If you want only one of the variants, you need a few more commands. Because the clone is done with -n (no checkout), you also need a final git checkout to actually populate the working tree:

git clone -n https://huggingface.co/datasets/allenai/c4
cd c4
git sparse-checkout init --cone
git sparse-checkout set en
git checkout main

Note that git sparse-checkout set replaces the current selection rather than adding to it. To get multiple variants, either pass them all in one call (git sparse-checkout set en realnewslike) or use git sparse-checkout add for each additional one.
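Once downloaded, each variant consists of gzipped JSON-lines shards. Here is a minimal sketch for streaming documents out of one shard; the example shard path and the field names ("text", "timestamp", "url") are assumptions based on the released file layout, not something this README guarantees:

```python
import gzip
import json

def read_c4_shard(path):
    """Yield one document per line from a gzipped JSON-lines shard."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

# Hypothetical usage; shard name and fields are assumptions:
# for doc in read_c4_shard("en/c4-train.00000-of-01024.json.gz"):
#     print(doc["url"])
#     print(doc["text"][:100])
#     break
```

Streaming with a generator like this avoids loading a multi-hundred-megabyte shard into memory at once.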

Acknowledgements

Big ups to the good folks at Common Crawl whose data made this possible (consider donating!), to Google for creating the code that curates and filters the data, and to Huggingface, who had no issue with hosting these 3TB of data for public download!
