This is the processed version of Google's C4 dataset.
We prepared five variants of the data:
For reference, these are the sizes of the variants:
en: 305 GB
en.noclean: 2.3 TB
en.noblocklist: 380 GB
realnewslike: 15 GB
multilingual: 14 TB
The en.noblocklist variant is exactly the same as the
en variant, except that we turned off the so-called "badwords filter", which removes all documents containing words from the lists at https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words.
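The blocklist filtering described above can be sketched roughly as follows. This is an illustrative sketch, not Google's actual implementation; the two-word blocklist stands in for the LDNOOBW lists, and the tokenization is deliberately simplistic:

```python
# Hypothetical stand-in for the LDNOOBW word lists:
BLOCKLIST = {"badword1", "badword2"}

def keep_document(text: str) -> bool:
    """Return True if the document contains no blocklisted word.

    Words are lowercased and stripped of surrounding punctuation
    before being checked against the blocklist.
    """
    words = {w.strip(".,!?\"'").lower() for w in text.split()}
    return BLOCKLIST.isdisjoint(words)

docs = [
    "A perfectly clean document.",
    "This one contains badword1 somewhere.",
]
# Keep only documents that pass the filter:
filtered = [d for d in docs if keep_document(d)]
```

Note that a filter like this drops the whole document on a single match, which is why the en variant is noticeably smaller than en.noblocklist.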
How do I download this?
First, make sure you have Git Large File Storage (git-lfs) installed. Once that is done, downloading the whole dataset, all five variants, is easy:
git clone --depth 1 --branch mC4_3.1.0 https://huggingface.co/datasets/allenai/c4
This will download 18 TB to your local drive. If you want to be more selective about what you download, clone the repo like this:
GIT_LFS_SKIP_SMUDGE=1 git clone --depth 1 --branch mC4_3.1.0 https://huggingface.co/datasets/allenai/c4
This variant of the git clone command downloads only the small stub files that Git LFS uses as placeholders, so you can see every filename without downloading the data itself.
You can then convert the stubs into their real files with
git lfs pull --include "...".
For example, if you wanted all the Dutch documents from the multilingual set, you would run
cd c4
git lfs pull --include "multilingual/c4-nl.*.json.gz"
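Once pulled, each shard is a gzip-compressed file of newline-delimited JSON, one document per line with "text", "timestamp", and "url" fields. A minimal reading sketch, using an in-memory stand-in for a shard so it runs without the download:

```python
import gzip
import io
import json

# Build a tiny gzipped JSON-lines payload as a stand-in for a real
# shard such as multilingual/c4-nl.*.json.gz:
sample_docs = [
    {"text": "Hallo wereld", "timestamp": "2019-04-20T12:00:00Z",
     "url": "https://example.com/a"},
]
raw = "".join(json.dumps(d) + "\n" for d in sample_docs)
shard_bytes = gzip.compress(raw.encode("utf-8"))

def read_shard(fileobj):
    """Yield one document dict per line of a gzipped JSON-lines shard."""
    with gzip.open(fileobj, mode="rt", encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

# With a real file you would pass its path to gzip.open instead:
docs = list(read_shard(io.BytesIO(shard_bytes)))
```

Streaming line by line like this keeps memory flat even on multi-gigabyte shards, since no shard ever has to be fully decompressed into memory.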
Big ups to the good folks at Common Crawl, whose data made this possible (consider donating!); to Google, for creating the code that curates and filters the data; and to Hugging Face, who had no issue hosting these 18 TB of data for public download!