Update readme to temporarily point to the 3.1.0 branch
README.md
CHANGED
We prepared five variants of the data: `en`, `en.noclean`, `en.noblocklist`, `realnewslike`, and `multilingual`.
For reference, these are the sizes of the variants:

- `en`: 305 GB
- `en.noclean`: 2.3 TB
- `en.noblocklist`: 380 GB
- `realnewslike`: 15 GB
- `multilingual`: 14 TB
The `en.noblocklist` variant is exactly the same as the `en` variant, except we turned off the so-called "badwords filter", which removes all documents that contain words from the lists at https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words.

# How do I download this?
First, make sure you have [Git Large File Storage](https://git-lfs.github.com) installed. Once that is done, downloading the whole dataset, all five variants, is easy:

```bash
git clone --depth 1 --branch mC4_3.1.0 https://huggingface.co/datasets/allenai/c4
```
22 |
|
23 |
+
This will download 18 TB to your local drive.
|
24 |
+
If you want to be more precise with what you are downloading, download the repo like this:
|
25 |
|
26 |
```bash
|
27 |
+
GIT_LFS_SKIP_SMUDGE=1 git clone --depth 1 --branch -mC4_3.1.0 https://huggingface.co/datasets/allenai/c4
|
|
|
|
|
28 |
```
|
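If the clone fails right away, one quick sanity check is that Git LFS is actually installed and on your `PATH`. A minimal sketch; it prints either the installed version or an install hint:

```shell
# Print the Git LFS version if available, otherwise point at the installer.
if command -v git-lfs >/dev/null 2>&1; then
    git lfs version
else
    echo "git-lfs not found: install it from https://git-lfs.github.com"
fi
```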
29 |
|
30 |
+
The `git clone` command in this variant will download a bunch of stub files that Git LFS uses, so you can see all the filenames that exist that way.
|
31 |
+
You can then convert the stubs into their real files with `git lfs pull --include "..."`.
|
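For context, each stub is just a small text file in the Git LFS pointer format, recording the object's hash and size. A self-contained look at that format (the OID below is synthetic, not a real C4 object):

```shell
# Write a pointer file in the Git LFS v1 format (synthetic example).
cat > pointer.txt <<'EOF'
version https://git-lfs.github.com/spec/v1
oid sha256:4d7a214614ab2935c943f9e0ff69d22eadbb8f32b1258daaa5e2ca24d17e2393
size 12345
EOF

# The true file size is recorded in the stub even before `git lfs pull`:
awk '/^size /{print $2}' pointer.txt   # prints 12345
```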
For example, if you wanted all the Dutch documents from the multilingual set, you would run

```bash
cd c4
git lfs pull --include "multilingual/c4-nl.*.json.gz"
```
# Acknowledgements

Big ups to the good folks at [Common Crawl](https://commoncrawl.org) whose data made this possible ([consider donating](http://commoncrawl.org/donate/)!), to Google for creating the code that curates and filters the data, and to Huggingface, who had no issue with hosting these 18 TB of data for public download!
### License

We are releasing this dataset under the terms of [ODC-BY](https://opendatacommons.org/licenses/by/1-0/). By using this, you are also bound by the [Common Crawl terms of use](https://commoncrawl.org/terms-of-use/) in respect of the content contained in the dataset.