Datasets:
Multilinguality: multilingual
Language Creators: found
Annotations Creators: no-annotation
Source Datasets: original
ArXiv:
License:
dirkgr committed on
Commit a8922de
1 Parent(s): 1ddc917

Added docs for the `noblocklist` data.

Files changed (1):
  1. README.md +3 -0
README.md CHANGED
@@ -6,8 +6,11 @@ For reference, these are the sizes of the variants:
 
 - `en`: 300GB
 - `en.noclean`: 2.3TB
+- `en.noblocklist`: 380GB
 - `realnewslike`: 15GB
 
+The `en.noblocklist` variant is exactly the same as the `en` variant, except we turned off the so-called "badwords filter", which removes all documents that contain words from the lists at https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words.
+
 # How do I download this?
 
 Unfortunately we ran out of time making this into a proper Huggingface dataset, accessible through the `datasets` Python package. Until we get that ready, please use git to do the download. First, make sure you have [Git Large File Storage](https://git-lfs.github.com) installed. Once that is done, downloading the whole dataset, all three variants, is easy:
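The git-based download described in the README context above might look like the following sketch. The repository URL is an assumption (it is not stated on this commit page), so substitute the actual dataset path before running; note also that the full clone is multiple terabytes.

```shell
# One-time setup: register the Git LFS filters so large files download properly.
git lfs install

# Clone the dataset repository. The URL below is a guess at the dataset's
# location on the Hugging Face Hub; replace it with the real path if it differs.
git clone https://huggingface.co/datasets/allenai/c4
```

If you only want one variant, a partial checkout (e.g. cloning with `GIT_LFS_SKIP_SMUDGE=1` and then pulling individual files with `git lfs pull --include`) can avoid downloading all variants at once.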