Update README.md
There are two versions released: the **noisy** dataset, which has no filtering
except document-level LangID, and the **clean** dataset, which has a variety of
filters applied, though it naturally has a fair amount of noise itself. Each
dataset is released in a document-level form that has been deduplicated.

## Loading

You can load both the clean and noisy versions of any language by specifying its LangID:

~~~
from datasets import load_dataset

madlad_ape = load_dataset("allenai/madlad-400", "ape")
~~~

A list of languages can also be supplied with a keyword argument:

~~~
madlad_multilang = load_dataset("allenai/madlad-400", languages=["ape", "abt", "ace"])
~~~

Additionally, you can load the noisy and clean subsets separately with the `split` keyword argument:

~~~
madlad_multilang_clean = load_dataset("allenai/madlad-400", languages=["ape", "abt", "ace"], split="clean")
~~~

## LangID model and Crawl
The number of documents, sentences, tokens, characters, and bytes for the noisy
and clean splits of the data. Note that the "toks" field below uses whitespace
for tokenization, so it is not appropriate for non-whitespace-separating languages
like Chinese (see the section above). Note that the English subset in this version
is missing 18% of the documents that were included in the published analysis of the
dataset. These documents will be added in an update coming soon.
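The whitespace-tokenization caveat above can be made concrete with a minimal sketch (plain Python, no dependencies; the example strings are illustrative, not drawn from the dataset):

~~~
# Whitespace tokenization, as used for the "toks" columns below.
def ws_token_count(text: str) -> int:
    return len(text.split())

# English: spaces separate words, so the count is meaningful.
assert ws_token_count("This dataset has two versions") == 5

# Chinese: no spaces between words, so an entire sentence counts
# as a single "token" regardless of how many words it contains.
assert ws_token_count("这个数据集有两个版本") == 1
~~~
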
BCP-47 | docs (noisy) | docs (clean) | sents (noisy) | sents (clean) | toks (noisy) | toks (clean) | chars (noisy) | chars (clean) | clean | noisy |
----------------|:---------------|:---------------|:----------------|:----------------|:---------------|:---------------|:----------------|:----------------|:---------|:---------|
total* | 7.2B | 3.7B | 133.1B | 97.5B | 4.6T | 2.6T | 30.6T | 16.0T | 11.4 T | 6.3 T |
en* | 3.0B | 1.5B | 71.1B | 45.4B | 2.0T | 1.3T | 12.3T | 7.6T | 2.6 T | 4.3 T |
ru | 823M | 402.5M | 823M | 12.4B | 416.5B | 240.9B | 3.1T | 1.8T | 832.9 G | 1.4 T |
es | 476.4M | 250.9M | 8.3B | 4.5B | 325.7B | 170.4B | 2.1T | 1.1T | 380.9 G | 747.5 G |
de | 478.6M | 225.1M | 11.5B | 6B | 299.5B | 139.6B | 2.2T | 1T | 370.6 G | 815.5 G |