TristanThrush committed

Commit 6e88724 · 1 Parent(s): 273f456

changes to readme
README.md CHANGED
@@ -629,6 +629,7 @@ configs:
 This repo is a fork of the original Hugging Face Wikipedia repo [here](https://huggingface.co/datasets/wikipedia).
 The difference is that this fork does away with the need for `apache-beam`, and this fork is also very fast if you have a lot of CPUs on your machine.
 It will also use all CPUs available to create a clean Wikipedia pretraining dataset. It takes less than an hour to process all of English wikipedia on a GCP n1-standard-96.
+This fork is also used in the [OLM Project](https://github.com/huggingface/olm-datasets) to pull and process up-to-date wikipedia snapshots.
 
 ## Table of Contents
 - [Dataset Description](#dataset-description)
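The speed claim in the hunk above comes down to replacing the `apache-beam` pipeline with plain multi-process parsing on local CPUs. As a rough illustration only, here is a minimal loading sketch; the repo id `olm/wikipedia`, the `language`/`date` arguments, and the `num_proc` value are assumptions for illustration and are not taken from this commit.

```python
from datasets import load_dataset

# Hypothetical usage sketch; the repo id and arguments are assumptions, not from this commit.
# No apache-beam runner is involved: the fork is expected to parse the dump with ordinary
# multiprocessing, so more CPUs should translate directly into a faster dataset build.
wiki = load_dataset(
    "olm/wikipedia",   # assumed Hub id of this fork
    language="en",
    date="20220301",
    num_proc=96,       # e.g. all cores of a GCP n1-standard-96
)

print(wiki["train"][0]["title"])
```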
@@ -709,44 +710,6 @@ An example looks as follows:
 }
 ```
 
-Some subsets of Wikipedia have already been processed by HuggingFace, as you can see below:
-
-#### 20220301.de
-
-- **Size of downloaded dataset files:** 6523.22 MB
-- **Size of the generated dataset:** 8905.28 MB
-- **Total amount of disk used:** 15428.50 MB
-
-#### 20220301.en
-
-- **Size of downloaded dataset files:** 20598.31 MB
-- **Size of the generated dataset:** 20275.52 MB
-- **Total amount of disk used:** 40873.83 MB
-
-#### 20220301.fr
-
-- **Size of downloaded dataset files:** 5602.57 MB
-- **Size of the generated dataset:** 7375.92 MB
-- **Total amount of disk used:** 12978.49 MB
-
-#### 20220301.frr
-
-- **Size of downloaded dataset files:** 12.44 MB
-- **Size of the generated dataset:** 9.13 MB
-- **Total amount of disk used:** 21.57 MB
-
-#### 20220301.it
-
-- **Size of downloaded dataset files:** 3516.44 MB
-- **Size of the generated dataset:** 4539.94 MB
-- **Total amount of disk used:** 8056.39 MB
-
-#### 20220301.simple
-
-- **Size of downloaded dataset files:** 239.68 MB
-- **Size of the generated dataset:** 235.07 MB
-- **Total amount of disk used:** 474.76 MB
-
 ### Data Fields
 
 The data fields are the same among all configurations:
@@ -756,21 +719,6 @@ The data fields are the same among all configurations:
 - `title` (`str`): Title of the article.
 - `text` (`str`): Text content of the article.
 
-### Data Splits
-
-Here are the number of examples for several configurations:
-
-| name            |   train |
-|-----------------|--------:|
-| 20220301.de     | 2665357 |
-| 20220301.en     | 6458670 |
-| 20220301.fr     | 2402095 |
-| 20220301.frr    |   15199 |
-| 20220301.it     | 1743035 |
-| 20220301.simple |  205328 |
-
-## Dataset Creation
-
 ### Curation Rationale
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
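For the fields kept in the context above (`title` and `text`), a short access sketch follows; the repo id and config arguments are the same assumptions as in the earlier snippet, not taken from this commit.

```python
from datasets import load_dataset

# Hypothetical sketch; repo id and arguments are assumptions, not from this commit.
wiki = load_dataset("olm/wikipedia", language="simple", date="20220301")

# Each record is expected to expose at least the documented `title` and `text` fields.
for example in wiki["train"].select(range(3)):
    print(example["title"], "->", len(example["text"]), "characters")
```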
@@ -839,7 +787,3 @@ the text.
 url = "https://dumps.wikimedia.org"
 }
 ```
-
-### Contributions
-
-Thanks to [@lewtun](https://github.com/lewtun), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.