Update README.md
README.md CHANGED
@@ -16,6 +16,9 @@ This repo contains
 * scripts: scripts to parse the raw XML dump into a Hugging Face `datasets`-compatible dataset, and the cleaning notebook that was used
 * source: the raw XML dump and other files generated by mediawiki-dump-generator
 
+**DO NOT USE GIT LFS TO DOWNLOAD THE FILES. USE HUGGINGFACE-CLI INSTEAD**
+Cloning with git-lfs pulls the files into the index for each non-active branch, and into both the index and the working directory for the active branch; this leads to downloading roughly 70GB even if you only need the `main` branch.
+
 The raw dump is over 50GB and has been split into parts; they can be recombined using `cat`.
 
 The converted text is about 35M tokens; however, it is advised that you perform more thorough cleaning of the data before use.
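The download and recombination steps above can be sketched as a short shell session. The repo id and part filenames below are hypothetical placeholders, not names taken from this repo:

```shell
# Download without git-lfs, using huggingface-cli
# (the repo id is a hypothetical placeholder):
#   huggingface-cli download some-user/some-dataset --repo-type dataset --local-dir ./dump
#
# If you must clone with git, GIT_LFS_SKIP_SMUDGE=1 skips fetching the LFS payloads:
#   GIT_LFS_SKIP_SMUDGE=1 git clone <repo-url>

# Recombining split parts with cat: simulate two parts of a dump
# (real part filenames will differ; list them in order so the pieces
# are concatenated back in the right sequence):
printf 'part-a' > dump.xml.part0
printf 'part-b' > dump.xml.part1
cat dump.xml.part0 dump.xml.part1 > dump.xml
cat dump.xml   # → part-apart-b
```

Because `cat` is a plain byte-level concatenation, the recombined file is identical to the original as long as the parts are supplied in order.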