diff --git "a/README.md" "b/README.md"
deleted file mode 100644
--- "a/README.md"
+++ /dev/null
@@ -1,5757 +0,0 @@
----
-annotations_creators:
-- no-annotation
-language_creators:
-- crowdsourced
-pretty_name: Wikipedia
-paperswithcode_id: null
-license:
-- cc-by-sa-3.0
-- gfdl
-task_categories:
-- text-generation
-- fill-mask
-task_ids:
-- language-modeling
-- masked-language-modeling
-source_datasets:
-- original
-multilinguality:
-- multilingual
-size_categories:
-- n<1K
-- 1K
-and identify the date.
-
-### 2. [Optional] Get a refreshed list of languages
-
-This is optional because it is not very likely that a new language will have
-suddenly appeared since the last version _and_ already have a significant dataset.
-
-Navigate to the [List of Wikipedias](https://meta.wikimedia.org/wiki/List_of_Wikipedias)
-page on Meta-Wiki and copy the languages column from the "Detailed list" table
-(near the end of the page).
-
-Copy that content in the form of a Python list into `lang_def.py` (at the root
-of the repo) under a new date.
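-
-For orientation, here is a hypothetical sketch of what such an entry in
-`lang_def.py` could look like; the variable name and exact layout are
-assumptions, so follow the structure already present in the file:
-
-```python
-# Illustrative sketch only: the real lang_def.py may use a different variable
-# name or layout. Each dump date maps to the list of language codes copied
-# from the "Detailed list" table.
-wiki_languages = {
-    "20230601": [
-        "ab", "ace", "ady", "af", "als",
-        # ... the remaining language codes go here
-    ],
-}
-```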
-
-### 3. [Optional] Create Media and Category aliases
-
-To properly extract links to images and media in all languages, we must
-refresh the two corresponding alias files. To do so, run the following from
-the root of the repo:
-
-```sh
-python -m prep.create_aliases
-```
-
-This will create or update these two files at the root of the repo:
-
-- `media_aliases.py`
-- `category_aliases.py`
-
-These files are used in the final step.
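-
-As a rough illustration, the generated files are expected to hold per-language
-alias tables along these lines; the variable names and sample entries below
-are assumptions, so treat the generated files as authoritative:
-
-```python
-# media_aliases.py (illustrative sketch). Localized names of the File/Image
-# namespaces, used to recognize media links such as [[File:...]] or
-# [[Fichier:...]] while parsing articles.
-MEDIA_ALIASES = {
-    "en": ["File", "Image", "Media"],
-    "fr": ["Fichier", "File", "Image"],
-    # ... one entry per language
-}
-
-# category_aliases.py (illustrative sketch). Localized names of the Category
-# namespace, used to detect and strip category links.
-CATEGORY_ALIASES = {
-    "en": ["Category"],
-    "fr": ["Catégorie"],
-    # ... one entry per language
-}
-```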
-
-### 4. Build and prepare the datasets into sharded parquet files
-
-Running the following script downloads the Wikipedia dump for each language in
-`lang_def.py` and shards each language dataset into the appropriate number of
-parquet shards (max shard size ~250 MB).
-
-```sh
-python -m prep.build --date 20230601
-```
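-
-The sharding rule itself is straightforward; below is a minimal sketch of it
-using the 🤗 Datasets API. This is not the actual `prep.build` code, and the
-`write_shards` helper and its file-naming scheme are purely illustrative:
-
-```python
-# Minimal sketch of the ~250 MB sharding rule (not the actual prep.build code).
-import math
-
-from datasets import Dataset
-
-MAX_SHARD_BYTES = 250 * 1024**2  # target upper bound per parquet shard
-
-
-def write_shards(ds: Dataset, nbytes: int, out_prefix: str) -> None:
-    """Split `ds` into contiguous shards of at most ~250 MB each."""
-    num_shards = max(1, math.ceil(nbytes / MAX_SHARD_BYTES))
-    for index in range(num_shards):
-        shard = ds.shard(num_shards=num_shards, index=index, contiguous=True)
-        shard.to_parquet(f"{out_prefix}-{index:05d}-of-{num_shards:05d}.parquet")
-```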
-
-There are other options:
-
-```text
-$ python -m prep.build --help
-usage: Wikipedia Builder [-h] [--date DATE] [--language [LANG ...]] [--cache-dir DIR] [--mirror MIRROR]
-
-Prepares the Wikipedia dataset for each language
-
-optional arguments:
- -h, --help show this help message and exit
- --date DATE Wikipedia dump date (e.g. 20230601)
- --language [LANG ...] Language code (e.g. en). If missing, all languages are processed
- --cache-dir DIR Cache directory for 🤗 Datasets
- --mirror MIRROR Mirror URL
-```
-
-For instance, for faster downloads of the dumps, use the `--mirror` option:
-
-```sh
-python -m prep.build \
- --date 20230601 \
- --language bs \
- --mirror https://mirror.accum.se/mirror/wikimedia.org/dumps/
-```
-
-It will download the dumps at around 60 MB/s instead of the capped speed
-(~4 MB/s) of the official `https://dumps.wikimedia.org` site. The script skips
-existing directories, allowing you to run it in several passes.
-
-Notes:
-
-- These instructions build upon the build process of the
-  [Wikipedia](https://huggingface.co/datasets/wikipedia) 🤗 Dataset. HF did a
-  fantastic job; I just pushed it a bit further.
-- Be aware that not all mirrors contain all dumps. For instance, mirror.accum.se
-  does not contain dumps for languages such as `be-x-old` or `cbk-zam`. My own
-  solution is to run a first pass using the aforementioned mirror, and a second
-  pass with the official `https://dumps.wikimedia.org` site (omitting the
-  `--mirror` parameter).
-