## TODO

The dataset script logic is more or less ready, but we still need to download all the data files. So far this has only been done for a single file: https://opendata.iisys.de/systemintegration/Datasets/CommonCrawl/head/de_head_0000_2015-48.tar.gz

You can already use the script to download the data in this file as follows:

```python
from datasets import load_dataset

ds = load_dataset("flax-community/german_common_crawl", "first")
```
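To sanity-check that this single converted file loads correctly, a quick inspection could look like the sketch below (assumptions: the `"first"` config exposes a standard `"train"` split; the column names come from the dataset script and are not guaranteed here).

```python
from datasets import load_dataset

# Load the single converted file via the "first" config.
ds = load_dataset("flax-community/german_common_crawl", "first")

# Show the available splits and columns (names depend on the dataset script).
print(ds)

# Peek at the first example; assumes a "train" split exists.
print(ds["train"][0])
```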

Now we need to convert all other files correctly. It should be as simple as:

a) Cloning this repo with git lfs