Providing the Parquet version

#2
by arxyzan - opened

First of all, such a great job and big thanks for publishing this huge corpus for the community.

I just noticed that the dataset consists of thousands of jsonl files. Downloading them from the Hub takes a lot of time even on a fast connection, since each file must be downloaded separately.

It would be beneficial to host a Parquet version of the dataset here, since that unlocks a lot of features: sharding, streaming, lower memory usage, faster downloads, the datasets server API, etc. The HF Hub automatically converts any dataset to Parquet as long as the dataset is public or owned by a Pro user (the Parquet version lives in the refs/convert/parquet branch). Alternatively, you can convert the dataset to Parquet yourself and upload it manually.

Since the dataset is currently gated, the first option is only possible if the dataset is made public without restrictions for a while (and then gated again).
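
For the manual route, here is a minimal sketch with the datasets library (the repo id and file glob below are placeholders, not the actual repository layout):

```python
from datasets import load_dataset

# Load the jsonl shards into a single Dataset (placeholder glob).
ds = load_dataset("json", data_files="data/*.jsonl", split="train")

# Option A: write a local Parquet file that can then be uploaded to the repo.
ds.to_parquet("corpus.parquet")

# Option B: push directly to the Hub; push_to_hub writes Parquet shards itself.
ds.push_to_hub("your-org/your-dataset-parquet", private=True)
```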

Targoman Intelligent Processing org

Thanks for your suggestion. I'll try to create some merged datasets in order to speed up the download process.

If you could create the Parquet files, it would be awesome! I calculated the total size of the repository files and it's roughly 72 GB to download. After downloading, the datasets library would cache the data locally (as Arrow files), generating another folder of dozens of GB. So if you already have a loaded version of the dataset, you could convert and upload from those files as well.
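
(In case it's useful, the total repo size can be checked with huggingface_hub; the repo id below is a placeholder:)

```python
from huggingface_hub import HfApi

api = HfApi()
# files_metadata=True populates the per-file sizes.
info = api.dataset_info("your-org/your-dataset", files_metadata=True)
total_bytes = sum(f.size or 0 for f in info.siblings)
print(f"{total_bytes / 1e9:.1f} GB")
```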

I know these methods have a lot of overhead, so I'd still recommend letting the HF Hub handle it for you via the auto-converter bot (by making the dataset public for a couple of hours).

I don't have the resources to load the whole dataset and do all this in one go, so I'd have to do it iteratively to avoid running out of disk ;) But I'm actively trying.
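
Something like the following sketch should keep disk usage bounded by converting and uploading one jsonl shard at a time (the repo ids are placeholders, and the cached jsonl files would still need occasional cleanup):

```python
import os
from datasets import load_dataset
from huggingface_hub import HfApi, hf_hub_download

api = HfApi()
src = "your-org/source-jsonl-dataset"   # placeholder: the gated jsonl repo
dst = "your-org/parquet-mirror"         # placeholder: target repo for the Parquet shards
api.create_repo(dst, repo_type="dataset", private=True, exist_ok=True)

jsonl_files = [f for f in api.list_repo_files(src, repo_type="dataset") if f.endswith(".jsonl")]

for name in jsonl_files:
    # Download one shard, convert it, upload it, then delete the local Parquet file.
    local = hf_hub_download(src, name, repo_type="dataset")
    shard = load_dataset("json", data_files=local, split="train")
    out = os.path.basename(name).replace(".jsonl", ".parquet")
    shard.to_parquet(out)
    api.upload_file(
        path_or_fileobj=out,
        path_in_repo=name.replace(".jsonl", ".parquet"),
        repo_id=dst,
        repo_type="dataset",
    )
    os.remove(out)
    # The downloaded jsonl and conversion cache stay under ~/.cache/huggingface;
    # clear them periodically (e.g. with `huggingface-cli delete-cache`) if disk is tight.
```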

Targoman Intelligent Processing org
edited Apr 4

I'll give it a try.
