Converting to Parquet

#5
by hac541309 - opened

Currently, the datasets are plain JSONL files compressed with zstd.
Would there be value in converting them to zstd-compressed Parquet?
This would make the compression transparent to the end user and make the data easier to use with the Hugging Face datasets format.

If so, what form would be most useful for downstream projects?
(a consistent number of shards, a consistent shard size, etc.)
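For concreteness, a per-shard conversion could look roughly like the following with pyarrow (the shard names, the row group size, and the assumption that one shard fits in memory are all illustrative):

```python
# Illustrative per-shard conversion: zstd-compressed JSONL in, zstd-compressed
# Parquet out. Paths are placeholders, not actual OSCAR file names.
import pyarrow as pa
import pyarrow.json as paj
import pyarrow.parquet as pq

src = "part_1.jsonl.zst"   # hypothetical input shard
dst = "part_1.parquet"

# pyarrow decompresses zstd transparently while parsing the JSON lines;
# note that read_json materialises the whole shard as an Arrow table.
with pa.CompressedInputStream(pa.OSFile(src), "zstd") as stream:
    table = paj.read_json(stream)

# zstd column compression keeps file sizes comparable to the current shards,
# while row_group_size controls the granularity of later partial reads.
pq.write_table(table, dst, compression="zstd", row_group_size=10_000)
```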

OSCAR org

Dear @hac541309 ,

Thanks a lot for the suggestion! @uj and I are already looking into this for the upcoming version of Ungoliant (our pipeline), so future versions of OSCAR will definitely be published in Parquet (the tricky question there is the schema). What I'm not sure about is converting the old OSCARs. We would really have to check how much time and how many resources the conversion needs, and whether it would even make sense once the first Parquet OSCARs are out.
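Just to make the schema question concrete, a rough sketch of what the document schema could look like in Arrow terms (field names mirror the current JSONL layout and are purely illustrative, not a decision about the final Ungoliant schema):

```python
# Illustrative only: a simplified Arrow schema mirroring the JSONL documents
# (content + WARC headers + metadata). Not the schema Ungoliant will ship.
import pyarrow as pa

doc_schema = pa.schema([
    ("content", pa.string()),
    ("warc_headers", pa.map_(pa.string(), pa.string())),  # or a struct with fixed keys
    ("metadata", pa.struct([
        ("identification", pa.struct([
            ("label", pa.string()),
            ("prob", pa.float32()),
        ])),
        ("quality_warnings", pa.list_(pa.string())),  # assumed field name
    ])),
])
```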

Let me know if you have any opinion about this or if you would like us to convert an old OSCAR for any particular reason.

All the best,
Pedro

Thanks for all the effort! Parquet (or another suitable data format) would definitely be useful. I've had a lot of difficulty dealing with JSON files, and more often than not they end up converted to Arrow/Parquet before any kind of processing.

The time and resources needed would definitely be substantial, and they depend on how thorough the conversion should be.
The most naive option, a direct json2parquet conversion (json2parquet is one specific library, but there are many tools like it), would be the fastest and most painless if it works, but it may or may not preserve certain properties of the original.
The option I had in mind was to load this dataset with Hugging Face datasets and push it back to the Hub as Parquet, as sketched below.
While this is probably the most compatible route, it is also the most resource-consuming one, and there may be no reasonable way to do it in one go.
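A sketch of that route (the repo ids, the language config, and the sharding parameters are placeholders; the exact config argument for the gated OSCAR repos may differ, and a Hub token is needed):

```python
# Illustrative "load and push back as Parquet" route via Hugging Face Datasets.
# Repo ids and the config name are placeholders; real runs need authentication.
from datasets import load_dataset

ds = load_dataset("oscar-corpus/OSCAR-2301", "af", split="train")

# push_to_hub rewrites the data as Parquet shards on the Hub;
# max_shard_size bounds the size of each resulting Parquet file.
ds.push_to_hub("my-org/oscar-2301-parquet", config_name="af", max_shard_size="1GB")
```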

In any case, an absolute minimum would be a 32-core machine with 256 GiB of RAM, and if possible a 128-core machine with 1 TB.
I was trying to secure either the compute or the budget for this from multiple organisations.
Obviously, I had to come to the developers of OSCAR for your experience, opinions, and guidance.

The reason I wanted such a conversion is that I'm trying to extend OSCAR as part of a set of multilingual foundational text datasets.
Due to the web-based nature of Common Crawl-derived datasets, they need extensive downstream filtering.
Uncompressed JSONL files are also large, and it is difficult to work directly on the zst files when attempting downstream processing and reconstitution (see the sketch below).
That is why I felt Parquet would be a better fit.
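To illustrate the pain point, this is roughly what per-shard work looks like on the current zst-compressed JSONL (the shard name and the length filter are made up for illustration):

```python
# Hypothetical example: stream-decompress a .jsonl.zst shard and filter it
# line by line, the typical pattern when working on the current format.
import io
import json
import zstandard as zstd

def filter_shard(path, min_chars=200):
    """Yield documents whose `content` passes an arbitrary length threshold."""
    with open(path, "rb") as fh:
        reader = zstd.ZstdDecompressor().stream_reader(fh)
        for line in io.TextIOWrapper(reader, encoding="utf-8"):
            doc = json.loads(line)
            if len(doc["content"]) >= min_chars:
                yield doc

# e.g. for doc in filter_shard("part_1.jsonl.zst"): ...
```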

However, that would not fix everything; some issues seem simply insurmountable. The fact of the matter is that even with Parquet-based formats, on-the-fly streaming with random access and minimal memory and storage overhead would not be easy through huggingface-datasets, especially for multi-TiB datasets spread across thousands of files (I know that OSCAR-2301-hpc is a bit better in this respect).
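For comparison, Parquet at least allows partial reads per row group and per column, even if Hub-scale streaming remains the hard part (the path and column name below are placeholders):

```python
# Illustrative partial read from a local Parquet shard: a single row group and
# a single column, without materialising the whole file.
import pyarrow.parquet as pq

pf = pq.ParquetFile("part_1.parquet")
print(pf.metadata.num_row_groups, "row groups in this shard")

# Read only the text column of the first row group.
batch = pf.read_row_group(0, columns=["content"])
print(batch.num_rows, "documents loaded")
```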

So I am still investigating options at this point.
