RE: c4 multilingual 3.1.0 (mc4 3.1.0) Converting into parquet

#3
by hac541309 - opened

Right now the files are JSON files compressed with gzip, which are decompressed and processed on the fly.
This has resulted in unsatisfactory and unexpected behavior when loading them with Hugging Face datasets.

Would conversion into zstd-backed parquet be useful?
If so, what form should it take? Necessary considerations would include
compression level, shards per language, and maximum shard size.

For my purposes, the end use case would be HPC, so a smaller number of larger parquet files would be useful, but I would like to understand what would be useful in the general case.
(a bit like OSCAR-2301-hpc)
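
To make the proposal concrete, here is a minimal sketch of what converting a single shard could look like, assuming pyarrow for the parquet writing; the shard name below is illustrative, not one of the real mC4 file names.

```python
# Minimal sketch: convert one shard from .json.gz to zstd-compressed parquet.
# Requires: pip install pyarrow
import gzip
import json

import pyarrow as pa
import pyarrow.parquet as pq

def json_gz_to_parquet(src_path: str, dst_path: str, level: int = 3) -> None:
    # The shards are JSON-lines files; read them into a list of dicts.
    with gzip.open(src_path, "rt", encoding="utf-8") as f:
        records = [json.loads(line) for line in f]
    # Write a single parquet file with zstd compression, keeping the shard intact.
    table = pa.Table.from_pylist(records)
    pq.write_table(table, dst_path, compression="zstd", compression_level=level)

# Illustrative file names, keeping a 1:1 correspondence with the original shard.
json_gz_to_parquet("c4-en.tfrecord-00000-of-01024.json.gz",
                   "c4-en.tfrecord-00000-of-01024.parquet")
```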

Allen Institute for AI org

I think that would be a fine thing to have, but we also need to keep the json.gz format, since many people rely on that. If it doesn't break the downloading scripts, I would love to have both versions in this repo, maybe the parquet version in a separate directory.

  • Compression levels should be high, since CPUs are cheaper than bandwidth for most people.
  • It would be best to have a 1:1 mapping between parquet files and JSON files. The JSON files aim to be about 1GB each. If that's not big enough, we could combine multiple JSON files into one parquet file. You might have noticed that the JSON files are already concatenations of the original files from the Apache Beam job that creates them. We want to keep that correspondence, at least in the filename, since it links us back to the Apache Beam code at https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/text/c4.py. For many file formats it makes sense to stay below 4GB, because even in 2023, some tools break at 4GB, but maybe for parquet that doesn't matter.
  • Compression levels should be high, since CPUs are cheaper than bandwidth for most people.

This is an important point. However, moving beyond the default zstd settings for higher compression degrades speed considerably, up to an order of magnitude (compression is affected far more than decompression), which might cause a serious bottleneck. I think staying with the zstd defaults (which still bring compression benefits compared to gz) and then running experiments to quantify the costs would be valuable.
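
As a rough way to quantify that trade-off before deciding, one could time a few levels on a single decompressed shard, for example with the zstandard Python bindings (the shard name below is illustrative):

```python
# Rough timing sketch: compression ratio and speed at a few zstd levels
# for a single decompressed shard.  Requires: pip install zstandard
import gzip
import time

import zstandard as zstd

# Illustrative shard name; any decompressed .json.gz shard will do.
raw = gzip.open("c4-en.tfrecord-00000-of-01024.json.gz", "rb").read()

for level in (3, 9, 19):                 # 3 is the zstd default
    t0 = time.perf_counter()
    blob = zstd.ZstdCompressor(level=level).compress(raw)
    t_compress = time.perf_counter() - t0

    t0 = time.perf_counter()
    zstd.ZstdDecompressor().decompress(blob)
    t_decompress = time.perf_counter() - t0

    print(f"level {level:2d}: ratio {len(raw) / len(blob):.2f}, "
          f"compress {t_compress:.1f}s, decompress {t_decompress:.1f}s")
```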

  • It would be best to have a 1:1 mapping between parquet files and JSON files. For many file formats it makes sense to stay below 4GB, because even in 2023, some tools break at 4GB, but maybe for parquet that doesn't matter.

This might be much more difficult. My original plan was to load c4 language by language, call to_parquet on each language slice, and then find a way to reconstitute them. There are ways to satisfy the per-file size constraint that way, but not the 1:1 mapping. There is a way to convert the JSON files directly into parquet files, but it is unclear what the value would be to me, since all of the other scripts would have to change to accommodate that change.
On the other hand, simply changing gz into zst would be more straightforward and require minimal code changes (it might add a dependency on a zstd library, but that is common); see the sketch below.
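
A minimal sketch of that straightforward path, assuming the zstandard package: it streams each .json.gz shard into a .json.zst with the same basename, so the layout and the 1:1 naming stay untouched (the directory name is illustrative).

```python
# Sketch: recompress the .json.gz shards to .json.zst without touching their contents.
# Requires: pip install zstandard
import gzip
from pathlib import Path

import zstandard as zstd

cctx = zstd.ZstdCompressor(level=3)      # stay at the zstd default level

for src in sorted(Path("multilingual").glob("*.json.gz")):
    dst = src.with_name(src.name.replace(".json.gz", ".json.zst"))
    with gzip.open(src, "rb") as fin, open(dst, "wb") as fout:
        # copy_stream decompresses the gzip stream and recompresses it to zstd
        # chunk by chunk, so the shard never has to fit in memory.
        cctx.copy_stream(fin, fout)
```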

  • For many file formats it makes sense to stay below 4GB, because even in 2023, some tools break at 4GB, but maybe for parquet that doesn't matter.

At least for libraries that deal with building, modifying, and using parquet files, this is not a concern.

Allen Institute for AI org

Why would changing the files from .json.gz to .json.zst solve all the problems? Are there tools that can ingest .json.zst but not .json.gz?

Oh, I meant that it would be easier and more straightforward to implement. It definitely does NOT solve all the problems; the files would just become a bit smaller and faster to process...
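
For illustration, the reading side would change only slightly; a sketch assuming the zstandard package and an illustrative shard name:

```python
# Sketch: iterating over records in a .json.zst shard.  Compared to gzip (in the
# standard library), this only adds the zstandard dependency: pip install zstandard
import io
import json

import zstandard as zstd

def iter_records(path):
    with open(path, "rb") as fh:
        reader = zstd.ZstdDecompressor().stream_reader(fh)
        # Wrap the binary stream so we can read the JSON-lines shard line by line.
        for line in io.TextIOWrapper(reader, encoding="utf-8"):
            yield json.loads(line)

# Each record carries "text", "timestamp" and "url" fields.
for record in iter_records("c4-en.tfrecord-00000-of-01024.json.zst"):
    print(record["url"])
    break
```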

Allen Institute for AI org

There are lots of better alternatives to JSON and gzip. We didn't choose them because they were the best. We chose them because everybody already has the needed tools installed, and nobody needs to read documentation to use them.
