Expose a Parquet version so that I can do queries directly without downloading the dataset locally

#10
opened by julien-c (HF staff) · BigCode org

The current version is parquet - does that work?

The datasets server does not currently process this dataset due to its size. The dataset only has one split, so we would get one big sharded parquet file instead of one parquet file per language. @lhoestq : we can try to put this dataset on an allow list; hopefully we will then be able to process it.

We will try to run it on the datasets server: https://github.com/huggingface/datasets-server/pull/983

We put the dataset on the allow list, but it still cannot be processed: the datasets server does not support converting gated datasets to parquet when the gate requires filling in extra fields...

We now have a JobManagerExceededMaximumDurationError. Maybe we should relax the "zombie" detector for datasets on the allow list; obviously, this job will run for a long time. cc @albertvillanova @lhoestq what do you think?

The dataset is made of parquet files - let me implement the parquet copy to refs/convert/parquet and we'll be good :)

PS: the "zombie" detector detects jobs that are marked as started in the queue but are no longer running. The JobManagerExceededMaximumDurationError here comes from the maximum job duration limit instead.
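Once that copy lands, the converted shards live under the special `refs/convert/parquet` revision of the dataset repo. As a rough sketch (the helper `parquet_shards` and the exact listing call are illustrative, and a gated dataset like bigcode/the-stack requires accepting the gate and using an HF token):

```python
def parquet_shards(files):
    """Keep only the .parquet shards from a repo file listing, sorted."""
    return sorted(f for f in files if f.endswith(".parquet"))

if __name__ == "__main__":
    # Hypothetical usage: requires network access, an HF token, and
    # having accepted the gate on bigcode/the-stack.
    from huggingface_hub import HfApi

    files = HfApi().list_repo_files(
        "bigcode/the-stack",
        repo_type="dataset",
        revision="refs/convert/parquet",  # the auto-converted copy
    )
    print(parquet_shards(files)[:3])
```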

> The dataset is made of parquet files - let me implement the parquet copy to refs/convert/parquet and we'll be good :)

Yes!!!!

Done! Thanks a lot @lhoestq for the improvements you made to support big datasets as this one!

https://huggingface.co/datasets/bigcode/the-stack/viewer/bigcode--the-stack/train?p=1234567
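With the conversion in place, the original request (querying without downloading the whole dataset) can be served from the parquet shards. A minimal sketch, assuming the datasets-server `/parquet` endpoint returns a `parquet_files` list with per-shard `url` fields, and using DuckDB to query a shard remotely (both the response shape and the DuckDB step are assumptions, not part of this thread):

```python
def parquet_endpoint(dataset: str) -> str:
    """Build the datasets-server URL that lists a dataset's parquet shards."""
    return f"https://datasets-server.huggingface.co/parquet?dataset={dataset}"

if __name__ == "__main__":
    # Hypothetical usage: needs network access and, for a gated dataset,
    # an authenticated request after accepting the gate.
    import json
    import urllib.request

    import duckdb

    with urllib.request.urlopen(parquet_endpoint("bigcode/the-stack")) as r:
        shards = json.load(r)["parquet_files"]  # assumed response shape

    # Count rows in the first shard over HTTP instead of downloading it.
    first_url = shards[0]["url"]
    print(duckdb.sql(f"SELECT count(*) FROM read_parquet('{first_url}')"))
```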

(Screenshot of the dataset viewer, 2023-06-29)

Yay! great job team!
