[bot] Conversion to Parquet

#1
by parquet-converter - opened

The parquet-converter bot has created a version of this dataset in the Parquet format in the refs/convert/parquet branch.

What is Parquet?

Apache Parquet is a popular columnar storage format known for:

  • reduced memory requirements,
  • fast data retrieval and filtering,
  • efficient storage.

This is what powers the dataset viewer on each dataset page, and it means every dataset on the Hub can be accessed with the same code (you can use HF Datasets, ClickHouse, DuckDB, Pandas or Polars; it's up to you).
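As a rough illustration of that "same code" claim, here is a minimal sketch using the datasets library; the id "org/some-dataset" is a hypothetical placeholder, not this repository:

    # Minimal sketch: reading a Hub dataset with the datasets library.
    from datasets import load_dataset

    # streaming=True iterates over the Hub-hosted files
    # without downloading everything first
    ds = load_dataset("org/some-dataset", split="train", streaming=True)
    print(next(iter(ds)))  # first record as a plain dict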

You can learn more about the advantages associated with Parquet in the documentation.

How to access the Parquet version of the dataset?

You can access the Parquet version of the dataset by following this link: refs/convert/parquet
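For instance, you can list what the bot pushed to that branch with huggingface_hub; this is a sketch, and "org/some-dataset" is a hypothetical placeholder id:

    from huggingface_hub import list_repo_files

    files = list_repo_files(
        "org/some-dataset",               # hypothetical placeholder id
        repo_type="dataset",
        revision="refs/convert/parquet",  # the branch created by the bot
    )
    print(files)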

What if my dataset was already in Parquet?

When the dataset is already in Parquet format, the data are not converted and the files in refs/convert/parquet are links to the original files. This rule has one exception, to keep the dataset viewer API fast: if the row group size of the original Parquet files is too big, new Parquet files are generated.

What should I do?

You don't need to do anything. The Parquet version of the dataset is available for you to use. Refer to the documentation for examples and code snippets on how to query the Parquet files with ClickHouse, DuckDB, Pandas or Polars.
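As one hedged example, a DuckDB query over a converted file could look like the sketch below; the dataset id and file name are hypothetical placeholders, and reading over HTTPS relies on DuckDB's httpfs extension (autoloaded in recent releases):

    import duckdb

    # "refs/convert/parquet" must be URL-encoded in resolve URLs
    url = ("https://huggingface.co/datasets/org/some-dataset/resolve/"
           "refs%2Fconvert%2Fparquet/default/train/0000.parquet")
    duckdb.sql(f"SELECT COUNT(*) AS n FROM '{url}'").show()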

If you have any questions or concerns, feel free to ask in the discussion below. You can also close the discussion if you don't have any questions.

Fundación de Investigación y Salvaguarda de Textos Recuperados Originales org

It's already in Parquet, dummy

chiquitazo changed discussion status to closed

Hi, you're absolutely right: 1. it's already in Parquet, 2. our bot is dumb.

As we mentioned above:

When the dataset is already in Parquet format, the data are not converted and the files in refs/convert/parquet are links to the original files. This rule has one exception, to keep the dataset viewer API fast: if the row group size of the original Parquet files is too big, new Parquet files are generated.

Still, we post a notice when the Parquet files become available in the special "refs/convert/parquet" branch, because it enables "standard" access to every dataset on the Hub, as you can see here: https://huggingface.co/docs/datasets-server/parquet_process
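A sketch of that "standard" access, using the /parquet endpoint described in the page linked above (the dataset id is again a hypothetical placeholder):

    import requests

    r = requests.get(
        "https://datasets-server.huggingface.co/parquet",
        params={"dataset": "org/some-dataset"},  # hypothetical placeholder
    )
    for f in r.json()["parquet_files"]:
        print(f["config"], f["split"], f["url"])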

chiquitazo changed discussion status to open
Fundación de Investigación y Salvaguarda de Textos Recuperados Originales org
edited May 23

There are now two "partial-*" files in the "refs/convert/parquet" branch, but I suppose they are only for the dataset viewer, since they are about 5GB. Am I right that, after 9 days, nothing more is going to be done with the full dataset?

Exactly, that's right: we limit the size of the converted data for cost reasons. That lets us provide the viewer for >100k datasets, large or small. Of course, if someone uses your dataset to train a model, they will access the complete files.
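In other words (a hedged sketch; the id is a hypothetical placeholder), a regular, non-streaming load goes through the original complete files, not the size-capped "partial-*" Parquet used by the viewer:

    from datasets import load_dataset

    # Downloads and reads the original, complete data files
    ds = load_dataset("org/some-dataset", split="train")
    print(len(ds))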

chiquitazo changed discussion status to closed
Fundación de Investigación y Salvaguarda de Textos Recuperados Originales org

Thanks for the info, and sorry for insulting your creation

😄
