[bot] Conversion to Parquet

#1
by parquet-converter - opened

The parquet-converter bot has created a version of this dataset in the Parquet format in the refs/convert/parquet branch.

What is Parquet?

Apache Parquet is a popular columnar storage format known for:

  • reduced memory requirements,
  • fast data retrieval and filtering,
  • efficient storage.

This is what powers the Dataset Viewer on each dataset page, and it means every dataset on the Hub can be accessed with the same code (you can use HF Datasets, ClickHouse, DuckDB, Pandas, or Polars, whichever you prefer).

You can learn more about the advantages associated with Parquet in the documentation.

How to access the Parquet version of the dataset?

You can access the Parquet version of the dataset by following this link: refs/convert/parquet
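
If you want to explore the branch programmatically, you can also list its files with huggingface_hub (a minimal sketch; the repo id below is a placeholder):

from huggingface_hub import list_repo_files

# Placeholder repo id; substitute the actual dataset repository.
files = list_repo_files(
    "user/dataset",
    repo_type="dataset",
    revision="refs/convert/parquet",
)
print(files)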

What if my dataset was already in Parquet?

When the dataset is already in Parquet format, the data are not converted and the files in refs/convert/parquet are links to the original files. There is one exception to this rule, to keep the Datasets Server API fast: if the row group size of the original Parquet files is too big, new Parquet files are generated.
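
For context, the row group size can be controlled when writing Parquet yourself, for example with pyarrow (a sketch; the exact threshold the bot uses is not stated here):

import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({"id": list(range(1_000_000))})
# Smaller row groups keep per-request reads cheap for the Datasets Server.
pq.write_table(table, "data.parquet", row_group_size=100_000)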

What should I do?

You don't need to do anything. The Parquet version of the dataset is available for you to use. Refer to the documentation for examples and code snippets on how to query the Parquet files with ClickHouse, DuckDB, Pandas or Polars.
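
For instance, Pandas can read a file from the refs/convert/parquet branch directly over the hf:// filesystem (a sketch; the repo id and file path are placeholders, and huggingface_hub must be installed for the hf:// protocol):

import pandas as pd

df = pd.read_parquet(
    "hf://datasets/user/dataset@refs%2Fconvert%2Fparquet/default/train/0000.parquet"
)
print(df.head())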

If you have any questions or concerns, feel free to ask in the discussion below. You can also close the discussion if you don't have any questions.

Wikimedia Movement org

What is needed for dataset-viewer to be available for these splits?

This Parquet file does not seem to be supported; the underlying error is:

ValueError: Arrow type map<string, struct<language: string, value: string> ('labels')> does not have a datasets dtype equivalent.
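
For anyone hitting the same error, the offending column type can be inspected with pyarrow (the filename here is hypothetical):

import pyarrow.parquet as pq

schema = pq.read_schema("0000.parquet")
print(schema.field("labels").type)  # map<string, struct<language: string, value: string>>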

Does this ring a bell, @lhoestq @polinaeterna?

Yes, we don't support the Arrow/Parquet map type yet; see https://github.com/huggingface/datasets/issues/5612

It would be amazing to support this type though; feel free to react/comment in the GitHub issue.

Wikimedia Movement org
edited Mar 15

@lhoestq Thanks for the heads up! I've added a reaction and started watching the GH issue.

Do you have any suggestions for how to define a schema for the more dynamic parts of the Wikidata JSON format (specifically, the dynamic key-value pairs for statements and qualifiers)? https://doc.wikimedia.org/Wikibase/master/php/docs_topics_json.html

The schema could enumerate every valid language in a struct, so I'm less worried about that.
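
As a sketch of what I mean by enumerating languages (field names illustrative, using pyarrow types):

import pyarrow as pa

label = pa.struct([("language", pa.string()), ("value", pa.string())])
# One struct field per known language code (only a few shown here).
labels = pa.struct([("en", label), ("el", label), ("ay", label)])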

Wikimedia Movement org

@lhoestq @polinaeterna @severo I could also push the 82G json.bz2 I've been running through the conversion script I've written, if that has value. I wasn't sure whether the Datasets conversion bot supports the json.bz2 format. It's just a very large array of objects in the format I've pasted above, inside a BZip2 archive.

You can also use lists instead of maps in the meantime.

For example instead of

labels: map<string, struct<language: string, value: string>>

with content like

[('el', {'language': 'el', 'value': 'Βέλγιο'}),
 ('ay', {'language': 'ay', 'value': 'Bilkiya'}),
 ('pnb', {'language': 'pnb', 'value': 'بیلجیئم'}),
 ('na', {'language': 'na', 'value': 'Berdjiyum'}),
 ('mk', {'language': 'mk', 'value': 'Белгија'}),
 ...
]

you can use

labels: list<struct<language: string, value: string>>

with content like

[{'language': 'el', 'value': 'Βέλγιο'},
 {'language': 'ay', 'value': 'Bilkiya'},
 {'language': 'pnb', 'value': 'بیلجیئم'},
 {'language': 'na', 'value': 'Berdjiyum'},
 {'language': 'mk', 'value': 'Белгија'},
 ...
]
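
In practice that reshape is a one-liner before writing (a minimal sketch; the entity dict is illustrative):

import pyarrow as pa

entity = {
    "labels": {
        "el": {"language": "el", "value": "Βέλγιο"},
        "ay": {"language": "ay", "value": "Bilkiya"},
    }
}
# Drop the map keys; each struct already carries its language code.
entity["labels"] = list(entity["labels"].values())

# The resulting Arrow type is list<struct<...>>, which is supported.
label_type = pa.list_(pa.struct([("language", pa.string()), ("value", pa.string())]))
table = pa.table(
    {"labels": [entity["labels"]]},
    schema=pa.schema([("labels", label_type)]),
)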
