
Error when loading data after git lfs download

#6 opened by keremturgutlu

pyarrow.lib.ArrowInvalid: Unable to merge: Field timestamp has incompatible types: timestamp[ms] vs timestamp[s]

90 of the 463 files I downloaded have this issue (see the schema-check sketch after this list for one way to find them all). A few examples:

['train-03497-of-05534-b18074c6a5507fd6.parquet',
'train-04154-of-05534-40acd52f9a4c941f.parquet',
'train-05088-of-05534-389220a3ef6bf3aa.parquet',
'train-05014-of-05534-cb96414f5f8d7ea9.parquet',
'train-00625-of-05534-b31d354bc4e5aef5.parquet',
'train-00322-of-05534-7b8829fb066e93cc.parquet',
'train-03856-of-05534-40ee9a4bb4a97dfa.parquet',
'train-03861-of-05534-9a63cad6883bf2fb.parquet',
'train-04768-of-05534-0645280ccd71c2ea.parquet',
'train-03383-of-05534-1d2be0457df926d9.parquet']
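
For anyone hitting the same error, the affected shards can be located by reading only the Parquet footers, which is cheap because no row data is loaded. This is just a sketch, not official tooling: the glob pattern, the column name "timestamp", and the expectation that most shards use timestamp[s] are all assumptions based on the error message above.

```python
# Sketch: list shards whose "timestamp" column does not use the
# timestamp[s] unit. Paths and the expected unit are assumptions.
import glob

import pyarrow.parquet as pq

for path in sorted(glob.glob("train-*.parquet")):
    # read_schema only reads the file footer, so this stays fast
    schema = pq.read_schema(path)
    field = schema.field("timestamp")
    if str(field.type) != "timestamp[s]":
        print(path, field.type)  # e.g. prints "timestamp[ms]" shards
```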

It would be nice to be able to specify columns while reading from parquet files, similar to pd.read_parquet; given that Parquet is a columnar file format, this should be fairly easy and efficient to implement. I was able to read all of these files with data = pd.read_parquet(dataset_files[0], columns=[args.text_column])[args.text_column].values, but it failed with data = datasets.load_dataset(path=".", split='train', data_files=dataset_files[:20], keep_in_memory=False).
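
Until column selection is supported in load_dataset, one possible workaround is to sidestep the schema merge entirely: read only the needed column with pandas and wrap the result in a Dataset. A minimal sketch, assuming the text column is named "text" and dataset_files is the list of downloaded shards (both placeholders):

```python
# Sketch of a workaround: reading a single column per file means the
# incompatible timestamp column is never loaded, so the per-shard
# schemas match. Column and file names are placeholders.
import pandas as pd
from datasets import Dataset, concatenate_datasets

text_column = "text"  # replace with args.text_column

parts = []
for path in dataset_files:  # the parquet shards downloaded via git lfs
    df = pd.read_parquet(path, columns=[text_column])
    parts.append(Dataset.from_pandas(df, preserve_index=False))

train_ds = concatenate_datasets(parts)
```

Alternatively, the divergent shards could be rewritten once with the timestamp column cast to a single unit (for example via pyarrow's Table.cast) so that load_dataset can merge them directly.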
