ArrowInvalid issue when saving to disk

#2 by RaymondAISG - opened

Hi!
Thank you very much for the dataset!

I'm attempting to save the dataset to disk with the following:

from datasets import load_dataset
ds = load_dataset("biglam/blbooks-parquet")
for k in ds.keys():
    ds[k].to_json(
    f"/data/blbooks/blbooks_{k}.jsonl",
        batch_size=128,
        force_ascii=False,
    )

but I've encountered this exception while writing:

multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
  File "/home/users/nus/rng/.conda/envs/peft-38/lib/python3.8/multiprocessing/pool.py", line 125, in worker
    result = (True, func(*args, **kwds))
  File "/home/users/nus/rng/.conda/envs/peft-38/lib/python3.8/site-packages/datasets/io/json.py", line 123, in _batch_json
    json_str = batch.to_pandas().to_json(path_or_buf=None, orient=orient, lines=lines, **to_json_kwargs)
  File "pyarrow/array.pxi", line 837, in pyarrow.lib._PandasConvertible.to_pandas
  File "pyarrow/table.pxi", line 4114, in pyarrow.lib.Table._to_pandas
  File "/home/users/nus/rng/.local/lib/python3.8/site-packages/pyarrow/pandas_compat.py", line 820, in table_to_blockmanager
    blocks = _table_to_blocks(options, table, categories, ext_columns_dtypes)
  File "/home/users/nus/rng/.local/lib/python3.8/site-packages/pyarrow/pandas_compat.py", line 1168, in _table_to_blocks
    result = pa.lib.table_to_blocks(options, block_table, categories,
  File "pyarrow/table.pxi", line 2771, in pyarrow.lib.table_to_blocks
  File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Casting from timestamp[s] to timestamp[ns] would result in out of bounds timestamp: -9751017600
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "download_dataset.py", line 75, in <module>
    download()
  File "download_dataset.py", line 66, in download
    ds[k].to_json(
  File "/home/users/nus/rng/.conda/envs/peft-38/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 4861, in to_json
    return JsonDatasetWriter(self, path_or_buf, batch_size=batch_size, num_proc=num_proc, **to_json_kwargs).write()
  File "/home/users/nus/rng/.conda/envs/peft-38/lib/python3.8/site-packages/datasets/io/json.py", line 105, in write
    written = self._write(file_obj=buffer, orient=orient, lines=lines, **self.to_json_kwargs)
  File "/home/users/nus/rng/.conda/envs/peft-38/lib/python3.8/site-packages/datasets/io/json.py", line 152, in _write
    for json_str in hf_tqdm(
  File "/home/users/nus/rng/.local/lib/python3.8/site-packages/tqdm/std.py", line 1178, in __iter__
    for obj in iterable:
  File "/home/users/nus/rng/.conda/envs/peft-38/lib/python3.8/multiprocessing/pool.py", line 868, in next
    raise value
pyarrow.lib.ArrowInvalid: Casting from timestamp[s] to timestamp[ns] would result in out of bounds timestamp: -9751017600

Is there a way to bypass this error?

Thank you!

BigLAM: BigScience Libraries, Archives and Museums org

Hi @RaymondAISG, according to the pandas documentation, to_json() accepts a date_unit argument. You could try setting that to 's' and see if that helps.
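Something along these lines (an untested sketch; any extra keyword arguments to Dataset.to_json are forwarded to pandas' DataFrame.to_json, as you can see in the traceback):

from datasets import load_dataset

ds = load_dataset("biglam/blbooks-parquet")

for k in ds.keys():
    ds[k].to_json(
        f"/data/blbooks/blbooks_{k}.jsonl",
        batch_size=128,
        force_ascii=False,
        date_unit="s",  # forwarded to pandas.DataFrame.to_json
    )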

BigLAM: BigScience Libraries, Archives and Museums org

If what @shamikbose89 suggested doesn't work and you don't mind a slightly hacky approach, you could also cast the date column to a string. That should make it possible to save to JSON, e.g.

from datasets import load_dataset
from datasets import Value

ds = load_dataset("biglam/blbooks-parquet")
ds = ds.cast_column("date", Value("string"))

for k in ds.keys():
    ds[k].to_json(
    f"/data/blbooks/blbooks_{k}.jsonl",
        batch_size=128,
        force_ascii=False,
    )
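For context (judging from the traceback): the error is raised inside batch.to_pandas(), before pandas' to_json (and its date_unit option) is ever reached. The Arrow-to-pandas conversion uses nanosecond-precision timestamps, which only go back to 1677-09-21, while -9751017600 seconds before the epoch is a date in 1661, so the timestamp[s] to timestamp[ns] cast is bound to fail for the oldest dates in the dataset. Casting the date column to a string avoids that conversion entirely. A quick sanity check, if you're curious:

import datetime

import pandas as pd

# The out-of-bounds value from the traceback, in seconds since the Unix epoch.
offending_seconds = -9751017600

# As a plain datetime it is a valid date in 1661...
epoch = datetime.datetime(1970, 1, 1)
print(epoch + datetime.timedelta(seconds=offending_seconds))  # 1661-01-01 00:00:00

# ...but it is earlier than the minimum nanosecond-precision pandas Timestamp.
print(pd.Timestamp.min)  # 1677-09-21 00:12:43.145224193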

Hi @shamikbose89,
Thank you for your reply! I've tried adding the date_unit argument to to_json(), but unfortunately it still raises the same error.

Hi @davanstrien,
Thank you for your reply! Your solution works and I'm able to save the dataset now. Thanks again for the dataset!

RaymondAISG changed discussion status to closed
