Error loading dataset

#2
by maveriq - opened

While loading the dataset with the command doclaynet = load_dataset("pierreguillou/DocLayNet-Large"), I get the error ArrowInvalid: JSON parse error: Invalid value. in row 0, which also raises the exception UnicodeDecodeError: 'utf-8' codec can't decode byte 0x89 in position 0: invalid start byte.

I am using datasets version 2.11.0.
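For what it's worth, 0x89 happens to be the first byte of the PNG file signature, so one hedged guess is that the JSON parser is being handed an image file. This is a hypothetical diagnostic helper, not something from the thread:

```python
# Hypothetical helper, not from the thread: byte 0x89 is the first byte of
# the 8-byte PNG file signature (b"\x89PNG\r\n\x1a\n"), so the parse error
# above may mean a PNG is being read where JSON was expected.
PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def looks_like_png(path):
    """Return True if the file at `path` starts with the PNG signature."""
    with open(path, "rb") as f:
        return f.read(8) == PNG_SIGNATURE
```

Running this on the file the loader is choking on would confirm or rule out that guess.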

Hi. DocLayNet large is very, very large. The instance you used may be the cause of the problem.
I downloaded this dataset without any problem on the following Lambda instance (https://cloud.lambdalabs.com/instances): A10 (24 GB PCIe), 30 vCPUs, 200 GiB RAM, 1.4 TiB SSD.
Can you try another instance than the one you used?
Thank you.

Hi. The download completes successfully. It's in the subsequent extraction that I get this error.

I have attached screenshots of the error below. The dataset was already downloaded in a previous run, so the loading time is 0 sec. The machine is a 96-core Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 756 GB of RAM.

Screenshot from 2023-04-08 11-32-38.png

Screenshot from 2023-04-08 11-34-42.png

Screenshot from 2023-04-08 11-35-05.png
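Since the failure happens during extraction rather than download, one way to narrow it down is to scan the extracted files and report the first one that fails to parse. This is a sketch, not advice from the thread; the cache directory shown in the comment is only a guess:

```python
import json
import pathlib

def find_bad_json(root):
    """Yield every *.json file under `root` that fails to parse as UTF-8 JSON."""
    for path in sorted(pathlib.Path(root).rglob("*.json")):
        try:
            json.loads(path.read_text(encoding="utf-8"))
        except (UnicodeDecodeError, json.JSONDecodeError):
            yield path

# Example usage: point this at the dataset's extraction directory.
# The default cache location below is an assumption, not confirmed here.
# bad = list(find_bad_json("/home/user/.cache/huggingface/datasets"))
```

Deleting (or forcing a re-download of) whichever file this flags is a common remedy for a corrupted extraction.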

Hi @maveriq .

Sorry for the delay in my response.
Here is the notebook where I downloaded the DocLayNet large dataset: https://nbviewer.org/github/piegu/language-models/blob/master/download_DocLayNet_large_21abril2023.ipynb

As you can see, I had no problem doing this on the following instance: NVIDIA A10 of https://cloud.lambdalabs.com/instances (VRAM per GPU: 24 GB, vCPUs: 30, RAM: 200 GiB, Storage: 1.4 TiB, Price: $0.60 / hr)

Hope you can solve the problem on your side.

Best,

Pierre

pierreguillou changed discussion status to closed
