JSON FILE ERROR

#1
by asad - opened

Hello, all of the train.json files show an "invalid string length" error. What's the potential fix?

OSU NLP Group org

Hi, I tested loading the local version and also downloaded a small train_10.json file, and both seem to be working. Could you provide more information on the error you are seeing? Can you just do json.load with the downloaded files?
It might be that some of the files were corrupted during transfer; I can test later with a clean repo and see if I can reproduce.
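A minimal sanity check along the lines suggested above: load the file with plain `json.load` instead of opening it in an editor. The filename `train_10.json` comes from the thread, but the file written here is a tiny stand-in so the sketch runs on its own; point `path` at your actual downloaded file.

```python
import json

# Hypothetical path from the thread; swap in the file you downloaded.
path = "train_10.json"

# Write a tiny stand-in file so this sketch is self-contained.
with open(path, "w") as f:
    json.dump([{"annotation_id": "demo", "raw_html": "<html></html>"}], f)

# The actual sanity check: plain json.load, no IDE preview involved.
with open(path) as f:
    data = json.load(f)

print(type(data).__name__, len(data))
```

If this succeeds, the file itself is fine and the error is coming from the viewer, not the data.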

Only train_10 is working for me too. Other files like train_0, and the rest of them except train_10, all show "invalid string length" in VS Code and a kernel failure in Colab. I also can't preview the train_0.json file, though I can preview train_10.json. Attaching a screenshot for reference.
Screenshot 2023-07-12 at 4.55.08 PM.png

OSU NLP Group org

Can you try loading it with Python? I fixed the raw_html field, and now some of the data samples can be extremely long given the huge original HTML files, so they cannot be directly previewed in IDEs.
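One way to confirm this explanation is to measure how long the `raw_html` values actually are rather than trying to render them. The field name `raw_html` is from the thread; the file written below is an illustrative stand-in with a deliberately long value.

```python
import json

# Stand-in file with one very long raw_html value (illustrative only).
with open("train_0.json", "w") as f:
    json.dump([{"raw_html": "x" * 1_000_000}], f)

# Load programmatically and report field sizes instead of displaying them.
with open("train_0.json") as f:
    data = json.load(f)

lengths = [len(ex.get("raw_html", "")) for ex in data]
print("examples:", len(data), "max raw_html length:", max(lengths))
```

Editors like VS Code refuse to render single strings past an internal limit, which would explain an "invalid string length" message for an otherwise valid file.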

Yeah, I did, and it just keeps running until the kernel dies. Same behavior in Colab too; the runtime was restarted.
Screenshot 2023-07-12 at 4.59.10 PM.png

I also tried adjusting the file buffer size, but it still could not load the JSON file. Thank you!

OSU NLP Group org

Can you try showing only a single example or action? As you can see, the data loads successfully and just cannot be viewed due to its size.
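A sketch of "show only a single example": print each field of one record, truncating long values instead of dumping them whole. Field names other than `raw_html` are illustrative, and the stand-in file below exists only to make the sketch runnable.

```python
import json

# Stand-in file; in practice load one of the downloaded train_*.json files.
sample = [{"task": "click the login button",
           "raw_html": "<html>" + "x" * 10_000 + "</html>"}]
with open("train_0.json", "w") as f:
    json.dump(sample, f)

with open("train_0.json") as f:
    data = json.load(f)

# Inspect one example, truncating each value to 80 characters.
example = data[0]
for key, value in example.items():
    text = str(value)
    print(key, ":", text[:80] + ("..." if len(text) > 80 else ""))
```

This keeps the notebook output small, so the kernel is never asked to render megabytes of HTML.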

Yeah, I guess that's what's causing the kernel to fail. Secondly, the train split shows 1.01k rows, but I can't preview the 2nd page on Hugging Face. Also, when I set up LFS and clone the repo, it only shows 11 tasks, from task 0 to task 10. Are there only 11 tasks, or are the others too big to display? What's the potential fix?

OSU NLP Group org

As you can see in the metadata, there are 1,009 rows. Given the large size of raw_html, it might be hard to preview. You can try processing the data and removing the raw_html field. But in general, I am not sure directly displaying a large data file in a notebook is good practice, as it often causes issues with the notebook kernel and makes subsequent executions slow.
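The suggestion above (drop `raw_html` before previewing) can be sketched as follows. The input file here is a tiny stand-in; in practice, point the reads and writes at the downloaded train_*.json files.

```python
import json

# Stand-in input; replace with a real downloaded train_*.json file.
with open("train_0.json", "w") as f:
    json.dump([{"task": "demo", "raw_html": "<html>huge...</html>"}], f)

with open("train_0.json") as f:
    data = json.load(f)

# Drop the heavy field from every example before previewing or re-saving.
slim = [{k: v for k, v in ex.items() if k != "raw_html"} for ex in data]

with open("train_0_slim.json", "w") as f:
    json.dump(slim, f)

print(len(slim), "examples; raw_html kept:", "raw_html" in slim[0])
```

The slim copy is small enough to preview safely, while the original file remains intact for anything that needs the full HTML.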

XiangD-OSU changed discussion status to closed
