EOFError during the data splitting stage

#9
by 0wxw0 - opened

When I used the `load_dataset` function to load the data, I hit the following error at the split stage; the problem recurred after repeated attempts. Is there any suggested solution?

```
Generating train split: 9203891 examples [45:30, 5988.03 examples/s]Error while processing file blablabla
Traceback (most recent call last):
  File "blablabla/huggingface/modules/datasets_modules/datasets/joelito--Multi_Legal_Pile/453fcdf95171db34c9daf28f359d524754b752b9f6b8ee6f3e66b0865ebc5837/Multi_Legal_Pile.py", line 477, in _generate_examples
    for line in f:
  File "/root/miniconda3/lib/python3.11/lzma.py", line 212, in read1
    return self._buffer.read1(size)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/lib/python3.11/_compression.py", line 68, in readinto
    data = self.read(len(byte_view))
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/lib/python3.11/_compression.py", line 99, in read
    raise EOFError("Compressed file ended before the "
EOFError: Compressed file ended before the end-of-stream marker was reached
Generating train split: 14210798 examples [1:54:37, 15.07 examples/s]Killed
```
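For context, this particular `EOFError` from `lzma` usually means the `.xz` archive in the download cache is incomplete (e.g. an interrupted download), not that the loading code is wrong. A minimal sketch that reproduces the same error on a deliberately truncated local file (file name and truncation size are illustrative):

```python
import lzma
import os
import tempfile

# Write a small, valid .xz file.
path = os.path.join(tempfile.mkdtemp(), "sample.xz")
with lzma.open(path, "wb") as f:
    f.write(b"example line\n" * 100)

# Truncate it to simulate an interrupted download: cut into the
# 12-byte stream footer so the end-of-stream marker is missing.
with open(path, "r+b") as f:
    f.truncate(os.path.getsize(path) - 8)

# Reading the truncated file raises the same EOFError as in the traceback.
err = None
try:
    with lzma.open(path, "rb") as f:
        f.read()
except EOFError as e:
    err = e

print(err)
```

If the cached archive is truncated in the same way, deleting it (or forcing a re-download) is the usual fix.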

Thanks for your message. Would you mind sharing the code you used for loading the dataset?
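In the meantime, a common workaround for a truncated cache file is to force the `datasets` library to discard cached archives and fetch them again. A minimal sketch, assuming the standard `load_dataset` API; the config name `"all_all"` here is only illustrative, not confirmed from the thread:

```python
def reload_multi_legal_pile(name="joelito/Multi_Legal_Pile", config="all_all"):
    """Re-download a dataset, discarding any partially downloaded archives.

    `config` is an illustrative placeholder; substitute the config you
    were actually loading.
    """
    # Imported lazily so the sketch only needs `datasets` when called.
    from datasets import load_dataset

    # download_mode="force_redownload" ignores the (possibly truncated)
    # cached files and fetches the archives again.
    return load_dataset(
        name,
        config,
        split="train",
        download_mode="force_redownload",
    )
```

Alternatively, deleting the relevant files under `~/.cache/huggingface/datasets` before reloading has the same effect. The `Killed` line at the end of the log also suggests the process ran out of memory; `streaming=True` avoids materializing the full split.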
