Streaming access to the dataset raises an error

#26
by LorMolf - opened

Hi!
I'm trying to use the dataset in streaming mode. When I try to download the first instance, I get the following error:

ValueError: The HTTP server doesn't appear to support range requests. Only reading this file from the beginning is supported. Open with block_size=0 for a streaming file interface.

Here's the code returning the error:

from datasets import load_dataset

dataset = load_dataset("allenai/dolma", "v1_6", streaming=True, split='train')
ds_iterator = iter(dataset)
data_sample = next(ds_iterator)  # raises the ValueError above
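For what it's worth, here is my reading of the error's block_size=0 hint at the fsspec level. This bypasses load_dataset entirely and streams a single shard sequentially; the shard URL below is a placeholder, not a real path:

import gzip
import json
import fsspec

# Placeholder: substitute the URL of one gzipped JSONL shard from the dataset.
shard_url = "https://huggingface.co/datasets/allenai/dolma/resolve/main/<shard>.json.gz"

# block_size=0 asks fsspec for a sequential streaming file object instead of
# one that issues HTTP range requests.
with fsspec.open(shard_url, mode="rb", block_size=0) as f:
    with gzip.open(f, mode="rt", encoding="utf-8") as g:
        for line in g:
            row = json.loads(line)
            print(row["id"])
            break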

Here's the whole error stack:

/usr/local/lib/python3.10/dist-packages/datasets/iterable_dataset.py in __iter__(self)
   1386             return
   1387 
-> 1388         for key, example in ex_iterable:
   1389             if self.features:
   1390                 # `IterableDataset` automatically fills missing columns with None.

/usr/local/lib/python3.10/dist-packages/datasets/iterable_dataset.py in __iter__(self)
    985         # this is the shuffle buffer that we keep in memory
    986         mem_buffer = []
--> 987         for x in self.ex_iterable:
    988             if len(mem_buffer) == buffer_size:  # if the buffer is full, pick and example from it
    989                 i = next(indices_iterator)

/usr/local/lib/python3.10/dist-packages/datasets/iterable_dataset.py in __iter__(self)
    260         rng = deepcopy(self.generator)
    261         kwargs_with_shuffled_shards = _shuffle_gen_kwargs(rng, self.kwargs)
--> 262         yield from self.generate_examples_fn(**kwargs_with_shuffled_shards)
    263 
    264     def shard_data_sources(self, worker_id: int, num_workers: int) -> "ExamplesIterable":

~/.cache/huggingface/modules/datasets_modules/datasets/allenai--dolma/4ffcc06d84b368ddbd84d7426b091d2e9242e951fb741da8a4f78d4d638d2ea4/dolma.py in _generate_examples(self, files)
    127 
    128             with gzip.open(fn, mode="rt", encoding="utf-8") as f:
--> 129                 for line in f:
    130                     row = json.loads(line)
    131                     yield row["id"], {

/usr/lib/python3.10/gzip.py in read1(self, size)
    312         if size < 0:
    313             size = io.DEFAULT_BUFFER_SIZE
--> 314         return self._buffer.read1(size)
    315 
    316     def peek(self, n):

/usr/lib/python3.10/_compression.py in readinto(self, b)
     66     def readinto(self, b):
     67         with memoryview(b) as view, view.cast("B") as byte_view:
---> 68             data = self.read(len(byte_view))
     69             byte_view[:len(data)] = data
     70         return len(data)

/usr/lib/python3.10/gzip.py in read(self, size)
    492 
    493             # Read a chunk of data from the file
--> 494             buf = self._fp.read(io.DEFAULT_BUFFER_SIZE)
    495 
    496             uncompress = self._decompressor.decompress(buf, size)

/usr/lib/python3.10/gzip.py in read(self, size)
     95             self._read = None
     96             return self._buffer[read:] + \
---> 97                    self.file.read(size-self._length+read)
     98 
     99     def prepend(self, prepend=b''):

/usr/local/lib/python3.10/dist-packages/datasets/download/streaming_download_manager.py in read_with_retries(*args, **kwargs)
    340         for retry in range(1, max_retries + 1):
    341             try:
--> 342                 out = read(*args, **kwargs)
    343                 break
    344             except (ClientError, TimeoutError) as err:

/usr/local/lib/python3.10/dist-packages/fsspec/implementations/http.py in read(self, length)
    598         else:
    599             length = min(self.size - self.loc, length)
--> 600         return super().read(length)
    601 
    602     async def async_fetch_all(self):

/usr/local/lib/python3.10/dist-packages/fsspec/spec.py in read(self, length)
   1788             # don't even bother calling fetch
   1789             return b""
-> 1790         out = self.cache._fetch(self.loc, self.loc + length)
   1791         self.loc += len(out)
   1792         return out

/usr/local/lib/python3.10/dist-packages/fsspec/caching.py in _fetch(self, start, end)
    394                 self.start = start
    395             else:
--> 396                 new = self.fetcher(self.end, bend)
    397                 self.cache = self.cache + new
    398 

/usr/local/lib/python3.10/dist-packages/fsspec/asyn.py in wrapper(*args, **kwargs)
    119     def wrapper(*args, **kwargs):
    120         self = obj or args[0]
--> 121         return sync(self.loop, func, *args, **kwargs)
    122 
    123     return wrapper

/usr/local/lib/python3.10/dist-packages/fsspec/asyn.py in sync(loop, func, timeout, *args, **kwargs)
    104         raise FSTimeoutError from return_result
    105     elif isinstance(return_result, BaseException):
--> 106         raise return_result
    107     else:
    108         return return_result

/usr/local/lib/python3.10/dist-packages/fsspec/asyn.py in _runner(event, coro, result, timeout)
     59         coro = asyncio.wait_for(coro, timeout=timeout)
     60     try:
---> 61         result[0] = await coro
     62     except Exception as ex:
     63         result[0] = ex

/usr/local/lib/python3.10/dist-packages/fsspec/implementations/http.py in async_fetch_range(self, start, end)
    669                 out = await r.read()
    670             elif start > 0:
--> 671                 raise ValueError(
    672                     "The HTTP server doesn't appear to support range requests. "
    673                     "Only reading this file from the beginning is supported. "

ValueError: The HTTP server doesn't appear to support range requests. Only reading this file from the beginning is supported. Open with block_size=0 for a streaming file interface.

Can I do something to prevent this error, or is it a configuration error on the maintainer's side?
Thank you in advance!

Ai2 org

I just tried to run this snippet, and it ran just fine. Maybe a transient problem with huggingface?

I don’t know if the issue is related to HuggingFace. However, I can confirm that the problem persists. Occasionally the streaming access works for a few iterations, but even in the best case it raises the same error by around the 30th request.

I am running into exactly the same issue. I can read a few samples, and then it fails with that error.

I ran into this issue too. @dirkgr

Same issue here. It typically manifests after some time: iterating over a handful of samples won't catch it, but for me it reliably shows up after a couple hundred steps.
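One possible stopgap, sketched below and untested against this exact failure: catch the ValueError, re-open the stream, and skip the examples already seen. IterableDataset.skip makes this simple, but re-streaming the skipped shards is slow, so treat it as a workaround rather than a fix:

from datasets import load_dataset

def stream_with_restarts(max_restarts=5):
    # Resume iteration after the range-request error by re-opening the
    # stream and skipping everything already yielded.
    seen = 0
    for _ in range(max_restarts):
        ds = load_dataset("allenai/dolma", "v1_6", streaming=True, split="train")
        try:
            for sample in ds.skip(seen):
                seen += 1
                yield sample
            return
        except ValueError:
            continue  # re-open the stream and pick up where we left off

for i, sample in enumerate(stream_with_restarts()):
    if i >= 500:
        break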

Hm, this runs entirely on huggingface code and servers. We just upload data, so it's hard for me to debug. I'll try to get someone at huggingface to take a look at it.

I've been writing some code that downloads the source files from the URLs listed in the v1.7 txt file in the dolma repo. I got errors when attempting to download some of the files with aria2c. I think the issue may be limited to a handful of specific dataset files; the failures are fairly consistent, so those files appear to be the culprit.
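In case it's useful to anyone who wants to try the same route, a minimal Python version of that download loop (the path dolma/urls/v1_7.txt assumes a local clone of the dolma repo, and retry handling is omitted):

import os
import requests

# Read the URL list from a local checkout of the dolma repo.
with open("dolma/urls/v1_7.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

os.makedirs("dolma_data", exist_ok=True)
for url in urls:
    dest = os.path.join("dolma_data", os.path.basename(url))
    if os.path.exists(dest):
        continue  # already downloaded
    with requests.get(url, stream=True, timeout=60) as r:
        r.raise_for_status()
        with open(dest, "wb") as out:
            for chunk in r.iter_content(chunk_size=1 << 20):
                out.write(chunk)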
