HellaSwag: Can a Machine Really Finish Your Sentence? is a dataset for commonsense natural language inference (NLI). The accompanying paper was published at ACL 2019.
An example from the 'train' split looks as follows (the example was too long and was cropped):
{
    "activity_label": "Removing ice from car",
    "ctx": "Then, the man writes over the snow covering the window of a car, and a woman wearing winter clothes smiles. then",
    "ctx_a": "Then, the man writes over the snow covering the window of a car, and a woman wearing winter clothes smiles.",
    "ctx_b": "then",
    "endings": "[\", the man adds wax to the windshield and cuts it.\", \", a person board a ski lift, while two men supporting the head of the per...",
    "ind": 4,
    "label": "3",
    "source_id": "activitynet~v_-1IBHYS3L-Y",
    "split": "train",
    "split_type": "indomain"
}
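A minimal sketch of loading the dataset and retrieving such an example with the Hugging Face `datasets` library (the dataset name and split come from this card; depending on your `datasets` version, loading may additionally require `trust_remote_code=True`, since this dataset is backed by a loading script):

```python
from datasets import load_dataset

# Load the train split of HellaSwag from the Hugging Face Hub
dataset = load_dataset("hellaswag", split="train")

# Inspect the first example; its fields match the structure shown above
example = dataset[0]
print(example["activity_label"])
print(example["ctx"])      # context the model must complete
print(example["endings"])  # list of four candidate endings
```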
The data fields are the same among all splits.
- `ind`: an `int32` feature.
- `activity_label`: a `string` feature.
- `ctx_a`: a `string` feature.
- `ctx_b`: a `string` feature.
- `ctx`: a `string` feature.
- `endings`: a list of `string` features.
- `source_id`: a `string` feature.
- `split`: a `string` feature.
- `split_type`: a `string` feature.
- `label`: a `string` feature.

name | train | validation | test
---|---|---|---
default | 39905 | 10042 | 10003
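As a sketch of how these fields fit together: `label` is the string index of the correct entry in `endings` (this assumes, as is usual for this benchmark, that gold labels are only available on the train and validation splits):

```python
from datasets import load_dataset

val = load_dataset("hellaswag", split="validation")
ex = val[0]

# `label` is a string ("0"-"3") indexing into the four `endings`
correct_ending = ex["endings"][int(ex["label"])]

# Reconstruct the full passage: context followed by the gold ending
print(ex["ctx"] + " " + correct_ending)
```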
The dataset is released under the MIT License: https://github.com/rowanz/hellaswag/blob/master/LICENSE
@inproceedings{zellers2019hellaswag,
    title={HellaSwag: Can a Machine Really Finish Your Sentence?},
    author={Zellers, Rowan and Holtzman, Ari and Bisk, Yonatan and Farhadi, Ali and Choi, Yejin},
    booktitle={Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics},
    year={2019}
}
Thanks to @albertvillanova, @mariamabarham, @thomwolf, @patrickvonplaten, @lewtun for adding this dataset.