Column schema (types and observed ranges):

| Column | Type | Observed range / values |
|---|---|---|
| id | int64 | 599M to 3.29B |
| url | string | 58 to 61 chars |
| html_url | string | 46 to 51 chars |
| number | int64 | 1 to 7.72k |
| title | string | 1 to 290 chars |
| state | string (classes) | 2 values |
| comments | int64 | 0 to 70 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 to 2025-08-05 09:28:51 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 to 2025-08-05 11:39:56 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 to 2025-08-01 05:15:45 (nullable) |
| user_login | string | 3 to 26 chars |
| labels | list | 0 to 4 items |
| body | string | 0 to 228k chars (nullable) |
| is_pull_request | bool | 2 classes |
1,232,681,207
https://api.github.com/repos/huggingface/datasets/issues/4316
https://github.com/huggingface/datasets/pull/4316
4,316
Support passing config_kwargs to CLI run_beam
closed
1
2022-05-11T13:53:37
2022-05-11T14:36:49
2022-05-11T14:28:31
albertvillanova
[]
This PR supports passing `config_kwargs` to the CLI `run_beam` command so that, for example, for the "wikipedia" dataset we can pass: ``` --date 20220501 --language ca ```
true
1,232,549,330
https://api.github.com/repos/huggingface/datasets/issues/4315
https://github.com/huggingface/datasets/pull/4315
4,315
Fix CLI run_beam namespace
closed
1
2022-05-11T12:21:00
2022-05-11T13:13:00
2022-05-11T13:05:08
albertvillanova
[]
Currently, it raises TypeError: ``` TypeError: __init__() got an unexpected keyword argument 'namespace' ```
true
1,232,326,726
https://api.github.com/repos/huggingface/datasets/issues/4314
https://github.com/huggingface/datasets/pull/4314
4,314
Catch pull error when mirroring
closed
1
2022-05-11T09:38:35
2022-05-11T12:54:07
2022-05-11T12:46:42
lhoestq
[]
Catch pull errors when mirroring so that the script continues to update the other datasets. The error will still be printed at the end of the job. In this case the job also fails, and asks to manually update the datasets that failed.
true
1,231,764,100
https://api.github.com/repos/huggingface/datasets/issues/4313
https://github.com/huggingface/datasets/pull/4313
4,313
Add API code examples for Builder classes
closed
1
2022-05-10T22:22:32
2022-05-12T17:02:43
2022-05-12T12:36:57
stevhliu
[ "documentation" ]
This PR adds API code examples for the Builder classes.
true
1,231,662,775
https://api.github.com/repos/huggingface/datasets/issues/4312
https://github.com/huggingface/datasets/pull/4312
4,312
added TR-News dataset
closed
1
2022-05-10T20:33:00
2022-10-03T09:36:45
2022-10-03T09:36:45
batubayk
[ "dataset contribution" ]
null
true
1,231,369,438
https://api.github.com/repos/huggingface/datasets/issues/4311
https://github.com/huggingface/datasets/pull/4311
4,311
[Imagefolder] Docs + Don't infer labels from file names when there are metadata + Error messages when metadata and images aren't linked correctly
closed
2
2022-05-10T15:52:15
2022-05-10T17:19:42
2022-05-10T17:11:47
lhoestq
[]
I updated the `docs/source/image_process.mdx` documentation and added an example for image captioning and object detection using `ImageFolder`. While doing so I also improved a few aspects: - we don't need to infer labels from file names when there is metadata - the labels can just be in the metadata if necessary - raise informative error messages when metadata and images aren't linked correctly: - when an image is missing a metadata file - when a metadata file is missing an image I added some tests for these changes as well. cc @mariosasko
true
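A minimal sketch of the image-captioning layout the PR above documents, assuming the `metadata.jsonl` convention with a `file_name` column; the folder name and caption text are hypothetical.

```python
from datasets import load_dataset

# Hypothetical layout (not part of the PR itself):
#   my_folder/train/0001.png
#   my_folder/train/metadata.jsonl containing lines such as
#   {"file_name": "0001.png", "text": "a red bicycle leaning against a wall"}
#
# With a metadata file present, ImageFolder keeps the extra columns (here "text")
# and, after this PR, no longer infers class labels from the file names.
ds = load_dataset("imagefolder", data_dir="my_folder")
print(ds["train"][0]["text"])
```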
1,231,319,815
https://api.github.com/repos/huggingface/datasets/issues/4310
https://github.com/huggingface/datasets/issues/4310
4,310
Loading dataset with streaming: '_io.BufferedReader' object has no attribute 'loc'
closed
0
2022-05-10T15:12:53
2022-05-11T16:46:31
2022-05-11T16:46:31
milmin
[ "bug" ]
## Describe the bug Loading a datasets with `load_dataset` and `streaming=True` returns `AttributeError: '_io.BufferedReader' object has no attribute 'loc'`. Notice that loading with `streaming=False` works fine. In the following steps we load parquet files but the same happens with pickle files. The problem seems to come from `fsspec` lib, I put in the environment info also `s3fs` and `fsspec` versions since I'm loading from an s3 bucket. ## Steps to reproduce the bug ```python from datasets import load_dataset # path is the path to parquet files data_files = {"train": path + "meta_train.parquet.gzip", "test": path + "meta_test.parquet.gzip"} dataset = load_dataset("parquet", data_files=data_files, streaming=True) ``` ## Expected results A dataset object `datasets.dataset_dict.DatasetDict` ## Actual results ``` AttributeError Traceback (most recent call last) <command-562086> in <module> 11 12 data_files = {"train": path + "meta_train.parquet.gzip", "test": path + "meta_test.parquet.gzip"} ---> 13 dataset = load_dataset("parquet", data_files=data_files, streaming=True) /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1679 if streaming: 1680 extend_dataset_builder_for_streaming(builder_instance, use_auth_token=use_auth_token) -> 1681 return builder_instance.as_streaming_dataset( 1682 split=split, 1683 use_auth_token=use_auth_token, /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/builder.py in as_streaming_dataset(self, split, base_path, use_auth_token) 904 ) 905 self._check_manual_download(dl_manager) --> 906 splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)} 907 # By default, return all splits 908 if split is None: /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/packaged_modules/parquet/parquet.py in _split_generators(self, dl_manager) 30 if not self.config.data_files: 31 raise ValueError(f"At least one data file must be specified, but got data_files={self.config.data_files}") ---> 32 data_files = dl_manager.download_and_extract(self.config.data_files) 33 if isinstance(data_files, (str, list, tuple)): 34 files = data_files /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in download_and_extract(self, url_or_urls) 798 799 def download_and_extract(self, url_or_urls): --> 800 return self.extract(self.download(url_or_urls)) 801 802 def iter_archive(self, urlpath_or_buf: Union[str, io.BufferedReader]) -> Iterable[Tuple]: /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in extract(self, path_or_paths) 776 777 def extract(self, path_or_paths): --> 778 urlpaths = map_nested(self._extract, path_or_paths, map_tuple=True) 779 return urlpaths 780 /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types, disable_tqdm, desc) 312 num_proc = 1 313 if num_proc <= 1 or 
len(iterable) <= num_proc: --> 314 mapped = [ 315 _single_map_nested((function, obj, types, None, True, None)) 316 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc) /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in <listcomp>(.0) 313 if num_proc <= 1 or len(iterable) <= num_proc: 314 mapped = [ --> 315 _single_map_nested((function, obj, types, None, True, None)) 316 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc) 317 ] /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in _single_map_nested(args) 267 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar} 268 else: --> 269 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar] 270 if isinstance(data_struct, list): 271 return mapped /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in <listcomp>(.0) 267 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar} 268 else: --> 269 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar] 270 if isinstance(data_struct, list): 271 return mapped /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in _single_map_nested(args) 249 # Singleton first to spare some computation 250 if not isinstance(data_struct, dict) and not isinstance(data_struct, types): --> 251 return function(data_struct) 252 253 # Reduce logging to keep things readable in multiprocessing with tqdm /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in _extract(self, urlpath) 781 def _extract(self, urlpath: str) -> str: 782 urlpath = str(urlpath) --> 783 protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token) 784 if protocol is None: 785 # no extraction /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in _get_extraction_protocol(urlpath, use_auth_token) 371 urlpath, kwargs = urlpath, {} 372 with fsspec.open(urlpath, **kwargs) as f: --> 373 return _get_extraction_protocol_with_magic_number(f) 374 375 /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in _get_extraction_protocol_with_magic_number(f) 335 def _get_extraction_protocol_with_magic_number(f) -> Optional[str]: 336 """read the magic number from a file-like object and return the compression protocol""" --> 337 prev_loc = f.loc 338 magic_number = f.read(MAGIC_NUMBER_MAX_LENGTH) 339 f.seek(prev_loc) /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/fsspec/implementations/local.py in __getattr__(self, item) 337 338 def __getattr__(self, item): --> 339 return getattr(self.f, item) 340 341 def __enter__(self): AttributeError: '_io.BufferedReader' object has no attribute 'loc' ``` ## Environment info - `datasets` version: 2.1.0 - Platform: Linux-5.4.0-1071-aws-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 8.0.0 - Pandas version: 1.4.2 - `fsspec` version: 2021.08.1 - `s3fs` 
version: 2021.08.1
false
1,231,232,935
https://api.github.com/repos/huggingface/datasets/issues/4309
https://github.com/huggingface/datasets/pull/4309
4,309
[WIP] Add TEDLIUM dataset
closed
11
2022-05-10T14:12:47
2022-06-17T12:54:40
2022-06-17T11:44:01
sanchit-gandhi
[ "dataset request", "speech" ]
Adds the TED-LIUM dataset https://www.tensorflow.org/datasets/catalog/tedlium#tedliumrelease3 TODO: - [x] Port `tedium.py` from TF datasets using `convert_dataset.sh` script - [x] Make `load_dataset` work - [ ] ~~Run `datasets-cli` command to generate `dataset_infos.json`~~ - [ ] ~~Create dummy data for continuous testing~~ - [ ] ~~Dummy data tests~~ - [ ] ~~Real data tests~~ - [ ] Create the metadata JSON - [ ] Close PR and add directly to the Hub under LIUM org
true
1,231,217,783
https://api.github.com/repos/huggingface/datasets/issues/4308
https://github.com/huggingface/datasets/pull/4308
4,308
Remove unused multiprocessing args from test CLI
closed
1
2022-05-10T14:02:15
2022-05-11T12:58:25
2022-05-11T12:50:43
albertvillanova
[]
Multiprocessing is not used in the test CLI.
true
1,231,175,639
https://api.github.com/repos/huggingface/datasets/issues/4307
https://github.com/huggingface/datasets/pull/4307
4,307
Add packaged builder configs to the documentation
closed
1
2022-05-10T13:34:19
2022-05-10T14:03:50
2022-05-10T13:55:54
lhoestq
[]
Adding the packaged builders' configurations to the docs reference is useful to show the list of all the parameters one can use when loading data in many formats: CSV, JSON, etc.
true
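As a rough illustration of the parameters those builder configurations expose, a hedged sketch of loading CSV and JSON data with builder-specific options (the file paths are hypothetical):

```python
from datasets import load_dataset

# CSV builder options are forwarded to the underlying parser, e.g. a custom
# delimiter and explicit column names (paths are hypothetical).
csv_ds = load_dataset("csv", data_files="data/train.csv", sep=";", column_names=["text", "label"])

# The JSON builder exposes options such as `field` to read a nested list of records.
json_ds = load_dataset("json", data_files="data/train.json", field="data")
```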
1,231,137,204
https://api.github.com/repos/huggingface/datasets/issues/4306
https://github.com/huggingface/datasets/issues/4306
4,306
`load_dataset` does not work with certain filename.
closed
1
2022-05-10T13:14:04
2022-05-10T18:58:36
2022-05-10T18:58:09
whatever60
[ "bug" ]
## Describe the bug This is a weird bug that took me some time to find out. I have a JSON dataset that I want to load with `load_dataset` like this: ``` data_files = dict(train="train.json.zip", val="val.json.zip") dataset = load_dataset("json", data_files=data_files, field="data") ``` ## Expected results No error. ## Actual results The val file is loaded as expected, but the train file throws a JSON decoding error. The traceback runs from `load_dataset` (`datasets/load.py:1687`) through `builder_instance.download_and_prepare` (`datasets/builder.py:605`, `:694 _download_and_prepare`, `:1151 _prepare_split`) and tqdm (`tqdm/notebook.py:257`, `tqdm/std.py:1183`) into the packaged JSON builder (`datasets/packaged_modules/json/json.py:90 _generate_tables`, where `dataset = json.load(f)` is called because `field` is set), and ends in the standard library (`json/__init__.py:293 load`, `:357 loads`, `json/decoder.py:337 decode`, `:353 raw_decode`) with: ``` JSONDecodeError: Unterminated string starting at: line 85 column 20 (char 60051) ``` However, when I rename the `train.json.zip` to other names (like `training.json.zip`, or even to `train.json`), everything works fine; when I unzip the file to `train.json`, it works as well.
## Environment info ``` - `datasets` version: 2.1.0 - Platform: Linux-4.4.0-131-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyArrow version: 7.0.0 - Pandas version: 1.4.2 ```
false
1,231,099,934
https://api.github.com/repos/huggingface/datasets/issues/4305
https://github.com/huggingface/datasets/pull/4305
4,305
Fixes FrugalScore
open
2
2022-05-10T12:44:06
2022-09-22T16:42:06
null
moussaKam
[ "transfer-to-evaluate" ]
There are two minor modifications in this PR: 1) `predictions` and `references` are swapped. Basically FrugalScore is commutative; however, some tiny differences can occur if we swap the references and the predictions. I decided to swap them just to obtain the exact results reported in the paper. 2) I switched to the dynamic padding that was used in training; forcing the padding to `max_length` introduces errors for a reason I don't know. @lhoestq
true
1,231,047,051
https://api.github.com/repos/huggingface/datasets/issues/4304
https://github.com/huggingface/datasets/issues/4304
4,304
Language code search does direct matches
open
1
2022-05-10T11:59:16
2022-05-10T12:38:42
null
leondz
[ "bug" ]
## Describe the bug Hi. Searching for bcp47 tags that are just the language prefix (e.g. `sq` or `da`) excludes datasets that have added extra information in their language metadata (e.g. `sq-AL` or `da-bornholm`). The example codes given in the [tagging app](https://huggingface.co/spaces/huggingface/datasets-tagging) encourages addition of the additional codes ("_expected format is BCP47 tags separated for ';' e.g. 'en-US;fr-FR'_") but this would lead to those datasets being hidden in datasets search. ## Steps to reproduce the bug 1. Add a dataset using a variant tag (e.g. [`sq-AL`](https://huggingface.co/datasets?languages=languages:sq-AL)) 2. Look for datasets using the full code 3. Note that they're missing when just the language is searched for (e.g. [`sq`](https://huggingface.co/datasets?languages=languages:sq)) Some datasets are already affected by this - e.g. `AmazonScience/massive` is listed under `sq-AL` but not `sq`. One workaround is for dataset creators to add an additional root language tag to dataset YAML metadata, but it's unclear how to communicate this. It might be possible to index the search on `languagecode.split('-')[0]` but I wanted to float this issue before trying to write any code :) ## Expected results Datasets using longer bcp47 tags also appear under searches for just the language code; e.g. Quebecois datasets (`fr-CA`) would come up when looking for French datasets with no region specification (`fr`), or US English (`en-US`) datasets would come up when searching for English datasets (`en`). ## Actual results The language codes seem to be directly string matched, excluding datasets with specific language tags from non-specific searches. ## Environment info (web app)
false
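A hedged sketch of the prefix-matching idea floated in the issue above (indexing on `languagecode.split('-')[0]`); the tag list is purely illustrative.

```python
# Illustrative only: match a bare language code against full BCP-47 tags
# by comparing the primary-language subtag.
dataset_tags = ["sq-AL", "da-bornholm", "fr-CA", "en-US", "en"]

def matches(query: str, tag: str) -> bool:
    # "sq" matches "sq" and "sq-AL", but a different code like "sqi-XX" does not match.
    return tag.split("-")[0].lower() == query.lower()

print([t for t in dataset_tags if matches("sq", t)])  # ['sq-AL']
print([t for t in dataset_tags if matches("en", t)])  # ['en-US', 'en']
```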
1,230,867,728
https://api.github.com/repos/huggingface/datasets/issues/4303
https://github.com/huggingface/datasets/pull/4303
4,303
Fix: Add missing comma
closed
1
2022-05-10T09:21:38
2022-05-11T08:50:15
2022-05-11T08:50:14
mrm8488
[]
null
true
1,230,651,117
https://api.github.com/repos/huggingface/datasets/issues/4302
https://github.com/huggingface/datasets/pull/4302
4,302
Remove hacking license tags when mirroring datasets on the Hub
closed
9
2022-05-10T05:52:46
2022-05-20T09:48:30
2022-05-20T09:40:20
albertvillanova
[]
Currently, when mirroring datasets on the Hub, the license tags are hacked: the characters "." and "$" are removed. By contrast, this hacking is not applied to community datasets on the Hub, which generates multiple variants of the same tag on the Hub. I guess this hacking is no longer necessary: - it is not applied to community datasets - all canonical datasets are validated by maintainers before being merged: CI + maintainers make sure license tags are the right ones Fix #4298.
true
1,230,401,256
https://api.github.com/repos/huggingface/datasets/issues/4301
https://github.com/huggingface/datasets/pull/4301
4,301
Add ImageNet-Sketch dataset
closed
2
2022-05-09T23:38:45
2022-05-23T18:14:14
2022-05-23T18:05:29
nateraw
[]
This PR adds the ImageNet-Sketch dataset and resolves #3953 .
true
1,230,272,761
https://api.github.com/repos/huggingface/datasets/issues/4300
https://github.com/huggingface/datasets/pull/4300
4,300
Add API code examples for loading methods
closed
1
2022-05-09T21:30:26
2022-05-25T16:23:15
2022-05-25T09:20:13
stevhliu
[ "documentation" ]
This PR adds API code examples for loading methods, let me know if I've missed any important parameters we should showcase :) I was a bit confused about `inspect_dataset` and `inspect_metric`. The `path` parameter says it will accept a dataset identifier from the Hub. But when I try the identifier `rotten_tomatoes`, it gives me: ```py from datasets import inspect_dataset inspect_dataset('rotten_tomatoes', local_path='/content/rotten_tomatoes') FileNotFoundError: Couldn't find a dataset script at /content/rotten_tomatoes/rotten_tomatoes.py or any data file in the same directory. ``` Does the user need to have an existing copy of `rotten_tomatoes.py` on their local drive (in which case, it seems like the same option as the first option in `path`)?
true
1,230,236,782
https://api.github.com/repos/huggingface/datasets/issues/4299
https://github.com/huggingface/datasets/pull/4299
4,299
Remove manual download from imagenet-1k
closed
3
2022-05-09T20:49:18
2022-05-25T14:54:59
2022-05-25T14:46:16
mariosasko
[]
Remove the manual download code from `imagenet-1k` to make it a regular dataset.
true
1,229,748,006
https://api.github.com/repos/huggingface/datasets/issues/4298
https://github.com/huggingface/datasets/issues/4298
4,298
Normalise license names
closed
2
2022-05-09T13:51:32
2022-05-20T09:51:50
2022-05-20T09:51:50
leondz
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** When browsing datasets, the Licenses tag cloud (bottom left of e.g. https://huggingface.co/datasets) has multiple variants of the same license. This means the options exclude datasets arbitrarily, giving users artificially low recall. The cause of the dupes is probably due to a bit of variation in metadata. **Describe the solution you'd like** I'd like the licenses in metadata to follow the same standard as much as possible, to remove this problem. I'd like to go ahead and normalise the dataset metadata to follow the format & values given in [src/datasets/utils/resources/licenses.json](https://github.com/huggingface/datasets/blob/master/src/datasets/utils/resources/licenses.json) . **Describe alternatives you've considered** None **Additional context** None **Priority** Low
false
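A hedged sketch of the kind of normalisation the issue asks for, assuming `licenses.json` maps canonical license identifiers to display names; the exact file structure and the example tag are assumptions.

```python
import json

# Assumed shape of licenses.json: {"apache-2.0": "Apache License 2.0", ...}
with open("licenses.json", encoding="utf-8") as f:
    canonical_ids = set(json.load(f).keys())

def normalise(tag: str) -> str:
    """Map a free-form license tag onto a canonical identifier when possible."""
    candidate = tag.strip().lower().replace(" ", "-")
    return candidate if candidate in canonical_ids else tag  # leave unknown tags untouched

print(normalise("Apache 2.0"))  # -> "apache-2.0" if that id is listed in licenses.json
```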
1,229,735,498
https://api.github.com/repos/huggingface/datasets/issues/4297
https://github.com/huggingface/datasets/issues/4297
4,297
Datasets YAML tagging space is down
closed
3
2022-05-09T13:45:05
2022-05-09T14:44:25
2022-05-09T14:44:25
leondz
[ "bug" ]
## Describe the bug The neat hf spaces app for generating YAML tags for dataset `README.md`s is down ## Steps to reproduce the bug 1. Visit https://huggingface.co/spaces/huggingface/datasets-tagging ## Expected results There'll be a HF spaces web app for generating dataset metadata YAML ## Actual results There's an error message; here's the step where it breaks: ``` Step 18/29 : RUN pip install -r requirements.txt ---> Running in e88bfe7e7e0c Defaulting to user installation because normal site-packages is not writeable Collecting git+https://github.com/huggingface/datasets.git@update-task-list (from -r requirements.txt (line 4)) Cloning https://github.com/huggingface/datasets.git (to revision update-task-list) to /tmp/pip-req-build-bm8t0r0k Running command git clone --filter=blob:none --quiet https://github.com/huggingface/datasets.git /tmp/pip-req-build-bm8t0r0k WARNING: Did not find branch or tag 'update-task-list', assuming revision or ref. Running command git checkout -q update-task-list error: pathspec 'update-task-list' did not match any file(s) known to git error: subprocess-exited-with-error × git checkout -q update-task-list did not run successfully. │ exit code: 1 ╰─> See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error × git checkout -q update-task-list did not run successfully. │ exit code: 1 ╰─> See above for output. ``` ## Environment info - Platform: Linux / Brave
false
1,229,554,645
https://api.github.com/repos/huggingface/datasets/issues/4296
https://github.com/huggingface/datasets/pull/4296
4,296
Fix URL query parameters in compression hop path when streaming
open
1
2022-05-09T11:18:22
2022-07-06T15:19:53
null
albertvillanova
[]
Fix #3488.
true
1,229,527,283
https://api.github.com/repos/huggingface/datasets/issues/4295
https://github.com/huggingface/datasets/pull/4295
4,295
Fix missing lz4 dependency for tests
closed
1
2022-05-09T10:53:20
2022-05-09T11:21:22
2022-05-09T11:13:44
albertvillanova
[]
Currently, `lz4` is not defined as a dependency for tests. Therefore, all tests marked with `@require_lz4` are skipped.
true
1,229,455,582
https://api.github.com/repos/huggingface/datasets/issues/4294
https://github.com/huggingface/datasets/pull/4294
4,294
Fix CLI run_beam save_infos
closed
1
2022-05-09T09:47:43
2022-05-10T07:04:04
2022-05-10T06:56:10
albertvillanova
[]
Currently, it raises TypeError: ``` TypeError: _download_and_prepare() got an unexpected keyword argument 'save_infos' ```
true
1,228,815,477
https://api.github.com/repos/huggingface/datasets/issues/4293
https://github.com/huggingface/datasets/pull/4293
4,293
Fix wrong map parameter name in cache docs
closed
1
2022-05-08T07:27:46
2022-06-14T16:49:00
2022-06-14T16:07:00
h4iku
[]
The `load_from_cache` parameter of `map` should be `load_from_cache_file`.
true
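For context on the renamed parameter, a small self-contained sketch of `map` with `load_from_cache_file` on a toy dataset:

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c"]})

# `load_from_cache_file` (not `load_from_cache`) controls whether a previously
# cached result of this exact map call is reused instead of recomputed.
ds = ds.map(lambda ex: {"text_upper": ex["text"].upper()}, load_from_cache_file=False)
print(ds[0])
```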
1,228,216,788
https://api.github.com/repos/huggingface/datasets/issues/4292
https://github.com/huggingface/datasets/pull/4292
4,292
Add API code examples for remaining main classes
closed
1
2022-05-06T18:15:31
2022-05-25T18:05:13
2022-05-25T17:56:36
stevhliu
[ "documentation" ]
This PR adds API code examples for the remaining functions in the Main classes. I wasn't too familiar with some of the functions (`decode_batch`, `decode_column`, `decode_example`, etc.) so please feel free to add an example of usage and I can fill in the rest :)
true
1,227,777,500
https://api.github.com/repos/huggingface/datasets/issues/4291
https://github.com/huggingface/datasets/issues/4291
4,291
Dataset Viewer issue for strombergnlp/ipm_nel : preview is empty, no error message
closed
2
2022-05-06T12:03:27
2022-05-09T08:25:58
2022-05-09T08:25:58
leondz
[ "dataset-viewer" ]
### Link https://huggingface.co/datasets/strombergnlp/ipm_nel/viewer/ipm_nel/train ### Description The viewer is blank. I tried my best to emulate a dataset with a working viewer, but this one just doesn't seem to want to come up. What did I miss? ### Owner Yes
false
1,227,592,826
https://api.github.com/repos/huggingface/datasets/issues/4290
https://github.com/huggingface/datasets/pull/4290
4,290
Update paper link in medmcqa dataset card
closed
2
2022-05-06T08:52:51
2022-09-30T11:51:28
2022-09-30T11:49:07
monk1337
[ "dataset contribution" ]
Updating readme in medmcqa dataset.
true
1,226,821,732
https://api.github.com/repos/huggingface/datasets/issues/4288
https://github.com/huggingface/datasets/pull/4288
4,288
Add missing `faiss` import to fix https://github.com/huggingface/datasets/issues/4287
closed
0
2022-05-05T15:21:49
2022-05-10T12:55:06
2022-05-10T12:09:48
alvarobartt
[]
This PR fixes the issue recently mentioned in https://github.com/huggingface/datasets/issues/4287 🤗
true
1,226,806,652
https://api.github.com/repos/huggingface/datasets/issues/4287
https://github.com/huggingface/datasets/issues/4287
4,287
"NameError: name 'faiss' is not defined" on `.add_faiss_index` when `device` is not None
closed
3
2022-05-05T15:09:45
2022-05-10T13:53:19
2022-05-10T13:53:19
alvarobartt
[ "bug" ]
## Describe the bug When using `datasets` to calculate the FAISS indices of a dataset, the exception `NameError: name 'faiss' is not defined` is triggered when trying to calculate those on a device (GPU), so `.add_faiss_index(..., device=0)` fails with that exception. All that assuming that `datasets` is properly installed and `faiss-gpu` too, as well as all the CUDA drivers required. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from transformers import DPRContextEncoder, DPRContextEncoderTokenizer import torch torch.set_grad_enabled(False) ctx_encoder = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base") ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base") from datasets import load_dataset ds = load_dataset('crime_and_punish', split='train[:100]') ds_with_embeddings = ds.map(lambda example: {'embeddings': ctx_encoder(**ctx_tokenizer(example["line"], return_tensors="pt"))[0][0].numpy()}) ds_with_embeddings.add_faiss_index(column='embeddings', device=0) # default `device=None` ``` ## Expected results A new column named `embeddings` in the dataset that we're adding the index to. ## Actual results An exception is triggered with the following message `NameError: name 'faiss' is not defined`. ## Environment info - `datasets` version: 2.1.0 - Platform: Linux-5.13.0-1022-azure-x86_64-with-glibc2.31 - Python version: 3.9.12 - PyArrow version: 7.0.0 - Pandas version: 1.4.2
false
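Until the fix referenced above landed, one hedged workaround sketch was to build the index on CPU (the default `device=None`), which avoids the GPU-only code path that raised the `NameError`; the toy embeddings below are random, and `faiss-cpu` (or `faiss-gpu`) is assumed to be installed.

```python
import numpy as np
from datasets import Dataset

# Toy dataset with random 8-dimensional embeddings (illustrative only).
ds = Dataset.from_dict({"embeddings": np.random.rand(100, 8).astype("float32").tolist()})

# Building the index without `device` keeps everything on CPU.
ds.add_faiss_index(column="embeddings")

query = np.random.rand(8).astype("float32")
scores, retrieved = ds.get_nearest_examples("embeddings", query, k=3)
print(scores)
```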
1,226,758,621
https://api.github.com/repos/huggingface/datasets/issues/4286
https://github.com/huggingface/datasets/pull/4286
4,286
Add Lahnda language tag
closed
1
2022-05-05T14:34:20
2022-05-10T12:10:04
2022-05-10T12:02:38
mariosasko
[]
This language is present in [Wikimedia's WIT](https://huggingface.co/datasets/wikimedia/wit_base) dataset.
true
1,226,374,831
https://api.github.com/repos/huggingface/datasets/issues/4285
https://github.com/huggingface/datasets/pull/4285
4,285
Update LexGLUE README.md
closed
1
2022-05-05T08:36:50
2022-05-05T13:39:04
2022-05-05T13:33:35
iliaschalkidis
[]
Update the leaderboard based on the latest results presented in the ACL 2022 version of the article.
true
1,226,200,727
https://api.github.com/repos/huggingface/datasets/issues/4284
https://github.com/huggingface/datasets/issues/4284
4,284
Issues in processing very large datasets
closed
2
2022-05-05T05:01:09
2023-07-25T15:12:38
2023-07-25T15:12:38
sajastu
[ "bug" ]
## Describe the bug I'm trying to add a feature called "subgraph" to CNN/DM dataset (modifications on run_summarization.py of Huggingface Transformers script) --- I'm not quite sure if I'm doing it the right way, though--- but the main problem appears when the training starts where the error ` [OSError: [Errno 12] Cannot allocate memory]` appears. I suppose this problem roots in RAM issues and how the dataset is loaded during training, but I have no clue of what I can do to fix it. Observing the dataset's cache directory, I see that it takes ~600GB of memory and that's why I believe special care is needed when loading it into the memory. Here are my modifications to `run_summarization.py` code. ``` # loading pre-computed dictionary where keys are 'id' of article and values are corresponding subgraph graph_data_train = get_graph_data('train') graph_data_validation = get_graph_data('val') ... ... with training_args.main_process_first(desc="train dataset map pre-processing"): train_dataset = train_dataset.map( preprocess_function_train, batched=True, num_proc=data_args.preprocessing_num_workers, remove_columns=column_names, load_from_cache_file=not data_args.overwrite_cache, desc="Running tokenizer on train dataset", ) ``` And here is the modified preprocessed function: ``` def preprocess_function_train(examples): inputs, targets, sub_graphs, ids = [], [], [], [] for i in range(len(examples[text_column])): if examples[text_column][i] is not None and examples[summary_column][i] is not None: # if examples['doc_id'][i] in graph_data.keys(): inputs.append(examples[text_column][i]) targets.append(examples[summary_column][i]) sub_graphs.append(graph_data_train[examples['id'][i]]) ids.append(examples['id'][i]) inputs = [prefix + inp for inp in inputs] model_inputs = tokenizer(inputs, max_length=data_args.max_source_length, padding=padding, truncation=True, sub_graphs=sub_graphs, ids=ids) # Setup the tokenizer for targets with tokenizer.as_target_tokenizer(): labels = tokenizer(targets, max_length=max_target_length, padding=padding, truncation=True) # If we are padding here, replace all tokenizer.pad_token_id in the labels by -100 when we want to ignore # padding in the loss. if padding == "max_length" and data_args.ignore_pad_token_for_loss: labels["input_ids"] = [ [(l if l != tokenizer.pad_token_id else -100) for l in label] for label in labels["input_ids"] ] model_inputs["labels"] = labels["input_ids"] return model_inputs ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.1.0 - Platform: Linux Ubuntu - Python version: 3.6 - PyArrow version: 6.0.1
false
1,225,686,988
https://api.github.com/repos/huggingface/datasets/issues/4283
https://github.com/huggingface/datasets/pull/4283
4,283
Fix filesystem docstring
closed
1
2022-05-04T17:42:42
2022-05-06T16:32:02
2022-05-06T06:22:17
stevhliu
[]
This PR untangles the `S3FileSystem` docstring so the [parameters](https://huggingface.co/docs/datasets/master/en/package_reference/main_classes#parameters) are properly displayed.
true
1,225,616,545
https://api.github.com/repos/huggingface/datasets/issues/4282
https://github.com/huggingface/datasets/pull/4282
4,282
Don't do unnecessary list type casting to avoid replacing None values by empty lists
closed
3
2022-05-04T16:37:01
2022-05-06T10:43:58
2022-05-06T10:37:00
lhoestq
[]
In certain cases, `None` values are replaced by empty lists when casting feature types. It happens every time you cast an array of nested lists like [None, [0, 1, 2, 3]] to a different type (to change the integer precision for example). In this case you'd get [[], [0, 1, 2, 3]] for example. This issue comes from PyArrow, see the discussion in https://github.com/huggingface/datasets/issues/3676 This issue also happens when no type casting is needed, because casting is supposed to be a no-op in this case. But as https://github.com/huggingface/datasets/issues/3676 shown, it's not the case and `None` are replaced by empty lists even if we cast to the exact same type. In this PR I just workaround this bug in the case where no type casting is needed. In particular, I only call `pa.ListArray.from_arrays` only when necessary. I also added a warning when some `None` are effectively replaced by empty lists. I wanted to raise an error in this case, but maybe we should wait a major update to do so This PR fixes this particular case, that is occurring in `run_qa.py` in `transformers`: ```python from datasets import Dataset ds = Dataset.from_dict({"a": range(4)}) ds = ds.map(lambda x: {"b": [[None, [0]]]}, batched=True, batch_size=1, remove_columns=["a"]) print(ds.to_pandas()) # before: # b # 0 [None, [0]] # 1 [[], [0]] # 2 [[], [0]] # 3 [[], [0]] # # now: # b # 0 [None, [0]] # 1 [None, [0]] # 2 [None, [0]] # 3 [None, [0]] ``` cc @sgugger
true
1,225,556,939
https://api.github.com/repos/huggingface/datasets/issues/4281
https://github.com/huggingface/datasets/pull/4281
4,281
Remove a copy-paste sentence in dataset cards
closed
2
2022-05-04T15:41:55
2022-05-06T08:38:03
2022-05-04T18:33:16
albertvillanova
[]
Remove the following copy-paste sentence from dataset cards: ``` We show detailed information for up to 5 configurations of the dataset. ```
true
1,225,446,844
https://api.github.com/repos/huggingface/datasets/issues/4280
https://github.com/huggingface/datasets/pull/4280
4,280
Add missing features to commonsense_qa dataset
closed
3
2022-05-04T14:24:26
2022-05-06T14:23:57
2022-05-06T14:16:46
albertvillanova
[]
Fix partially #4275.
true
1,225,300,273
https://api.github.com/repos/huggingface/datasets/issues/4279
https://github.com/huggingface/datasets/pull/4279
4,279
Update minimal PyArrow version warning
closed
1
2022-05-04T12:26:09
2022-05-05T08:50:58
2022-05-05T08:43:47
mariosasko
[]
Update the minimal PyArrow version warning (should've been part of #4250).
true
1,225,122,123
https://api.github.com/repos/huggingface/datasets/issues/4278
https://github.com/huggingface/datasets/pull/4278
4,278
Add missing features to openbookqa dataset for additional config
closed
2
2022-05-04T09:22:50
2022-05-06T13:13:20
2022-05-06T13:06:01
albertvillanova
[]
Fix partially #4276.
true
1,225,002,286
https://api.github.com/repos/huggingface/datasets/issues/4277
https://github.com/huggingface/datasets/pull/4277
4,277
Enable label alignment for token classification datasets
closed
3
2022-05-04T07:15:16
2022-05-06T15:42:15
2022-05-06T15:36:31
lewtun
[]
This PR extends the `Dataset.align_labels_with_mapping()` method to support alignment of label mappings between datasets and models for token classification (e.g. NER). Example of usage: ```python from datasets import load_dataset ner_ds = load_dataset("conll2003", split="train") # returns [3, 0, 7, 0, 0, 0, 7, 0, 0] ner_ds[0]["ner_tags"] # hypothetical model mapping with O <--> B-LOC label2id = { "B-LOC": "0", "B-MISC": "7", "B-ORG": "3", "B-PER": "1", "I-LOC": "6", "I-MISC": "8", "I-ORG": "4", "I-PER": "2", "O": "5" } ner_aligned_ds = ner_ds.align_labels_with_mapping(label2id, "ner_tags") # returns [3, 5, 7, 5, 5, 5, 7, 5, 5] ner_aligned_ds[0]["ner_tags"] ``` Context: we need this in AutoTrain to automatically align datasets / models during evaluation. cc @abhishekkrthakur
true
1,224,949,252
https://api.github.com/repos/huggingface/datasets/issues/4276
https://github.com/huggingface/datasets/issues/4276
4,276
OpenBookQA has missing and inconsistent field names
closed
11
2022-05-04T05:51:52
2022-10-11T17:11:53
2022-10-05T13:50:03
vblagoje
[ "dataset bug" ]
## Describe the bug OpenBookQA implementation is inconsistent with the original dataset. We need to: 1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format. 2. Add missing additional fields: - 'fact1': row['fact1'], - 'humanScore': row['humanScore'], - 'clarity': row['clarity'], - 'turkIdAnonymized': row['turkIdAnonymized'] 3. Ensure the structure and every data item in the original OpenBookQA matches our OpenBookQA version. ## Expected results The structure and every data item in the original OpenBookQA matches our OpenBookQA version. ## Actual results TBD ## Environment info - `datasets` version: 2.1.0 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.13 - PyArrow version: 7.0.0 - Pandas version: 1.4.2
false
1,224,943,414
https://api.github.com/repos/huggingface/datasets/issues/4275
https://github.com/huggingface/datasets/issues/4275
4,275
CommonSenseQA has missing and inconsistent field names
open
1
2022-05-04T05:38:59
2022-05-04T11:41:18
null
vblagoje
[ "dataset bug" ]
## Describe the bug In short, the CommonSenseQA implementation is inconsistent with the original dataset. More precisely, we need to: 1. Add the original dataset's "id" field; the current dataset instead regenerates a monotonically increasing id. 2. The ["question"]["stem"] field is flattened into "question". We should match the original dataset and unflatten it. 3. Add the missing "question_concept" field in the question tree node. 4. Anything else? Go over the data structure of the newly repaired CommonSenseQA and make sure it matches the original. ## Expected results Every data item of the CommonSenseQA should structurally and data-wise match the original CommonSenseQA dataset. ## Actual results TBD ## Environment info - `datasets` version: 2.1.0 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.13 - PyArrow version: 7.0.0 - Pandas version: 1.4.2
false
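A hedged sketch of the unflattening described in point 2, rebuilding a nested question node from the current flat `question` string with `Dataset.map`; the column name `question_nested` is hypothetical, and the missing fields (`question_concept`, the original `id`) would still have to come from the upstream data.

```python
from datasets import load_dataset

ds = load_dataset("commonsense_qa", split="validation[:10]")

def unflatten(example):
    # Wrap the flat question string back into a node resembling the original
    # {"question": {"stem": ...}} layout; other fields are omitted here.
    return {"question_nested": {"stem": example["question"]}}

fixed = ds.map(unflatten)
print(fixed[0]["question_nested"])
```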
1,224,740,303
https://api.github.com/repos/huggingface/datasets/issues/4274
https://github.com/huggingface/datasets/pull/4274
4,274
Add API code examples for IterableDataset
closed
1
2022-05-03T22:44:17
2022-05-04T16:29:32
2022-05-04T16:22:04
stevhliu
[ "documentation" ]
This PR adds API code examples for `IterableDataset` and `IterableDatasetDicts`.
true
1,224,681,036
https://api.github.com/repos/huggingface/datasets/issues/4273
https://github.com/huggingface/datasets/pull/4273
4,273
leaderboard info added for TNE
closed
1
2022-05-03T21:35:41
2022-05-05T13:25:24
2022-05-05T13:18:13
yanaiela
[]
null
true
1,224,635,660
https://api.github.com/repos/huggingface/datasets/issues/4272
https://github.com/huggingface/datasets/pull/4272
4,272
Fix typo in logging docs
closed
4
2022-05-03T20:47:57
2022-05-04T15:42:27
2022-05-04T06:58:36
stevhliu
[]
This PR fixes #4271.
true
1,224,404,403
https://api.github.com/repos/huggingface/datasets/issues/4271
https://github.com/huggingface/datasets/issues/4271
4,271
A typo in docs of datasets.disable_progress_bar
closed
1
2022-05-03T17:44:56
2022-05-04T06:58:35
2022-05-04T06:58:35
jiangwangyi
[ "bug" ]
## Describe the bug In the docs of V2.1.0 `datasets.disable_progress_bar`, we should replace "enable" with "disable".
false
1,224,244,460
https://api.github.com/repos/huggingface/datasets/issues/4270
https://github.com/huggingface/datasets/pull/4270
4,270
Fix style in openbookqa dataset
closed
1
2022-05-03T15:21:34
2022-05-06T08:38:06
2022-05-03T16:20:52
albertvillanova
[]
CI in PR: - #4259 was green, but after merging it to master, a code quality error appeared.
true
1,223,865,145
https://api.github.com/repos/huggingface/datasets/issues/4269
https://github.com/huggingface/datasets/pull/4269
4,269
Add license and point of contact to big_patent dataset
closed
1
2022-05-03T09:24:07
2022-05-06T08:38:09
2022-05-03T11:16:19
albertvillanova
[]
Update metadata of big_patent dataset with: - license - point of contact
true
1,223,331,964
https://api.github.com/repos/huggingface/datasets/issues/4268
https://github.com/huggingface/datasets/issues/4268
4,268
error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered
closed
10
2022-05-02T20:34:25
2022-05-06T15:53:30
2022-05-03T11:23:48
i-am-neo
[ "dataset bug" ]
## Describe the bug Error generated when attempting to download dataset ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered") ``` ## Expected results A clear and concise description of the expected results. ## Actual results ``` ExpectedMoreDownloadedFiles Traceback (most recent call last) [<ipython-input-62-4ac5cf959477>](https://localhost:8080/#) in <module>() 1 from datasets import load_dataset 2 ----> 3 dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered") 3 frames [/usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py](https://localhost:8080/#) in verify_checksums(expected_checksums, recorded_checksums, verification_name) 31 return 32 if len(set(expected_checksums) - set(recorded_checksums)) > 0: ---> 33 raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums))) 34 if len(set(recorded_checksums) - set(expected_checksums)) > 0: 35 raise UnexpectedDownloadedFile(str(set(recorded_checksums) - set(expected_checksums))) ExpectedMoreDownloadedFiles: {'/home/leandro/catalogue_data/datasets/lm_en_wiktionary_filtered/data/file-01.jsonl.gz', '/home/leandro/catalogue_data/datasets/lm_en_wiktionary_filtered/data/file-01.jsonl.gz.lock'} ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 6.0.1
false
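A hedged workaround sketch for the verification error above: in this `datasets` version, `load_dataset` accepted `ignore_verifications=True`, which skips the checksum/split checks that raise `ExpectedMoreDownloadedFiles` (whether the dataset then loads correctly is a separate question).

```python
from datasets import load_dataset

# Skips checksum and split verification; use with care, since the mismatch
# usually points at a real problem in the dataset repository itself.
dataset = load_dataset(
    "bigscience-catalogue-lm-data/lm_en_wiktionary_filtered",
    ignore_verifications=True,
)
```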
1,223,214,275
https://api.github.com/repos/huggingface/datasets/issues/4267
https://github.com/huggingface/datasets/pull/4267
4,267
Replace data URL in SAMSum dataset within the same repository
closed
1
2022-05-02T18:38:08
2022-05-06T08:38:13
2022-05-02T19:03:49
albertvillanova
[]
Replace data URL with one in the same repository.
true
1,223,116,436
https://api.github.com/repos/huggingface/datasets/issues/4266
https://github.com/huggingface/datasets/pull/4266
4,266
Add HF Speech Bench to Librispeech Dataset Card
closed
1
2022-05-02T16:59:31
2022-05-05T08:47:20
2022-05-05T08:40:09
sanchit-gandhi
[]
Adds the HF Speech Bench to Librispeech Dataset Card in place of the Papers With Code Leaderboard. Should improve usage and visibility of this leaderboard! Wondering whether this can also be done for [Common Voice 7](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) and [8](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0) through someone with permissions? cc @patrickvonplaten: more leaderboard promotion!
true
1,222,723,083
https://api.github.com/repos/huggingface/datasets/issues/4263
https://github.com/huggingface/datasets/pull/4263
4,263
Rename imagenet2012 -> imagenet-1k
closed
4
2022-05-02T10:26:21
2022-05-02T17:50:46
2022-05-02T16:32:57
lhoestq
[]
On the Hugging Face Hub, users refer to imagenet2012 (from #4178 ) as imagenet-1k in their model tags. To correctly link models to imagenet, we should rename this dataset `imagenet-1k`. Later we can add `imagenet-21k` as a new dataset if we want. Once this one is merged we can delete the `imagenet2012` dataset repository on the Hub. EDIT: to complete the rationale on why we should name it `imagenet-1k`: If users specifically added the tag `imagenet-1k` , then it could be for two reasons (not sure which one is predominant), either they - wanted to make it explicit that it's not 21k -> the distinction is important for the community - or they have been following this convention from other models -> the convention implicitly exists already
true
1,222,130,749
https://api.github.com/repos/huggingface/datasets/issues/4262
https://github.com/huggingface/datasets/pull/4262
4,262
Add YAML tags to Dataset Card rotten tomatoes
closed
1
2022-05-01T11:59:08
2022-05-03T14:27:33
2022-05-03T14:20:35
mo6zes
[]
The dataset card for the rotten tomatoes / MR movie review dataset had some missing YAML tags. Hopefully, this also improves the visibility of this dataset now that paperswithcode and huggingface link to each other.
true
1,221,883,779
https://api.github.com/repos/huggingface/datasets/issues/4261
https://github.com/huggingface/datasets/issues/4261
4,261
data leakage in `webis/conclugen` dataset
closed
5
2022-04-30T17:43:37
2022-05-03T06:04:26
2022-05-03T06:04:26
xflashxx
[ "dataset bug" ]
## Describe the bug Some samples (argument-conclusion pairs) in the *training* split of the `webis/conclugen` dataset are present in both the *validation* and *test* splits, creating data leakage and distorting model results. Furthermore, all splits contain duplicate samples. ## Steps to reproduce the bug ```python from datasets import load_dataset training = load_dataset("webis/conclugen", "base", split="train") validation = load_dataset("webis/conclugen", "base", split="validation") testing = load_dataset("webis/conclugen", "base", split="test") # collect which sample id's are present in the training split ids_validation = list() ids_testing = list() for train_sample in training: train_argument = train_sample["argument"] train_conclusion = train_sample["conclusion"] train_id = train_sample["id"] # test if current sample is in validation split if train_argument in validation["argument"]: for validation_sample in validation: validation_argument = validation_sample["argument"] validation_conclusion = validation_sample["conclusion"] validation_id = validation_sample["id"] if train_argument == validation_argument and train_conclusion == validation_conclusion: ids_validation.append(validation_id) # test if current sample is in test split if train_argument in testing["argument"]: for testing_sample in testing: testing_argument = testing_sample["argument"] testing_conclusion = testing_sample["conclusion"] testing_id = testing_sample["id"] if train_argument == testing_argument and train_conclusion == testing_conclusion: ids_testing.append(testing_id) ``` ## Expected results Length of both lists `ids_validation` and `ids_testing` should be zero. ## Actual results Length of `ids_validation` = `2556` Length of `ids_testing` = `287` Furthermore, there seems to be duplicate samples in (at least) the *training* split, since: `print(len(set(ids_validation)))` = `950` `print(len(set(ids_testing)))` = `101` All in all, around 7% of the samples of each the *validation* and *test* split seems to be present in the *training* split. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.4 - Platform: macOS-12.3.1-arm64-arm-64bit - Python version: 3.9.10 - PyArrow version: 7.0.0
false
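The reproduction script above is quadratic in the split sizes; a hedged, faster equivalent builds a set of (argument, conclusion) pairs from the training split and checks membership (same logic, illustrative only):

```python
from datasets import load_dataset

training = load_dataset("webis/conclugen", "base", split="train")
validation = load_dataset("webis/conclugen", "base", split="validation")
testing = load_dataset("webis/conclugen", "base", split="test")

# All (argument, conclusion) pairs seen during training.
train_pairs = set(zip(training["argument"], training["conclusion"]))

ids_validation = [ex["id"] for ex in validation
                  if (ex["argument"], ex["conclusion"]) in train_pairs]
ids_testing = [ex["id"] for ex in testing
               if (ex["argument"], ex["conclusion"]) in train_pairs]

print(len(ids_validation), len(set(ids_validation)))
print(len(ids_testing), len(set(ids_testing)))
```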
1,221,830,292
https://api.github.com/repos/huggingface/datasets/issues/4260
https://github.com/huggingface/datasets/pull/4260
4,260
Add mr_polarity movie review sentiment classification
closed
1
2022-04-30T13:19:33
2022-04-30T14:16:25
2022-04-30T14:16:25
mo6zes
[]
Add the MR (Movie Review) dataset. The original dataset contains sentences from Rotten Tomatoes labeled as either "positive" or "negative". Homepage: [https://www.cs.cornell.edu/people/pabo/movie-review-data/](https://www.cs.cornell.edu/people/pabo/movie-review-data/) paperswithcode: [https://paperswithcode.com/dataset/mr](https://paperswithcode.com/dataset/mr) - [ ] I was not able to generate dummy data, the original dataset files have ".pos" and ".neg" as file extensions so the auto-generator does not work. Is it fine like this or should dummy data be added?
true
1,221,768,025
https://api.github.com/repos/huggingface/datasets/issues/4259
https://github.com/huggingface/datasets/pull/4259
4,259
Fix bug in choices labels in openbookqa dataset
closed
1
2022-04-30T07:41:39
2022-05-04T06:31:31
2022-05-03T15:14:21
manandey
[]
This PR fixes the Bug in the openbookqa dataset as mentioned in this issue #3550. Fix #3550. cc. @lhoestq @mariosasko
true
1,221,637,727
https://api.github.com/repos/huggingface/datasets/issues/4258
https://github.com/huggingface/datasets/pull/4258
4,258
Fix/start token mask issue and update documentation
closed
2
2022-04-29T22:42:44
2022-05-02T16:33:20
2022-05-02T16:26:12
TristanThrush
[]
This PR fixes a couple of bugs: 1) the perplexity was calculated with a 0 in the attention mask for the start token, which was causing high perplexity scores that were not correct; 2) the documentation was not updated.
true
1,221,393,137
https://api.github.com/repos/huggingface/datasets/issues/4257
https://github.com/huggingface/datasets/pull/4257
4,257
Create metric card for Mahalanobis Distance
closed
1
2022-04-29T18:37:27
2022-05-02T14:50:18
2022-05-02T14:43:24
sashavor
[]
proposing a metric card to better explain how Mahalanobis distance works (last one for now :sweat_smile:)
true
1,221,379,625
https://api.github.com/repos/huggingface/datasets/issues/4256
https://github.com/huggingface/datasets/pull/4256
4,256
Create metric card for MSE
closed
1
2022-04-29T18:21:22
2022-05-02T14:55:42
2022-05-02T14:48:47
sashavor
[]
Proposing a metric card for Mean Squared Error
true
1,221,142,899
https://api.github.com/repos/huggingface/datasets/issues/4255
https://github.com/huggingface/datasets/pull/4255
4,255
No google drive URL for pubmed_qa
closed
2
2022-04-29T15:55:46
2022-04-29T16:24:55
2022-04-29T16:18:56
lhoestq
[]
I hosted the data files in https://huggingface.co/datasets/pubmed_qa. This is allowed because the data is under the MIT license. cc @stas00
true
1,220,204,395
https://api.github.com/repos/huggingface/datasets/issues/4254
https://github.com/huggingface/datasets/pull/4254
4,254
Replace data URL in SAMSum dataset and support streaming
closed
1
2022-04-29T08:21:43
2022-05-06T08:38:16
2022-04-29T16:26:09
albertvillanova
[]
This PR replaces data URL in SAMSum dataset: - original host (arxiv.org) does not allow HTTP Range requests - we have hosted the data on the Hub (license: CC BY-NC-ND 4.0) Moreover, it implements support for streaming. Fix #4146. Related to: #4236. CC: @severo
true
1,219,286,408
https://api.github.com/repos/huggingface/datasets/issues/4253
https://github.com/huggingface/datasets/pull/4253
4,253
Create metric cards for mean IOU
closed
1
2022-04-28T20:58:27
2022-04-29T17:44:47
2022-04-29T17:38:06
sashavor
[]
Proposing a metric card for mIoU :rocket: sorry for spamming you with review requests, @albertvillanova ! :hugs:
true
1,219,151,100
https://api.github.com/repos/huggingface/datasets/issues/4252
https://github.com/huggingface/datasets/pull/4252
4,252
Creating metric card for MAE
closed
1
2022-04-28T19:04:33
2022-04-29T16:59:11
2022-04-29T16:52:30
sashavor
[]
Initial proposal for MAE metric card
true
1,219,116,354
https://api.github.com/repos/huggingface/datasets/issues/4251
https://github.com/huggingface/datasets/pull/4251
4,251
Metric card for the XTREME-S dataset
closed
1
2022-04-28T18:32:19
2022-04-29T16:46:11
2022-04-29T16:38:46
sashavor
[]
Proposing a metric card for the XTREME-S dataset :hugs:
true
1,219,093,830
https://api.github.com/repos/huggingface/datasets/issues/4250
https://github.com/huggingface/datasets/pull/4250
4,250
Bump PyArrow Version to 6
closed
4
2022-04-28T18:10:50
2022-05-04T09:36:52
2022-05-04T09:29:46
dnaveenr
[]
Fixes #4152 This PR updates the PyArrow version to 6 in setup.py, CI job files .circleci/config.yaml and .github/workflows/benchmarks.yaml files. This will fix ArrayND error which exists in pyarrow 5.
true
1,218,524,424
https://api.github.com/repos/huggingface/datasets/issues/4249
https://github.com/huggingface/datasets/pull/4249
4,249
Support streaming XGLUE dataset
closed
1
2022-04-28T10:27:23
2022-05-06T08:38:21
2022-04-28T16:08:03
albertvillanova
[]
Support streaming XGLUE dataset. Fix #4247. CC: @severo
true
1,218,460,444
https://api.github.com/repos/huggingface/datasets/issues/4248
https://github.com/huggingface/datasets/issues/4248
4,248
conll2003 dataset loads original data.
closed
1
2022-04-28T09:33:31
2022-07-18T07:15:48
2022-07-18T07:15:48
sue991
[ "bug" ]
## Describe the bug I load `conll2003` dataset to use refined data like [this](https://huggingface.co/datasets/conll2003/viewer/conll2003/train) preview, but it is original data that contains `'-DOCSTART- -X- -X- O'` text. Is this a bug or should I use another dataset_name like `lhoestq/conll2003` ? ## Steps to reproduce the bug ```python import datasets from datasets import load_dataset dataset = load_dataset("conll2003") ``` ## Expected results { "chunk_tags": [11, 12, 12, 21, 13, 11, 11, 21, 13, 11, 12, 13, 11, 21, 22, 11, 12, 17, 11, 21, 17, 11, 12, 12, 21, 22, 22, 13, 11, 0], "id": "0", "ner_tags": [0, 3, 4, 0, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "pos_tags": [12, 22, 22, 38, 15, 22, 28, 38, 15, 16, 21, 35, 24, 35, 37, 16, 21, 15, 24, 41, 15, 16, 21, 21, 20, 37, 40, 35, 21, 7], "tokens": ["The", "European", "Commission", "said", "on", "Thursday", "it", "disagreed", "with", "German", "advice", "to", "consumers", "to", "shun", "British", "lamb", "until", "scientists", "determine", "whether", "mad", "cow", "disease", "can", "be", "transmitted", "to", "sheep", "."] } ## Actual results ```python print(dataset) DatasetDict({ train: Dataset({ features: ['text'], num_rows: 219554 }) test: Dataset({ features: ['text'], num_rows: 50350 }) validation: Dataset({ features: ['text'], num_rows: 55044 }) }) ``` ```python for i in range(20): print(dataset['train'][i]) {'text': '-DOCSTART- -X- -X- O'} {'text': ''} {'text': 'EU NNP B-NP B-ORG'} {'text': 'rejects VBZ B-VP O'} {'text': 'German JJ B-NP B-MISC'} {'text': 'call NN I-NP O'} {'text': 'to TO B-VP O'} {'text': 'boycott VB I-VP O'} {'text': 'British JJ B-NP B-MISC'} {'text': 'lamb NN I-NP O'} {'text': '. . O O'} {'text': ''} {'text': 'Peter NNP B-NP B-PER'} {'text': 'Blackburn NNP I-NP I-PER'} {'text': ''} {'text': 'BRUSSELS NNP B-NP B-LOC'} {'text': '1996-08-22 CD I-NP O'} {'text': ''} {'text': 'The DT B-NP O'} {'text': 'European NNP I-NP B-ORG'} ```
false
1,218,320,882
https://api.github.com/repos/huggingface/datasets/issues/4247
https://github.com/huggingface/datasets/issues/4247
4,247
The data preview of XGLUE
closed
3
2022-04-28T07:30:50
2022-04-29T08:23:28
2022-04-28T16:08:03
czq1999
[]
It seems that something is wrong with the data preview of XGLUE.
false
1,218,320,293
https://api.github.com/repos/huggingface/datasets/issues/4246
https://github.com/huggingface/datasets/pull/4246
4,246
Support to load dataset with TSV files by passing only dataset name
closed
1
2022-04-28T07:30:15
2022-05-06T08:38:28
2022-05-06T08:14:07
albertvillanova
[]
This PR implements support for loading a dataset (w/o script) containing TSV files by passing only the dataset name (no need to pass `sep='\t'`): ```python ds = load_dataset("dataset/name") ``` The refactoring allows for future builder kwargs customizations based on file extension. Related to #4238.
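A minimal usage sketch of the change (the dataset name below is a placeholder, not a real repo):

```python
from datasets import load_dataset

# Previously, loading TSV data files required spelling out the separator:
ds = load_dataset("csv", data_files="data.tsv", sep="\t")

# With extension-based builder kwargs, a repo containing only TSV files
# can be loaded by name alone:
ds = load_dataset("username/dataset_with_tsv_files")
```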
true
1,217,959,400
https://api.github.com/repos/huggingface/datasets/issues/4245
https://github.com/huggingface/datasets/pull/4245
4,245
Add code examples for DatasetDict
closed
1
2022-04-27T22:52:22
2022-04-29T18:19:34
2022-04-29T18:13:03
stevhliu
[ "documentation" ]
This PR adds code examples for `DatasetDict` in the API reference :)
true
1,217,732,221
https://api.github.com/repos/huggingface/datasets/issues/4244
https://github.com/huggingface/datasets/pull/4244
4,244
task id update
closed
2
2022-04-27T18:28:14
2022-05-04T10:43:53
2022-05-04T10:36:37
nazneenrajani
[]
Changed multi-input text classification to be a task id instead of a category
true
1,217,689,909
https://api.github.com/repos/huggingface/datasets/issues/4243
https://github.com/huggingface/datasets/pull/4243
4,243
WIP: Initial shades loading script and readme
closed
1
2022-04-27T17:45:43
2022-10-03T09:36:35
2022-10-03T09:36:35
shayne-longpre
[ "dataset contribution" ]
null
true
1,217,665,960
https://api.github.com/repos/huggingface/datasets/issues/4242
https://github.com/huggingface/datasets/pull/4242
4,242
Update auth when mirroring datasets on the hub
closed
1
2022-04-27T17:22:31
2022-04-27T17:37:04
2022-04-27T17:30:42
lhoestq
[]
We don't need to use extraHeaders for rate limits anymore. Anyway, extraHeaders was not working with Git LFS because it was passing the wrong auth to S3.
true
1,217,423,686
https://api.github.com/repos/huggingface/datasets/issues/4241
https://github.com/huggingface/datasets/issues/4241
4,241
NonMatchingChecksumError when attempting to download GLUE
closed
2
2022-04-27T14:14:21
2022-04-28T07:45:27
2022-04-28T07:45:27
drussellmrichie
[ "bug" ]
## Describe the bug I am trying to download the GLUE dataset from the NLP module but get an error (see below). ## Steps to reproduce the bug ```python import nlp nlp.__version__ # '0.2.0' nlp.load_dataset('glue', name="rte", download_mode="force_redownload") ``` ## Expected results I expect the dataset to download without an error. ## Actual results ``` INFO:nlp.load:Checking /home/richier/.cache/huggingface/datasets/5fe6ab0df8a32a3371b2e6a969d31d855a19563724fb0d0f163748c270c0ac60.2ea96febf19981fae5f13f0a43d4e2aa58bc619bc23acf06de66675f425a5538.py for additional imports. INFO:nlp.load:Found main folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/glue/glue.py at /home/richier/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/datasets/glue INFO:nlp.load:Found specific version folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/glue/glue.py at /home/richier/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/datasets/glue/637080968c182118f006d3ea39dd9937940e81cfffc8d79836eaae8bba307fc4 INFO:nlp.load:Found script file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/glue/glue.py to /home/richier/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/datasets/glue/637080968c182118f006d3ea39dd9937940e81cfffc8d79836eaae8bba307fc4/glue.py INFO:nlp.load:Found dataset infos file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/glue/dataset_infos.json to /home/richier/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/datasets/glue/637080968c182118f006d3ea39dd9937940e81cfffc8d79836eaae8bba307fc4/dataset_infos.json INFO:nlp.load:Found metadata file for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/glue/glue.py at /home/richier/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/datasets/glue/637080968c182118f006d3ea39dd9937940e81cfffc8d79836eaae8bba307fc4/glue.json INFO:nlp.info:Loading Dataset Infos from /home/richier/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/datasets/glue/637080968c182118f006d3ea39dd9937940e81cfffc8d79836eaae8bba307fc4 INFO:nlp.builder:Generating dataset glue (/home/richier/.cache/huggingface/datasets/glue/rte/1.0.0) INFO:nlp.builder:Dataset not on Hf google storage. Downloading and preparing it from source INFO:nlp.utils.file_utils:Couldn't get ETag version for url https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FRTE.zip?alt=media&token=5efa7e85-a0bb-4f19-8ea2-9e1840f077fb INFO:nlp.utils.file_utils:https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FRTE.zip?alt=media&token=5efa7e85-a0bb-4f19-8ea2-9e1840f077fb not found in cache or force_download set to True, downloading to /home/richier/.cache/huggingface/datasets/downloads/tmpldt3n805 Downloading and preparing dataset glue/rte (download: 680.81 KiB, generated: 1.83 MiB, total: 2.49 MiB) to /home/richier/.cache/huggingface/datasets/glue/rte/1.0.0... 
Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 73.0/73.0 [00:00<00:00, 73.9kB/s] INFO:nlp.utils.file_utils:storing https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FRTE.zip?alt=media&token=5efa7e85-a0bb-4f19-8ea2-9e1840f077fb in cache at /home/richier/.cache/huggingface/datasets/downloads/e8b62ee44e6f8b6aea761935928579ffe1aa55d161808c482e0725abbdcf9c64 INFO:nlp.utils.file_utils:creating metadata file for /home/richier/.cache/huggingface/datasets/downloads/e8b62ee44e6f8b6aea761935928579ffe1aa55d161808c482e0725abbdcf9c64 --------------------------------------------------------------------------- NonMatchingChecksumError Traceback (most recent call last) <ipython-input-7-669a8343dcc1> in <module> ----> 1 nlp.load_dataset('glue', name="rte", download_mode="force_redownload") ~/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 518 download_mode=download_mode, 519 ignore_verifications=ignore_verifications, --> 520 save_infos=save_infos, 521 ) 522 ~/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 418 verify_infos = not save_infos and not ignore_verifications 419 self._download_and_prepare( --> 420 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 421 ) 422 # Sync info ~/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 458 # Checksums verification 459 if verify_infos: --> 460 verify_checksums(self.info.download_checksums, dl_manager.get_recorded_sizes_checksums()) 461 for split_generator in split_generators: 462 if str(split_generator.split_info.name).lower() == "all": ~/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums) 34 bad_urls = [url for url in expected_checksums if expected_checksums[url] != recorded_checksums[url]] 35 if len(bad_urls) > 0: ---> 36 raise NonMatchingChecksumError(str(bad_urls)) 37 logger.info("All the checksums matched successfully.") 38 NonMatchingChecksumError: ['https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FRTE.zip?alt=media&token=5efa7e85-a0bb-4f19-8ea2-9e1840f077fb'] ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: Linux-4.18.0-348.20.1.el8_5.x86_64-x86_64-with-redhat-8.5-Ootpa - Python version: 3.6.13 - PyArrow version: 6.0.1 - Pandas version: 1.1.5
false
1,217,287,594
https://api.github.com/repos/huggingface/datasets/issues/4240
https://github.com/huggingface/datasets/pull/4240
4,240
Fix yield for crd3
closed
2
2022-04-27T12:31:36
2022-04-29T12:41:41
2022-04-29T12:41:41
shanyas10
[]
Modified the `_generate_examples` function to consider all the turns for a chunk id as a single example. Modified the features accordingly: ``` "turns": [ { "names": datasets.features.Sequence(datasets.Value("string")), "utterances": datasets.features.Sequence(datasets.Value("string")), "number": datasets.Value("int32"), } ], } ``` I wasn't able to run the `datasets-cli dummy_data datasets` command. Is there a workaround for this?
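For context, a rough sketch of the grouping logic described above (field names are assumptions, not the actual CRD3 schema):

```python
from collections import defaultdict

def group_turns_by_chunk(rows):
    """Yield one example per chunk id, collecting all of its turns."""
    chunks = defaultdict(list)
    for row in rows:
        chunks[row["chunk_id"]].append(row["turn"])
    for chunk_id, turns in chunks.items():
        yield chunk_id, {"chunk_id": chunk_id, "turns": turns}
```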
true
1,217,269,689
https://api.github.com/repos/huggingface/datasets/issues/4239
https://github.com/huggingface/datasets/pull/4239
4,239
Small fixes in ROC AUC docs
closed
1
2022-04-27T12:15:50
2022-05-02T13:28:57
2022-05-02T13:22:03
wschella
[]
The list of use cases did not render on GitHub because of the prepended spacing. Additionally, some typos were fixed.
true
1,217,168,123
https://api.github.com/repos/huggingface/datasets/issues/4238
https://github.com/huggingface/datasets/issues/4238
4,238
Dataset caching policy
closed
3
2022-04-27T10:42:11
2022-04-27T16:29:25
2022-04-27T16:28:50
loretoparisi
[ "bug" ]
## Describe the bug I cannot clean the cache of my dataset files, despite having updated the `csv` files on the repository [here](https://huggingface.co/datasets/loretoparisi/tatoeba-sentences). The original file had a line with bad characters, causing the following error: ``` [/usr/local/lib/python3.7/dist-packages/datasets/features/features.py](https://localhost:8080/#) in str2int(self, values) 852 if value not in self._str2int: 853 value = str(value).strip() --> 854 output.append(self._str2int[str(value)]) 855 else: 856 # No names provided, try to integerize KeyError: '\\N' ``` The file is now cleaned up, but I still get the error. This happens even if I inspect the local cached contents, and clean up the files locally: ```python from datasets import load_dataset_builder dataset_builder = load_dataset_builder("loretoparisi/tatoeba-sentences") print(dataset_builder.cache_dir) print(dataset_builder.info.features) print(dataset_builder.info.splits) ``` ``` Using custom data configuration loretoparisi--tatoeba-sentences-e59b8ad92f1bb8dd /root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-e59b8ad92f1bb8dd/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519 None None ``` and removing files located at `/root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-*`. Is there any remote file caching policy in place? If so, is it possible to programmatically disable it? Currently it seems that the file `test.csv` on the repo [here](https://huggingface.co/datasets/loretoparisi/tatoeba-sentences/blob/main/test.csv) is cached remotely. In fact, if I download the file locally from the raw link, the file is up-to-date; but if I use it within `datasets` as shown above, it always gives me the first revision of the file, not the last. Thank you.
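For reference, a sketch of one possible workaround to force re-downloading instead of reusing the cached copy (it may not address the remote caching question itself):

```python
from datasets import load_dataset

# Force a fresh download instead of reusing the local cache.
sentences = load_dataset(
    "loretoparisi/tatoeba-sentences",
    data_files={"train": "train.csv", "test": "test.csv"},
    delimiter="\t",
    column_names=["label", "text"],
    download_mode="force_redownload",
)
```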
## Steps to reproduce the bug ```python from datasets import load_dataset,Features,Value,ClassLabel class_names = ["cmn","deu","rus","fra","eng","jpn","spa","ita","kor","vie","nld","epo","por","tur","heb","hun","ell","ind","ara","arz","fin","bul","yue","swe","ukr","bel","que","ces","swh","nno","wuu","nob","zsm","est","kat","pol","lat","urd","sqi","isl","fry","afr","ron","fao","san","bre","tat","yid","uig","uzb","srp","qya","dan","pes","slk","eus","cycl","acm","tgl","lvs","kaz","hye","hin","lit","ben","cat","bos","hrv","tha","orv","cha","mon","lzh","scn","gle","mkd","slv","frm","glg","vol","ain","jbo","tok","ina","nds","mal","tlh","roh","ltz","oss","ido","gla","mlt","sco","ast","jav","oci","ile","ota","xal","tel","sjn","nov","khm","tpi","ang","aze","tgk","tuk","chv","hsb","dsb","bod","sme","cym","mri","ksh","kmr","ewe","kab","ber","tpw","udm","lld","pms","lad","grn","mlg","xho","pnb","grc","hat","lao","npi","cor","nah","avk","mar","guj","pan","kir","myv","prg","sux","crs","ckt","bak","zlm","hil","cbk","chr","nav","lkt","enm","arq","lin","abk","pcd","rom","gsw","tam","zul","awa","wln","amh","bar","hbo","mhr","bho","mrj","ckb","osx","pfl","mgm","sna","mah","hau","kan","nog","sin","glv","dng","kal","liv","vro","apc","jdt","fur","che","haw","yor","crh","pdc","ppl","kin","shs","mnw","tet","sah","kum","ngt","nya","pus","hif","mya","moh","wol","tir","ton","lzz","oar","lug","brx","non","mww","hak","nlv","ngu","bua","aym","vec","ibo","tkl","bam","kha","ceb","lou","fuc","smo","gag","lfn","arg","umb","tyv","kjh","oji","cyo","urh","kzj","pam","srd","lmo","swg","mdf","gil","snd","tso","sot","zza","tsn","pau","som","egl","ady","asm","ori","dtp","cho","max","kam","niu","sag","ilo","kaa","fuv","nch","hoc","iba","gbm","sun","war","mvv","pap","ary","kxi","csb","pag","cos","rif","kek","krc","aii","ban","ssw","tvl","mfe","tah","bvy","bcl","hnj","nau","nst","afb","quc","min","tmw","mad","bjn","mai","cjy","got","hsn","gan","tzl","dws","ldn","afh","sgs","krl","vep","rue","tly","mic","ext","izh","sma","jam","cmo","mwl","kpv","koi","bis","ike","run","evn","ryu","mnc","aoz","otk","kas","aln","akl","yua","shy","fkv","gos","fij","thv","zgh","gcf","cay","xmf","tig","div","lij","rap","hrx","cpi","tts","gaa","tmr","iii","ltg","bzt","syc","emx","gom","chg","osp","stq","frr","fro","nys","toi","new","phn","jpa","rel","drt","chn","pli","laa","bal","hdn","hax","mik","ajp","xqa","pal","crk","mni","lut","ayl","ood","sdh","ofs","nus","kiu","diq","qxq","alt","bfz","klj","mus","srn","guc","lim","zea","shi","mnr","bom","sat","szl"] features = Features({ 'label': ClassLabel(names=class_names), 'text': Value('string')}) num_labels = features['label'].num_classes data_files = { "train": "train.csv", "test": "test.csv" } sentences = load_dataset( "loretoparisi/tatoeba-sentences", data_files=data_files, delimiter='\t', column_names=['label', 'text'], ) # You can make this part faster with num_proc=<some int> sentences = sentences.map(lambda ex: {"label" : features["label"].str2int(ex["label"]) if ex["label"] is not None else None}, features=features) sentences = sentences.shuffle() ``` ## Expected results Properly tokenize dataset file `test.csv` without issues. ## Actual results Specify the actual results or traceback. 
``` Downloading data files: 100% 2/2 [00:16<00:00, 7.34s/it] Downloading data: 100% 391M/391M [00:12<00:00, 36.6MB/s] Downloading data: 100% 92.4M/92.4M [00:02<00:00, 40.0MB/s] Extracting data files: 100% 2/2 [00:00<00:00, 47.66it/s] Dataset csv downloaded and prepared to /root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-efeff8965c730a2c/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519. Subsequent calls will reuse this data. 100% 2/2 [00:00<00:00, 25.94it/s] 11% 942339/8256449 [01:55<13:11, 9245.85ex/s] --------------------------------------------------------------------------- KeyError Traceback (most recent call last) [<ipython-input-3-6a9867fad8d6>](https://localhost:8080/#) in <module>() 12 ) 13 # You can make this part faster with num_proc=<some int> ---> 14 sentences = sentences.map(lambda ex: {"label" : features["label"].str2int(ex["label"]) if ex["label"] is not None else None}, features=features) 15 sentences = sentences.shuffle() 10 frames [/usr/local/lib/python3.7/dist-packages/datasets/features/features.py](https://localhost:8080/#) in str2int(self, values) 852 if value not in self._str2int: 853 value = str(value).strip() --> 854 output.append(self._str2int[str(value)]) 855 else: 856 # No names provided, try to integerize KeyError: '\\N' ``` ## Environment info ``` - `datasets` version: 2.1.0 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 6.0.1 - Pandas version: 1.3.5 - ``` ``` - `transformers` version: 4.18.0 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - Huggingface_hub version: 0.5.1 - PyTorch version (GPU?): 1.11.0+cu113 (True) - Tensorflow version (GPU?): 2.8.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> - ```
false
1,217,121,044
https://api.github.com/repos/huggingface/datasets/issues/4237
https://github.com/huggingface/datasets/issues/4237
4,237
Common Voice 8 doesn't show datasets viewer
closed
9
2022-04-27T10:05:20
2022-05-10T12:17:05
2022-05-10T12:17:04
patrickvonplaten
[ "dataset-viewer" ]
https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0
false
1,217,115,691
https://api.github.com/repos/huggingface/datasets/issues/4236
https://github.com/huggingface/datasets/pull/4236
4,236
Replace data URL in big_patent dataset and support streaming
closed
5
2022-04-27T10:01:13
2022-06-10T08:10:55
2022-05-02T18:21:15
albertvillanova
[]
This PR replaces the Google Drive URL with our Hub one, now that the data owners have agreed to host their data on the Hub. Moreover, this PR makes the dataset streamable. Fix #4217.
true
1,216,952,640
https://api.github.com/repos/huggingface/datasets/issues/4235
https://github.com/huggingface/datasets/issues/4235
4,235
How to load VERY LARGE dataset?
closed
1
2022-04-27T07:50:13
2023-07-25T15:07:57
2023-07-25T15:07:57
CaoYiqingT
[ "bug" ]
### System Info ```shell I am using the Transformers Trainer and ran into this issue. The Trainer expects a torch.utils.data.Dataset as input, which loads the whole dataset into memory at once. Therefore, when the dataset is too large to load, there's nothing I can do except use an IterableDataset, which loads samples of data separately and results in low efficiency. I wonder if there are any tricks like sharding in the huggingface Trainer. Looking forward to your reply. ``` ### Who can help? Trainer: @sgugger ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction None ### Expected behavior ```shell I wonder if there are any tricks like fairseq's sharding of very large datasets (https://fairseq.readthedocs.io/en/latest/getting_started.html). Thanks a lot! ```
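For illustration, a sketch of the streaming approach touched on above (file names are placeholders); it avoids loading everything into memory, at the cost of random access:

```python
from datasets import load_dataset

# Stream the data as an IterableDataset so it is never fully materialized in memory.
ds = load_dataset("json", data_files={"train": "train-*.jsonl"}, streaming=True)
for example in ds["train"].take(3):   # inspect a few samples lazily
    print(example)
```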
false
1,216,818,846
https://api.github.com/repos/huggingface/datasets/issues/4234
https://github.com/huggingface/datasets/pull/4234
4,234
Autoeval config
closed
15
2022-04-27T05:32:10
2022-05-06T13:20:31
2022-05-05T18:20:58
nazneenrajani
[]
Added autoeval config to imdb as a pilot
true
1,216,665,044
https://api.github.com/repos/huggingface/datasets/issues/4233
https://github.com/huggingface/datasets/pull/4233
4,233
Autoeval
closed
1
2022-04-27T01:32:09
2022-04-27T05:29:30
2022-04-27T01:32:23
nazneenrajani
[]
null
true
1,216,659,444
https://api.github.com/repos/huggingface/datasets/issues/4232
https://github.com/huggingface/datasets/pull/4232
4,232
adding new tag to tasks.json and modified for existing datasets
closed
2
2022-04-27T01:21:09
2022-05-03T14:23:56
2022-05-03T14:16:39
nazneenrajani
[]
null
true
1,216,651,960
https://api.github.com/repos/huggingface/datasets/issues/4231
https://github.com/huggingface/datasets/pull/4231
4,231
Fix invalid url to CC-Aligned dataset
closed
1
2022-04-27T01:07:01
2022-05-16T17:01:13
2022-05-16T16:53:12
juntang-zhuang
[]
The CC-Aligned dataset URL has changed to https://data.statmt.org/cc-aligned/; the old address http://www.statmt.org/cc-aligned/ is no longer valid.
true
1,216,643,661
https://api.github.com/repos/huggingface/datasets/issues/4230
https://github.com/huggingface/datasets/issues/4230
4,230
Why the `conll2003` dataset on huggingface only contains the `en` subset? Where is the German data?
closed
3
2022-04-27T00:53:52
2023-07-25T15:10:15
2023-07-25T15:10:15
beyondguo
[ "enhancement" ]
![image](https://user-images.githubusercontent.com/37113676/165416606-96b5db18-b16c-4b6b-928c-de8620fd943e.png) But on huggingface datasets: ![image](https://user-images.githubusercontent.com/37113676/165416649-8fd77980-ca0d-43f0-935e-f398ba8323a4.png) Where is the German data?
false
1,216,638,968
https://api.github.com/repos/huggingface/datasets/issues/4229
https://github.com/huggingface/datasets/pull/4229
4,229
new task tag
closed
0
2022-04-27T00:47:08
2022-04-27T00:48:28
2022-04-27T00:48:17
nazneenrajani
[]
multi-input-text-classification tag for classification datasets that take more than one input
true
1,216,523,043
https://api.github.com/repos/huggingface/datasets/issues/4228
https://github.com/huggingface/datasets/pull/4228
4,228
new task tag
closed
0
2022-04-26T22:00:33
2022-04-27T00:48:31
2022-04-27T00:46:31
nazneenrajani
[]
multi-input-text-classification tag for classification datasets that take more than one input
true
1,216,455,316
https://api.github.com/repos/huggingface/datasets/issues/4227
https://github.com/huggingface/datasets/pull/4227
4,227
Add f1 metric card, update docstring in py file
closed
1
2022-04-26T20:41:03
2022-05-03T12:50:23
2022-05-03T12:43:33
emibaylor
[]
null
true
1,216,331,073
https://api.github.com/repos/huggingface/datasets/issues/4226
https://github.com/huggingface/datasets/pull/4226
4,226
Add pearsonr mc, update functionality to match the original docs
closed
2
2022-04-26T18:30:46
2022-05-03T17:09:24
2022-05-03T17:02:28
emibaylor
[]
- adds pearsonr metric card - adds ability to return p-value - p-value was mentioned in the original docs as a return value, but there was no option to return it. I updated the _compute function slightly to have an option to return the p-value.
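A rough sketch of the added behaviour (the parameter name `return_pvalue` and output keys are assumptions based on this description, not necessarily the exact metric API):

```python
from scipy import stats

def compute_pearsonr(predictions, references, return_pvalue=False):
    # scipy returns both the correlation coefficient and the p-value
    r, p_value = stats.pearsonr(references, predictions)
    if return_pvalue:
        return {"pearsonr": r, "p-value": p_value}
    return {"pearsonr": r}

print(compute_pearsonr([1.1, 2.0, 2.9, 4.2], [1, 2, 3, 4], return_pvalue=True))
```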
true
1,216,213,464
https://api.github.com/repos/huggingface/datasets/issues/4225
https://github.com/huggingface/datasets/pull/4225
4,225
autoeval config
closed
0
2022-04-26T16:38:34
2022-04-27T00:48:31
2022-04-26T22:00:26
nazneenrajani
[]
add train eval index for autoeval
true
1,216,209,667
https://api.github.com/repos/huggingface/datasets/issues/4224
https://github.com/huggingface/datasets/pull/4224
4,224
autoeval config
closed
0
2022-04-26T16:35:19
2022-04-26T16:36:45
2022-04-26T16:36:45
nazneenrajani
[]
add train eval index for autoeval
true
1,216,107,082
https://api.github.com/repos/huggingface/datasets/issues/4223
https://github.com/huggingface/datasets/pull/4223
4,223
Add Accuracy Metric Card
closed
1
2022-04-26T15:10:46
2022-05-03T14:27:45
2022-05-03T14:20:47
emibaylor
[]
- adds accuracy metric card - updates docstring in accuracy.py - adds .json file with metric card and docstring information
true
1,216,056,439
https://api.github.com/repos/huggingface/datasets/issues/4222
https://github.com/huggingface/datasets/pull/4222
4,222
Fix description links in dataset cards
closed
2
2022-04-26T14:36:25
2022-05-06T08:38:38
2022-04-26T16:52:29
albertvillanova
[]
I noticed many links were not properly displayed (only text, no link) on the Hub because of wrong syntax, e.g.: https://huggingface.co/datasets/big_patent This PR fixes all description links in dataset cards.
true
1,215,911,182
https://api.github.com/repos/huggingface/datasets/issues/4221
https://github.com/huggingface/datasets/issues/4221
4,221
Dictionary Feature
closed
2
2022-04-26T12:50:18
2022-04-29T14:52:19
2022-04-28T17:04:58
jordiae
[ "question" ]
Hi, I'm trying to create the loading script for a dataset in which one feature is a list of dictionaries, which afaik doesn't fit the values and structures supported by Value and Sequence very well. Is there any suggested workaround, or am I missing something? Thank you in advance.
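For illustration, a minimal sketch of one way such a feature can be expressed (field names are made up):

```python
from datasets import Dataset, Features, Value

features = Features(
    {
        "id": Value("string"),
        # a list of dictionaries, expressed as a list of a dict of features
        "annotations": [{"label": Value("string"), "score": Value("float32")}],
    }
)
ds = Dataset.from_dict(
    {"id": ["a"], "annotations": [[{"label": "x", "score": 0.5}]]},
    features=features,
)
print(ds.features)
```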
false
1,215,225,802
https://api.github.com/repos/huggingface/datasets/issues/4220
https://github.com/huggingface/datasets/pull/4220
4,220
Altered faiss installation comment
closed
3
2022-04-26T01:20:43
2022-05-09T17:29:34
2022-05-09T17:22:09
vishalsrao
[]
null
true
1,214,934,025
https://api.github.com/repos/huggingface/datasets/issues/4219
https://github.com/huggingface/datasets/pull/4219
4,219
Add F1 Metric Card
closed
1
2022-04-25T19:14:56
2022-04-26T20:44:18
2022-04-26T20:37:46
emibaylor
[]
null
true
1,214,748,226
https://api.github.com/repos/huggingface/datasets/issues/4218
https://github.com/huggingface/datasets/pull/4218
4,218
Make code for image downloading from image urls cacheable
closed
1
2022-04-25T16:17:59
2022-04-26T17:00:24
2022-04-26T13:38:26
mariosasko
[]
Fix #4199
true
1,214,688,141
https://api.github.com/repos/huggingface/datasets/issues/4217
https://github.com/huggingface/datasets/issues/4217
4,217
Big_Patent dataset broken
closed
3
2022-04-25T15:31:45
2022-05-26T06:29:43
2022-05-02T18:21:15
Matthew-Larsen
[ "hosted-on-google-drive" ]
## Dataset viewer issue for '*big_patent*' **Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/big_patent/viewer/all/train)* *Unable to view because it says FileNotFound; I also cannot download it through the Python API.* Am I the one who added this dataset? No
false
1,214,614,029
https://api.github.com/repos/huggingface/datasets/issues/4216
https://github.com/huggingface/datasets/pull/4216
4,216
Avoid recursion error in map if example is returned as dict value
closed
1
2022-04-25T14:40:32
2022-05-04T17:20:06
2022-05-04T17:12:52
mariosasko
[]
I noticed this bug while answering [this question](https://discuss.huggingface.co/t/correct-way-to-create-a-dataset-from-a-csv-file/15686/11?u=mariosasko). This code replicates the bug: ```python from datasets import Dataset dset = Dataset.from_dict({"en": ["aa", "bb"], "fr": ["cc", "dd"]}) dset.map(lambda ex: {"translation": ex}) ``` and this is the fix for it (before this PR): ```python from datasets import Dataset dset = Dataset.from_dict({"en": ["aa", "bb"], "fr": ["cc", "dd"]}) dset.map(lambda ex: {"translation": dict(ex)}) ``` Internally, this can be fixed by merging two dicts via dict unpacking (instead of `dict.update`) in `Dataset.map`, which avoids creating recursive dictionaries. P.S. `{**a, **b}` is slightly more performant than `a.update(b)` in my benchmarks.
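A toy illustration of the difference (not the actual `Dataset.map` internals):

```python
example = {"en": "aa", "fr": "cc"}
processed = {"translation": example}

merged = {**example, **processed}          # new dict: no self-reference
print(merged["translation"] is merged)     # False

example.update(processed)                  # mutates example in place
print(example["translation"] is example)   # True -> recursive dictionary
```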
true
1,214,579,162
https://api.github.com/repos/huggingface/datasets/issues/4215
https://github.com/huggingface/datasets/pull/4215
4,215
Add `drop_last_batch` to `IterableDataset.map`
closed
1
2022-04-25T14:15:19
2022-05-03T15:56:07
2022-05-03T15:48:54
mariosasko
[]
Addresses this comment: https://github.com/huggingface/datasets/pull/3801#pullrequestreview-901736921
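A usage sketch (the dataset name is just for illustration; behaviour inferred from the PR title):

```python
from datasets import load_dataset

# With batched mapping on an iterable (streaming) dataset, drop_last_batch=True
# skips the final batch when it is smaller than batch_size, keeping shapes uniform.
ds = load_dataset("imdb", split="train", streaming=True)
ds = ds.map(lambda batch: batch, batched=True, batch_size=256, drop_last_batch=True)
```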
true
1,214,572,430
https://api.github.com/repos/huggingface/datasets/issues/4214
https://github.com/huggingface/datasets/pull/4214
4,214
Skip checksum computation in Imagefolder by default
closed
1
2022-04-25T14:10:41
2022-05-03T15:28:32
2022-05-03T15:21:29
mariosasko
[]
Avoids having to set `ignore_verifications=True` in `load_dataset("imagefolder", ...)` to skip checksum verification and speed up loading. The user can still pass `DownloadConfig(record_checksums=True)` to not skip this part.
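A usage sketch of the change described above (the data_dir path is a placeholder):

```python
from datasets import load_dataset, DownloadConfig

# Checksum computation is now skipped by default, so ignore_verifications=True
# is no longer needed just to speed up loading:
ds = load_dataset("imagefolder", data_dir="path/to/images")

# Checksums can still be recorded explicitly when wanted:
ds = load_dataset(
    "imagefolder",
    data_dir="path/to/images",
    download_config=DownloadConfig(record_checksums=True),
)
```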
true