url (string, 58-61) | repository_url (string, 1 class) | labels_url (string, 72-75) | comments_url (string, 67-70) | events_url (string, 65-68) | html_url (string, 46-51) | id (int64, 599M-1.83B) | node_id (string, 18-32) | number (int64, 1-6.09k) | title (string, 1-290) | labels (list) | state (string, 2 classes) | locked (bool, 1 class) | milestone (dict) | comments (int64, 0-54) | created_at (string, 20) | updated_at (string, 20) | closed_at (string, 20, nullable) | active_lock_reason (null) | body (string, 0-228k, nullable) | reactions (dict) | timeline_url (string, 67-70) | performed_via_github_app (null) | state_reason (string, 3 classes) | draft (bool, 2 classes) | pull_request (dict) | is_pull_request (bool, 2 classes) | comments_text (sequence) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/2178 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2178/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2178/comments | https://api.github.com/repos/huggingface/datasets/issues/2178/events | https://github.com/huggingface/datasets/pull/2178 | 852,215,058 | MDExOlB1bGxSZXF1ZXN0NjEwNTA1Mjg1 | 2,178 | Fix cast memory usage by using map on subtables | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"closed_at": "2021-04-20T16:50:46Z",
"closed_issues": 4,
"created_at": "2021-04-09T13:07:51Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-04-16T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/1",
"id": 6644198,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/1/labels",
"node_id": "MDk6TWlsZXN0b25lNjY0NDE5OA==",
"number": 1,
"open_issues": 0,
"state": "closed",
"title": "1.6",
"updated_at": "2021-04-20T16:50:46Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/1"
} | 3 | 2021-04-07T09:30:50Z | 2021-04-20T14:20:44Z | 2021-04-13T09:28:16Z | null | The `cast` operation on a pyarrow Table may create new arrays in memory.
This is an issue since users expect memory mapped datasets to not fill up the RAM.
To fix that I used `map` to write a new arrow file on disk when cast is used.
To make things more convenient I introduced the `arrow` formatting of a dataset, to make it return pyarrow tables instead of python dicts. This way one can use pyarrow transforms directly when using `map`.
edit: we'll use the same mechanism for `filter` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2178/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2178/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2178.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2178",
"merged_at": "2021-04-13T09:28:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2178.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2178"
} | true | [
"I addressed your comments about the docstrings and the output validation :)",
"I updated the bleurt mocking method and bleurt test is passing now.\r\nI also ran the slow tests and they are passing for bleurt.",
"Thanks @lhoestq and @albertvillanova !"
] |
https://api.github.com/repos/huggingface/datasets/issues/2922 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2922/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2922/comments | https://api.github.com/repos/huggingface/datasets/issues/2922/events | https://github.com/huggingface/datasets/pull/2922 | 997,332,662 | PR_kwDODunzps4ry6-s | 2,922 | Fix conversion of multidim arrays in list to arrow | [] | closed | false | null | 0 | 2021-09-15T17:21:36Z | 2021-09-15T17:22:52Z | 2021-09-15T17:21:45Z | null | Arrow only supports 1-dim arrays. Previously we were converting all the numpy arrays to python list before instantiating arrow arrays to workaround this limitation.
However in #2361 we started to keep numpy arrays in order to keep their dtypes.
It works when we pass any multi-dim numpy array (the conversion to arrow has been added on our side), but not for lists of multi-dim numpy arrays.
In this PR I added two strategies:
- one that takes a list of multi-dim numpy arrays and returns an arrow array in an optimized way (the more common case)
- one that takes a list of possibly very nested data (lists, dicts, tuples) containing multi-dim arrays. This one is less optimized since it converts all the multi-dim numpy arrays into lists of 1-d arrays for compatibility with arrow. This strategy is simpler than just trying to create the arrow array from a possibly very nested data structure, but in the future we can improve it if needed.
Fix https://github.com/huggingface/datasets/issues/2921 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2922/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2922/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2922.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2922",
"merged_at": "2021-09-15T17:21:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2922.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2922"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3199 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3199/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3199/comments | https://api.github.com/repos/huggingface/datasets/issues/3199/events | https://github.com/huggingface/datasets/pull/3199 | 1,042,860,935 | PR_kwDODunzps4uAVzQ | 3,199 | Bump huggingface_hub | [] | closed | false | null | 0 | 2021-11-02T21:29:10Z | 2021-11-14T01:48:11Z | 2021-11-02T21:41:40Z | null | huggingface_hub just released its first minor version, so we need to update the dependency
It was supposed to be part of 1.15.0 but I'm adding it for 1.15.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3199/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3199/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3199.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3199",
"merged_at": "2021-11-02T21:41:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3199.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3199"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/664 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/664/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/664/comments | https://api.github.com/repos/huggingface/datasets/issues/664/events | https://github.com/huggingface/datasets/issues/664 | 707,017,791 | MDU6SXNzdWU3MDcwMTc3OTE= | 664 | load_dataset from local squad.py, raise error: TypeError: 'NoneType' object is not callable | [] | closed | false | null | 4 | 2020-09-23T03:53:36Z | 2023-04-17T09:31:20Z | 2020-10-20T09:06:13Z | null |
version: 1.0.2
```
train_dataset = datasets.load_dataset('squad')
```
The above code works. However, when I download squad.py from your server and save it locally as `my_squad.py`, running the following raises an error:
```
train_dataset = datasets.load_dataset('./my_squad.py')
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-28-25a84b4d1581> in <module>
----> 1 train_dataset = nlp.load_dataset('./my_squad.py')
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)
602 hash=hash,
603 features=features,
--> 604 **config_kwargs,
605 )
606
TypeError: 'NoneType' object is not callable
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/664/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/664/timeline | null | completed | null | null | false | [
"Hi !\r\nThanks for reporting.\r\nIt looks like no object inherits from `datasets.GeneratorBasedBuilder` (or more generally from `datasets.DatasetBuilder`) in your script.\r\n\r\nCould you check that there exist at least one dataset builder class ?",
"Hi @xixiaoyao did you manage to fix your issue ?",
"No activity, closing",
"It happened when try to change the old project which use 'nlp' to new project which use 'datasets'. You should check you old 'my_squad.py' file, change the inherit class from `nlp.xxx` to `datasets.xxx`. Otherwise datasets - load.py - import_main_class() `if inspect.isclass(obj) and issubclass(obj, main_cls_type):` can not find the main_cls."
] |
https://api.github.com/repos/huggingface/datasets/issues/4310 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4310/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4310/comments | https://api.github.com/repos/huggingface/datasets/issues/4310/events | https://github.com/huggingface/datasets/issues/4310 | 1,231,319,815 | I_kwDODunzps5JZHMH | 4,310 | Loading dataset with streaming: '_io.BufferedReader' object has no attribute 'loc' | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 0 | 2022-05-10T15:12:53Z | 2022-05-11T16:46:31Z | 2022-05-11T16:46:31Z | null | ## Describe the bug
Loading a dataset with `load_dataset` and `streaming=True` returns `AttributeError: '_io.BufferedReader' object has no attribute 'loc'`. Note that loading with `streaming=False` works fine.
In the following steps we load parquet files, but the same happens with pickle files. The problem seems to come from the `fsspec` lib; I have also included the `s3fs` and `fsspec` versions in the environment info since I'm loading from an s3 bucket.
## Steps to reproduce the bug
```python
from datasets import load_dataset
# path is the path to parquet files
data_files = {"train": path + "meta_train.parquet.gzip", "test": path + "meta_test.parquet.gzip"}
dataset = load_dataset("parquet", data_files=data_files, streaming=True)
```
## Expected results
A dataset object `datasets.dataset_dict.DatasetDict`
## Actual results
```
AttributeError Traceback (most recent call last)
<command-562086> in <module>
11
12 data_files = {"train": path + "meta_train.parquet.gzip", "test": path + "meta_test.parquet.gzip"}
---> 13 dataset = load_dataset("parquet", data_files=data_files, streaming=True)
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1679 if streaming:
1680 extend_dataset_builder_for_streaming(builder_instance, use_auth_token=use_auth_token)
-> 1681 return builder_instance.as_streaming_dataset(
1682 split=split,
1683 use_auth_token=use_auth_token,
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/builder.py in as_streaming_dataset(self, split, base_path, use_auth_token)
904 )
905 self._check_manual_download(dl_manager)
--> 906 splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}
907 # By default, return all splits
908 if split is None:
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/packaged_modules/parquet/parquet.py in _split_generators(self, dl_manager)
30 if not self.config.data_files:
31 raise ValueError(f"At least one data file must be specified, but got data_files={self.config.data_files}")
---> 32 data_files = dl_manager.download_and_extract(self.config.data_files)
33 if isinstance(data_files, (str, list, tuple)):
34 files = data_files
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in download_and_extract(self, url_or_urls)
798
799 def download_and_extract(self, url_or_urls):
--> 800 return self.extract(self.download(url_or_urls))
801
802 def iter_archive(self, urlpath_or_buf: Union[str, io.BufferedReader]) -> Iterable[Tuple]:
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in extract(self, path_or_paths)
776
777 def extract(self, path_or_paths):
--> 778 urlpaths = map_nested(self._extract, path_or_paths, map_tuple=True)
779 return urlpaths
780
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types, disable_tqdm, desc)
312 num_proc = 1
313 if num_proc <= 1 or len(iterable) <= num_proc:
--> 314 mapped = [
315 _single_map_nested((function, obj, types, None, True, None))
316 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in <listcomp>(.0)
313 if num_proc <= 1 or len(iterable) <= num_proc:
314 mapped = [
--> 315 _single_map_nested((function, obj, types, None, True, None))
316 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)
317 ]
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in _single_map_nested(args)
267 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}
268 else:
--> 269 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]
270 if isinstance(data_struct, list):
271 return mapped
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in <listcomp>(.0)
267 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}
268 else:
--> 269 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]
270 if isinstance(data_struct, list):
271 return mapped
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in _single_map_nested(args)
249 # Singleton first to spare some computation
250 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):
--> 251 return function(data_struct)
252
253 # Reduce logging to keep things readable in multiprocessing with tqdm
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in _extract(self, urlpath)
781 def _extract(self, urlpath: str) -> str:
782 urlpath = str(urlpath)
--> 783 protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token)
784 if protocol is None:
785 # no extraction
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in _get_extraction_protocol(urlpath, use_auth_token)
371 urlpath, kwargs = urlpath, {}
372 with fsspec.open(urlpath, **kwargs) as f:
--> 373 return _get_extraction_protocol_with_magic_number(f)
374
375
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in _get_extraction_protocol_with_magic_number(f)
335 def _get_extraction_protocol_with_magic_number(f) -> Optional[str]:
336 """read the magic number from a file-like object and return the compression protocol"""
--> 337 prev_loc = f.loc
338 magic_number = f.read(MAGIC_NUMBER_MAX_LENGTH)
339 f.seek(prev_loc)
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/fsspec/implementations/local.py in __getattr__(self, item)
337
338 def __getattr__(self, item):
--> 339 return getattr(self.f, item)
340
341 def __enter__(self):
AttributeError: '_io.BufferedReader' object has no attribute 'loc'
```
## Environment info
- `datasets` version: 2.1.0
- Platform: Linux-5.4.0-1071-aws-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
- `fsspec` version: 2021.08.1
- `s3fs` version: 2021.08.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4310/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4310/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/4775 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4775/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4775/comments | https://api.github.com/repos/huggingface/datasets/issues/4775/events | https://github.com/huggingface/datasets/issues/4775 | 1,324,136,486 | I_kwDODunzps5O7Lgm | 4,775 | Streaming not supported in Theivaprakasham/wildreceipt | [
{
"color": "fef2c0",
"default": false,
"description": "",
"id": 3287858981,
"name": "streaming",
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming"
}
] | closed | false | null | 1 | 2022-08-01T09:46:17Z | 2022-08-01T10:30:29Z | 2022-08-01T10:30:29Z | null | ### Link
_No response_
### Description
_No response_
### Owner
_No response_ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4775/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4775/timeline | null | completed | null | null | false | [
"Thanks for reporting @NitishkKarra.\r\n\r\nThe root source of the issue is that streaming mode is not supported out-of-the-box for that dataset, because it contains a TAR file.\r\n\r\nWe have opened a discussion in the corresponding Hub dataset page, pointing out this issue: https://huggingface.co/datasets/Theivaprakasham/wildreceipt/discussions/1\r\n\r\nI'm closing this issue here, so this discussion is transferred there instead."
] |
https://api.github.com/repos/huggingface/datasets/issues/429 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/429/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/429/comments | https://api.github.com/repos/huggingface/datasets/issues/429/events | https://github.com/huggingface/datasets/pull/429 | 664,412,137 | MDExOlB1bGxSZXF1ZXN0NDU1NjU2MDk5 | 429 | mlsum | [] | closed | false | null | 6 | 2020-07-23T11:52:39Z | 2020-07-31T11:46:20Z | 2020-07-31T11:46:20Z | null | Hello,
The load_real_data tests fail: as there is no default language subset to download, they look for a file that does not exist. This bug does not happen when using the load_dataset function, as it asks you to specify a language if you do not, so I submit this PR anyway. The dataset is available at: https://gitlab.lip6.fr/scialom/mlsum_data | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/429/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/429/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/429.diff",
"html_url": "https://github.com/huggingface/datasets/pull/429",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/429.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/429"
} | true | [
"Thanks @RachelKer for this PR.\r\n\r\nI think the dummy_data structure does not also match. In the `_split_generator` you have something like `os.path.join(downloaded_files[\"validation\"], lang+'_val.jsonl')` but in you dummy_data you have `os.path.join(downloaded_files[\"validation\"], lang+\"_val.zip\", lang+'_val.jsonl')`. I think ` jsonl` files should be directly in the `dummy_data` folder without the sub-folder \r\n\r\n@lhoestq ",
"Hi @RachelKer :)\r\nThanks for adding MLSUM !\r\n\r\nTo fix the CI I think you just have to rebase from master",
"Great, I think it is working now. Thanks :)",
"It looks like your PR does tons of changes in other datasets. \r\nMaybe this is because of the merge from master ?",
"Hmm, I see, sorry I messed up somewhere. Maybe it's easier if we close the pull request and I do another one ?",
"Yea if it's easier for you feel free to re-open a PR"
] |
https://api.github.com/repos/huggingface/datasets/issues/3632 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3632/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3632/comments | https://api.github.com/repos/huggingface/datasets/issues/3632/events | https://github.com/huggingface/datasets/issues/3632 | 1,115,027,185 | I_kwDODunzps5Cdfbx | 3,632 | Adding CC-100: Monolingual Datasets from Web Crawl Data (Datasets links are invalid) | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2022-01-26T13:35:37Z | 2022-02-10T06:58:11Z | 2022-02-10T06:58:11Z | null | ## Describe the bug
The dataset links are no longer valid for CC-100. It seems that the website which was hosting these files is no longer accessible, and therefore this dataset has become unusable.
Check out the dataset [homepage](http://data.statmt.org/cc-100/) which isn't accessible.
Also, the URLs for the per-language dataset files aren't accessible: http://data.statmt.org/cc-100/<language code here>.txt.xz (language codes: am, sr, ka, etc.)
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("cc100", "ka")
```
It throws a 503 error.
## Expected results
It should successfully download and load the dataset, but it throws an exception because the dataset files are no longer accessible.
## Environment info
Run from google colab. Just installed the library using pip:
```!pip install -U datasets```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3632/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3632/timeline | null | completed | null | null | false | [
"Hi @AnzorGozalishvili,\r\n\r\nMaybe their site was temporarily down, but it seems to work fine now.\r\n\r\nCould you please try again and confirm if the problem persists? ",
"Hi @albertvillanova \r\nI checked and it works. \r\nIt seems that it was really temporarily down.\r\nThanks!"
] |
https://api.github.com/repos/huggingface/datasets/issues/2280 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2280/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2280/comments | https://api.github.com/repos/huggingface/datasets/issues/2280/events | https://github.com/huggingface/datasets/pull/2280 | 870,780,431 | MDExOlB1bGxSZXF1ZXN0NjI1OTE2Mzcy | 2,280 | Fixed typo seperate->separate | [] | closed | false | null | 2 | 2021-04-29T08:55:46Z | 2021-04-29T16:41:22Z | 2021-04-29T16:41:16Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2280/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2280/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2280.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2280",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2280.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2280"
} | true | [
"Hi ! Thanks for the fix :)\r\nThe CI fail isn't related to your PR. I opened a PR #2286 to fix the CI.\r\nWe'll wait for #2286 to be merged to master first if you don't mind",
"The PR has been merged ! Feel free to merge master into your branch to fix the CI"
] |
https://api.github.com/repos/huggingface/datasets/issues/5022 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5022/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5022/comments | https://api.github.com/repos/huggingface/datasets/issues/5022/events | https://github.com/huggingface/datasets/pull/5022 | 1,385,432,859 | PR_kwDODunzps4_kxYe | 5,022 | Fix languages of X-CSQA configs in xcsr dataset | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | 4 | 2022-09-26T05:13:39Z | 2022-09-26T12:27:20Z | 2022-09-26T10:57:30Z | null | Fix #5017.
CC: @yangxqiao, @yuchenlin | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5022/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5022/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5022.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5022",
"merged_at": "2022-09-26T10:57:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5022.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5022"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks @lhoestq, I had missed that... ",
"thx for the super fast work @albertvillanova ! any estimate for when the relevant release will happen?\r\n\r\nThanks again ",
"@thesofakillers after a recent change in our library (see #4059), now fixes in all datasets are immediately accessible. You can try it:\r\n```python\r\nfrench = datasets.load_dataset(\"xcsr\", \"X-CSQA-fr\")\r\n```\r\n\r\nPlease note there is an additional fix to that dataset in progress (to be merged today):\r\n- #5024"
] |
https://api.github.com/repos/huggingface/datasets/issues/2015 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2015/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2015/comments | https://api.github.com/repos/huggingface/datasets/issues/2015/events | https://github.com/huggingface/datasets/pull/2015 | 825,942,108 | MDExOlB1bGxSZXF1ZXN0NTg3OTg4NTQ0 | 2,015 | Fix ipython function creation in tests | [] | closed | false | null | 0 | 2021-03-09T13:36:59Z | 2021-03-09T14:06:04Z | 2021-03-09T14:06:03Z | null | The test at `tests/test_caching.py::RecurseDumpTest::test_dump_ipython_function` was failing in python 3.8 because the ipython function was not properly created.
Fix #2010 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2015/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2015/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2015.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2015",
"merged_at": "2021-03-09T14:06:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2015.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2015"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5283 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5283/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5283/comments | https://api.github.com/repos/huggingface/datasets/issues/5283/events | https://github.com/huggingface/datasets/pull/5283 | 1,460,291,003 | PR_kwDODunzps5De5M1 | 5,283 | Release: 2.6.2 | [] | closed | false | null | 1 | 2022-11-22T17:36:24Z | 2022-11-22T17:50:12Z | 2022-11-22T17:47:02Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5283/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5283/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5283.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5283",
"merged_at": "2022-11-22T17:47:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5283.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5283"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/5412 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5412/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5412/comments | https://api.github.com/repos/huggingface/datasets/issues/5412/events | https://github.com/huggingface/datasets/issues/5412 | 1,524,250,269 | I_kwDODunzps5a2jad | 5,412 | load_dataset() cannot find dataset_info.json with multiple training runs in parallel | [] | closed | false | null | 4 | 2023-01-08T00:44:32Z | 2023-01-19T20:28:43Z | 2023-01-19T20:28:43Z | null | ### Describe the bug
I have a custom local dataset in JSON form. I am trying to do multiple training runs in parallel. The first training run runs with no issue. However, when I start another run on another GPU, the following code throws this error.
If there is a workaround to ignore the cache I think that would solve my problem too.
I am using datasets version 2.8.0.
### Steps to reproduce the bug
1. Start a training run on GPU 0, loading the dataset from:
```
load_dataset(
"json",
data_files=tr_dataset_path,
split=f"train",
download_mode="force_redownload",
)
```
2. While GPU 0 is training, start an identical run on GPU 1. GPU 1 will produce the following error:
```
Traceback (most recent call last):
File "/local-scratch1/data/mt/code/qq/train.py", line 198, in <module>
main()
File "/home/username/.local/lib/python3.8/site-packages/click/core.py", line 1130, in __call__
return self.main(*args, **kwargs)
File "/home/username/.local/lib/python3.8/site-packages/click/core.py", line 1055, in main
rv = self.invoke(ctx)
File "/home/username/.local/lib/python3.8/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/username/.local/lib/python3.8/site-packages/click/core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "/local-scratch1/data/mt/code/qq/train.py", line 113, in main
load_dataset(
File "/home/username/miniconda3/envs/qq3/lib/python3.8/site-packages/datasets/load.py", line 1734, in load_dataset
builder_instance = load_dataset_builder(
File "/home/username/miniconda3/envs/qq3/lib/python3.8/site-packages/datasets/load.py", line 1518, in load_dataset_builder
builder_instance: DatasetBuilder = builder_cls(
File "/home/username/miniconda3/envs/qq3/lib/python3.8/site-packages/datasets/builder.py", line 366, in __init__
self.info = DatasetInfo.from_directory(self._cache_dir)
File "/home/username/miniconda3/envs/qq3/lib/python3.8/site-packages/datasets/info.py", line 313, in from_directory
with fs.open(path_join(dataset_info_dir, config.DATASET_INFO_FILENAME), "r", encoding="utf-8") as f:
File "/home/username/miniconda3/envs/qq3/lib/python3.8/site-packages/fsspec/spec.py", line 1094, in open
self.open(
File "/home/username/miniconda3/envs/qq3/lib/python3.8/site-packages/fsspec/spec.py", line 1106, in open
f = self._open(
File "/home/username/miniconda3/envs/qq3/lib/python3.8/site-packages/fsspec/implementations/local.py", line 175, in _open
return LocalFileOpener(path, mode, fs=self, **kwargs)
File "/home/username/miniconda3/envs/qq3/lib/python3.8/site-packages/fsspec/implementations/local.py", line 273, in __init__
self._open()
File "/home/username/miniconda3/envs/qq3/lib/python3.8/site-packages/fsspec/implementations/local.py", line 278, in _open
self.f = open(self.path, mode=self.mode)
FileNotFoundError: [Errno 2] No such file or directory: '/home/username/.cache/huggingface/datasets/json/default-43d06a4aedb25e6d/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51/dataset_info.json'
```
### Expected behavior
Expected behavior: 2nd GPU training run should run the same as 1st GPU training run.
### Environment info
Copy-and-paste the text below in your GitHub issue.
- `datasets` version: 2.8.0
- Platform: Linux-5.4.0-120-generic-x86_64-with-glibc2.10
- Python version: 3.8.15
- PyArrow version: 9.0.0
- Pandas version: 1.5.2 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5412/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5412/timeline | null | completed | null | null | false | [
"Hi ! It fails because the dataset is already being prepared by your first run. I'd encourage you to prepare your dataset before using it for multiple trainings.\r\n\r\nYou can also specify another cache directory by passing `cache_dir=` to `load_dataset()`.",
"Thank you! What do you mean by prepare it beforehand? I am unclear how to conduct dataset preparation outside of using the `load_dataset` function.",
"You can have a separate script that does load_dataset + map + save_to_disk to save your prepared dataset somewhere. Then in your training script you can reload the dataset with load_from_disk",
"Thank you! I believe I was running additional map steps after loading, resulting in the cache conflict. "
] |
https://api.github.com/repos/huggingface/datasets/issues/5199 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5199/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5199/comments | https://api.github.com/repos/huggingface/datasets/issues/5199/events | https://github.com/huggingface/datasets/pull/5199 | 1,434,818,836 | PR_kwDODunzps5CJSv1 | 5,199 | Deprecate dummy data generation command | [] | closed | false | null | 1 | 2022-11-03T15:05:54Z | 2022-11-04T14:01:50Z | 2022-11-04T13:59:47Z | null | Deprecate the `dummy_data` CLI command. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5199/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5199/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5199.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5199",
"merged_at": "2022-11-04T13:59:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5199.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5199"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/367 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/367/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/367/comments | https://api.github.com/repos/huggingface/datasets/issues/367/events | https://github.com/huggingface/datasets/pull/367 | 654,012,984 | MDExOlB1bGxSZXF1ZXN0NDQ2ODIxNTAz | 367 | Update Xtreme to add PAWS-X es | [] | closed | false | null | 0 | 2020-07-09T12:14:37Z | 2020-07-09T12:37:11Z | 2020-07-09T12:37:10Z | null | This PR adds the `PAWS-X.es` in the Xtreme dataset #362 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/367/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/367/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/367.diff",
"html_url": "https://github.com/huggingface/datasets/pull/367",
"merged_at": "2020-07-09T12:37:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/367.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/367"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/648 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/648/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/648/comments | https://api.github.com/repos/huggingface/datasets/issues/648/events | https://github.com/huggingface/datasets/issues/648 | 704,753,123 | MDU6SXNzdWU3MDQ3NTMxMjM= | 648 | offset overflow when multiprocessing batched map on large datasets. | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2020-09-19T02:15:11Z | 2020-09-19T16:47:07Z | 2020-09-19T16:46:31Z | null | It only happened when "multiprocessing" + "batched" + "large dataset" at the same time.
```
def bprocess(examples):
examples['len'] = []
for text in examples['text']:
examples['len'].append(len(text))
return examples
wiki.map(bprocess, batched=True, num_proc=8)
```
```
---------------------------------------------------------------------------
RemoteTraceback Traceback (most recent call last)
RemoteTraceback:
"""
Traceback (most recent call last):
File "/home/yisiang/miniconda3/envs/ml/lib/python3.7/multiprocessing/pool.py", line 121, in worker
result = (True, func(*args, **kwds))
File "/home/yisiang/datasets/src/datasets/arrow_dataset.py", line 153, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/yisiang/datasets/src/datasets/fingerprint.py", line 163, in wrapper
out = func(self, *args, **kwargs)
File "/home/yisiang/datasets/src/datasets/arrow_dataset.py", line 1486, in _map_single
batch = self[i : i + batch_size]
File "/home/yisiang/datasets/src/datasets/arrow_dataset.py", line 1071, in __getitem__
format_kwargs=self._format_kwargs,
File "/home/yisiang/datasets/src/datasets/arrow_dataset.py", line 972, in _getitem
data_subset = self._data.take(indices_array)
File "pyarrow/table.pxi", line 1145, in pyarrow.lib.Table.take
File "/home/yisiang/miniconda3/envs/ml/lib/python3.7/site-packages/pyarrow/compute.py", line 268, in take
return call_function('take', [data, indices], options)
File "pyarrow/_compute.pyx", line 298, in pyarrow._compute.call_function
File "pyarrow/_compute.pyx", line 192, in pyarrow._compute.Function.call
File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays
"""
The above exception was the direct cause of the following exception:
ArrowInvalid Traceback (most recent call last)
in <module>
30 owt = datasets.load_dataset('/home/yisiang/datasets/datasets/openwebtext/openwebtext.py', cache_dir='./datasets')['train']
31 print('load/create data from OpenWebText Corpus for ELECTRA')
---> 32 e_owt = ELECTRAProcessor(owt, apply_cleaning=False).map(cache_file_name=f"electra_owt_{c.max_length}.arrow")
33 dsets.append(e_owt)
34
~/Reexamine_Attention/electra_pytorch/_utils/utils.py in map(self, **kwargs)
126 writer_batch_size=10**4,
127 num_proc=num_proc,
--> 128 **kwargs
129 )
130
~/hugdatafast/hugdatafast/transform.py in my_map(self, *args, **kwargs)
21 if not cache_file_name.endswith('.arrow'): cache_file_name += '.arrow'
22 if '/' not in cache_file_name: cache_file_name = os.path.join(self.cache_directory(), cache_file_name)
---> 23 return self.map(*args, cache_file_name=cache_file_name, **kwargs)
24
25 @patch
~/datasets/src/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
1285 logger.info("Spawning {} processes".format(num_proc))
1286 results = [pool.apply_async(self.__class__._map_single, kwds=kwds) for kwds in kwds_per_shard]
-> 1287 transformed_shards = [r.get() for r in results]
1288 logger.info("Concatenating {} shards from multiprocessing".format(num_proc))
1289 result = concatenate_datasets(transformed_shards)
~/datasets/src/datasets/arrow_dataset.py in <listcomp>(.0)
1285 logger.info("Spawning {} processes".format(num_proc))
1286 results = [pool.apply_async(self.__class__._map_single, kwds=kwds) for kwds in kwds_per_shard]
-> 1287 transformed_shards = [r.get() for r in results]
1288 logger.info("Concatenating {} shards from multiprocessing".format(num_proc))
1289 result = concatenate_datasets(transformed_shards)
~/miniconda3/envs/ml/lib/python3.7/multiprocessing/pool.py in get(self, timeout)
655 return self._value
656 else:
--> 657 raise self._value
658
659 def _set(self, i, obj):
ArrowInvalid: offset overflow while concatenating arrays
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/648/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/648/timeline | null | completed | null | null | false | [
"This should be fixed with #645 ",
"Feel free to re-open if it still occurs"
] |
https://api.github.com/repos/huggingface/datasets/issues/1840 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1840/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1840/comments | https://api.github.com/repos/huggingface/datasets/issues/1840/events | https://github.com/huggingface/datasets/issues/1840 | 803,560,039 | MDU6SXNzdWU4MDM1NjAwMzk= | 1,840 | Add common voice | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "d93f0b",
"default": false,
"description": "",
"id": 2725241052,
"name": "speech",
"node_id": "MDU6TGFiZWwyNzI1MjQxMDUy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/speech"
}
] | closed | false | null | 11 | 2021-02-08T13:21:05Z | 2022-03-20T15:23:40Z | 2021-03-15T05:56:21Z | null | ## Adding a Dataset
- **Name:** *common voice*
- **Description:** *Mozilla Common Voice Dataset*
- **Paper:** Homepage: https://voice.mozilla.org/en/datasets
- **Data:** https://voice.mozilla.org/en/datasets
- **Motivation:** Important speech dataset
- **TFDatasets Implementation**: https://www.tensorflow.org/datasets/catalog/common_voice
If interested in tackling this issue, feel free to tag @patrickvonplaten
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1840/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1840/timeline | null | completed | null | null | false | [
"I have started working on adding this dataset.",
"Hey @BirgerMoell - awesome that you started working on Common Voice. Common Voice is a bit special since, there is no direct download link to download the data. In these cases we usually consider two options:\r\n\r\n1) Find a hacky solution to extract the download link somehow from the XLM tree of the website \r\n2) If this doesn't work we force the user to download the data himself and add a `\"data_dir\"` as an input parameter. E.g. you can take a look at how it is done for [this](https://github.com/huggingface/datasets/blob/66f2a7eece98d2778bd22bb5034cb7c2376032d4/datasets/arxiv_dataset/arxiv_dataset.py#L66) \r\n\r\nAlso the documentation here: https://huggingface.co/docs/datasets/add_dataset.html?highlight=data_dir#downloading-data-files-and-organizing-splits (especially the \"note\") might be helpful.",
"Let me know if you have any other questions",
"I added a Work in Progress pull request (hope that is ok). I've made a card for the dataset and filled out the common_voice.py file with information about the datset (not completely).\r\n\r\nI didn't manage to get the tagging tool working locally on my machine but will look into that later.\r\n\r\nLeft to do.\r\n\r\n- Tag the dataset\r\n- Add missing information and update common_voice.py\r\n\r\nhttps://github.com/huggingface/datasets/pull/1886",
"Awesome! I left a longer comment on the PR :-)",
"I saw that this current datasets package holds common voice version 6.1, how to add the new version 7.0 that is already available?",
"Will me merged next week - we're working on it :-)",
"Common voice still appears to be a 6.1. Is the plan still to upgrade to 7.0?",
"We actually already have the code and everything ready to add Common Voice 7.0 to `datasets` but are still waiting for the common voice authors to give us the green light :-) \r\n\r\nAlso gently pinging @phirework and @milupo here",
"Common Voice 7.0 is available here now: https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0",
"For anyone else stumbling upon this thread, the 8.0 version is also available now: https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0"
] |
https://api.github.com/repos/huggingface/datasets/issues/3484 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3484/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3484/comments | https://api.github.com/repos/huggingface/datasets/issues/3484/events | https://github.com/huggingface/datasets/issues/3484 | 1,088,910,402 | I_kwDODunzps5A53RC | 3,484 | make shape verification to use ArrayXD instead of nested lists for map | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 1 | 2021-12-27T02:16:02Z | 2022-01-05T13:54:03Z | null | null | As described in https://github.com/huggingface/datasets/issues/2005#issuecomment-793716753 and mentioned by @mariosasko in the [image feature example](https://colab.research.google.com/drive/1mIrTnqTVkWLJWoBzT1ABSe-LFelIep1c#scrollTo=ow3XHDvf2I0B&line=1&uniqifier=1), IMO making shape verification use ArrayXD instead of nested lists for map can help users avoid unnecessary casts. I notice datasets does something special for `input_ids` and `attention_mask`, which would also be unnecessary after this feature is added. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3484/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3484/timeline | null | null | null | null | false | [
"Hi! \r\n\r\nYes, this makes sense for numeric values, but first I have to finish https://github.com/huggingface/datasets/pull/3336 because currently ArrayXD only allows the first dimension to be dynamic."
] |
https://api.github.com/repos/huggingface/datasets/issues/2039 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2039/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2039/comments | https://api.github.com/repos/huggingface/datasets/issues/2039/events | https://github.com/huggingface/datasets/pull/2039 | 830,047,652 | MDExOlB1bGxSZXF1ZXN0NTkxNjE3ODY3 | 2,039 | Doc2dial rc | [] | closed | false | null | 0 | 2021-03-12T11:56:28Z | 2021-03-12T15:32:36Z | 2021-03-12T15:32:36Z | null | Added fix to handle the last turn that is a user turn. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2039/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2039/timeline | null | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/2039.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2039",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2039.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2039"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1853 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1853/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1853/comments | https://api.github.com/repos/huggingface/datasets/issues/1853/events | https://github.com/huggingface/datasets/pull/1853 | 804,791,166 | MDExOlB1bGxSZXF1ZXN0NTcwNTAwMjc4 | 1,853 | Configure library root logger at the module level | [] | closed | false | null | 0 | 2021-02-09T18:11:12Z | 2021-02-10T12:32:34Z | 2021-02-10T12:32:34Z | null | Configure library root logger at the datasets.logging module level (singleton-like).
By doing it this way:
- we are sure configuration is done only once: module-level code is only run once
- no need for a global variable
- no need for a threading lock
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1853/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1853/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1853.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1853",
"merged_at": "2021-02-10T12:32:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1853.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1853"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1910 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1910/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1910/comments | https://api.github.com/repos/huggingface/datasets/issues/1910/events | https://github.com/huggingface/datasets/pull/1910 | 811,697,108 | MDExOlB1bGxSZXF1ZXN0NTc2MTg0MDQ3 | 1,910 | Adding CoNLLpp dataset. | [] | closed | false | null | 1 | 2021-02-19T05:12:30Z | 2021-03-04T22:02:47Z | 2021-03-04T22:02:47Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1910/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1910/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1910.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1910",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1910.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1910"
} | true | [
"It looks like this PR now includes changes to many other files than the ones for CoNLLpp.\r\n\r\nTo fix that feel free to create another branch and another PR.\r\n\r\nThis was probably caused by a git rebase. You can avoid this issue by using git merge if you've already pushed your branch."
] |
https://api.github.com/repos/huggingface/datasets/issues/4996 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4996/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4996/comments | https://api.github.com/repos/huggingface/datasets/issues/4996/events | https://github.com/huggingface/datasets/issues/4996 | 1,379,345,161 | I_kwDODunzps5SNyMJ | 4,996 | Dataset Viewer issue for Jean-Baptiste/wikiner_fr | [] | closed | false | null | 2 | 2022-09-20T12:32:07Z | 2022-09-27T12:35:44Z | 2022-09-27T12:35:44Z | null | ### Link
https://huggingface.co/datasets/Jean-Baptiste/wikiner_fr
### Description
```
Error code: StreamingRowsError
Exception: FileNotFoundError
Message: [Errno 2] No such file or directory: 'zip:/data/train::https:/huggingface.co/datasets/Jean-Baptiste/wikiner_fr/resolve/main/data.zip/state.json'
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/responses/first_rows.py", line 337, in get_first_rows_response
rows = get_rows(dataset, config, split, streaming=True, rows_max_number=rows_max_number, hf_token=hf_token)
File "/src/services/worker/src/worker/utils.py", line 123, in decorator
return func(*args, **kwargs)
File "/src/services/worker/src/worker/responses/first_rows.py", line 77, in get_rows
rows_plus_one = list(itertools.islice(ds, rows_max_number + 1))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 718, in __iter__
for key, example in self._iter():
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 708, in _iter
yield from ex_iterable
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 112, in __iter__
yield from self.generate_examples_fn(**self.kwargs)
File "/tmp/modules-cache/datasets_modules/datasets/Jean-Baptiste--wikiner_fr/683a580ba6ec769d508f7dfc603a651667b0ed3817b1ae5bfd45f97cc024923f/wikiner_fr.py", line 165, in _generate_examples
dataset = Dataset.load_from_disk(filepath)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 1210, in load_from_disk
with open(Path(dataset_path, config.DATASET_STATE_JSON_FILENAME).as_posix(), encoding="utf-8") as state_file:
FileNotFoundError: [Errno 2] No such file or directory: 'zip:/data/train::https:/huggingface.co/datasets/Jean-Baptiste/wikiner_fr/resolve/main/data.zip/state.json'
```
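Since the failure comes from `Dataset.load_from_disk` being called on a path inside the streamed zip, a possible non-streaming workaround (a hedged sketch; the paths are hypothetical placeholders and it assumes `data.zip` has been downloaded and extracted locally) is:
```python
from datasets import Dataset

# hypothetical path to the extracted "train" folder from data.zip
ds = Dataset.load_from_disk("data/train")

# pushing re-uploads the split as Parquet shards, which stream cleanly
ds.push_to_hub("Jean-Baptiste/wikiner_fr")
```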
Is it an error with the dataset script, or the data itself, @huggingface/datasets?
https://huggingface.co/datasets/Jean-Baptiste/wikiner_fr/tree/main
### Owner
No | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4996/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4996/timeline | null | completed | null | null | false | [
"The script uses `Dataset.load_from_disk`, which as you can expect, doesn't work in streaming mode.\r\n\r\nIt would probably be more practical to load the dataset locally using `Dataset.load_from_disk` first and then `push_to_hub` to upload it in Parquet on the Hub",
"I've transferred this issue to the Hub repo: https://huggingface.co/datasets/Jean-Baptiste/wikiner_fr/discussions/3\r\n\r\nI'm closing this."
] |
https://api.github.com/repos/huggingface/datasets/issues/2013 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2013/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2013/comments | https://api.github.com/repos/huggingface/datasets/issues/2013/events | https://github.com/huggingface/datasets/pull/2013 | 825,694,305 | MDExOlB1bGxSZXF1ZXN0NTg3NzYzMTgx | 2,013 | Add Cryptonite dataset | [] | closed | false | null | 0 | 2021-03-09T10:32:11Z | 2021-03-09T19:27:07Z | 2021-03-09T19:27:06Z | null | cc @aviaefrat who's the original author of the dataset & paper, see https://github.com/aviaefrat/cryptonite | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2013/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2013/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2013.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2013",
"merged_at": "2021-03-09T19:27:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2013.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2013"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3183 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3183/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3183/comments | https://api.github.com/repos/huggingface/datasets/issues/3183/events | https://github.com/huggingface/datasets/pull/3183 | 1,039,761,120 | PR_kwDODunzps4t3Dag | 3,183 | Add missing docstring to DownloadConfig | [] | closed | false | null | 0 | 2021-10-29T16:56:35Z | 2021-11-02T10:25:38Z | 2021-11-02T10:25:37Z | null | Document the `use_etag` and `num_proc` attributes in `DownloadConfig`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3183/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3183/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3183.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3183",
"merged_at": "2021-11-02T10:25:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3183.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3183"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/62 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/62/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/62/comments | https://api.github.com/repos/huggingface/datasets/issues/62/events | https://github.com/huggingface/datasets/pull/62 | 614,630,830 | MDExOlB1bGxSZXF1ZXN0NDE1MTQ1NDAx | 62 | [Cached Path] Better error message | [] | closed | false | null | 0 | 2020-05-08T09:39:47Z | 2020-05-08T09:45:47Z | 2020-05-08T09:45:47Z | null | IMO returning `None` in this function only leads to confusion and is never helpful. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/62/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/62/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/62.diff",
"html_url": "https://github.com/huggingface/datasets/pull/62",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/62.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/62"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/624 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/624/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/624/comments | https://api.github.com/repos/huggingface/datasets/issues/624/events | https://github.com/huggingface/datasets/issues/624 | 700,541,628 | MDU6SXNzdWU3MDA1NDE2Mjg= | 624 | Add learningq dataset | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | open | false | null | 0 | 2020-09-13T10:20:27Z | 2020-09-14T09:50:02Z | null | null | Hi,
Thank you again for this amazing repo.
Would it be possible for y'all to add the LearningQ dataset - https://github.com/AngusGLChen/LearningQ ?
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/624/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/624/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/1058 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1058/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1058/comments | https://api.github.com/repos/huggingface/datasets/issues/1058/events | https://github.com/huggingface/datasets/pull/1058 | 756,332,704 | MDExOlB1bGxSZXF1ZXN0NTMxODk0Mjc0 | 1,058 | added paws-x dataset | [] | closed | false | null | 0 | 2020-12-03T16:06:01Z | 2020-12-04T13:46:05Z | 2020-12-04T13:46:05Z | null | Added paws-x dataset. Updating README and tags in the dataset card in a while | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1058/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1058/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1058.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1058",
"merged_at": "2020-12-04T13:46:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1058.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1058"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5001 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5001/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5001/comments | https://api.github.com/repos/huggingface/datasets/issues/5001/events | https://github.com/huggingface/datasets/pull/5001 | 1,379,844,820 | PR_kwDODunzps4_TBWa | 5,001 | Support loading XML datasets | [] | open | false | null | 3 | 2022-09-20T18:42:58Z | 2022-11-01T12:44:42Z | null | null | CC: @davanstrien | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 2,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5001/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5001/timeline | null | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/5001.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5001",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5001.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5001"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5001). All of your documentation changes will be reflected on that endpoint.",
"> CC: @davanstrien\r\n\r\nI should have some time to look at this on Friday :) ",
"@albertvillanova I've tried this with a few different XML datasets. One issue I've run into is getting a `KeyError` when the attributes of a field differ from the first parsed row. Unfortunately, this can come up in the ALTO XML format, for example, if you want to parse the 'string' field, which contains the text in the ALTO XML files. \r\n\r\nWhen parsing a file, this instance has no 'STYLE' attribute: \r\n\r\n```xml\r\n<TextLine HEIGHT=\"39\" WIDTH=\"295\" VPOS=\"926\" HPOS=\"247\"><String WC=\"0.4600000083\" CONTENT=\"jufqu’en\" HEIGHT=\"39\" WIDTH=\"117\" VPOS=\"926\" HPOS=\"247\"/><SP WIDTH=\"14\" VPOS=\"928\" HPOS=\"365\"/><String WC=\"0.6075000167\" CONTENT=\"l’an\" HEIGHT=\"26\" WIDTH=\"50\" VPOS=\"928\" HPOS=\"380\"/><SP WIDTH=\"24\" VPOS=\"936\" HPOS=\"431\"/><String WC=\"0.4300000072\" CONTENT=\"1\" HEIGHT=\"16\" WIDTH=\"9\" VPOS=\"936\" HPOS=\"456\"/><String STYLE=\"italics\" WC=\"0.5774999857\" CONTENT=\"361.\" HEIGHT=\"25\" WIDTH=\"68\" VPOS=\"933\" HPOS=\"474\"/></TextLine>\r\n```\r\n\r\nWhereas this one which appears later in the file, does have this field: \r\n\r\n```xml\r\n<TextLine HEIGHT=\"39\" WIDTH=\"712\" VPOS=\"966\" HPOS=\"297\"><String STYLE=\"italics\" WC=\"0.6999999881\" CONTENT=\"I\" HEIGHT=\"17\" WIDTH=\"9\" VPOS=\"977\" HPOS=\"297\"/><String WC=\"0.5\" CONTENT=\"I.\" HEIGHT=\"18\" WIDTH=\"25\" VPOS=\"976\" HPOS=\"318\"/><SP WIDTH=\"24\" VPOS=\"971\" HPOS=\"344\"/><String STYLE=\"italics\" WC=\"0.3359999955\" CONTENT=\"Crade\" HEIGHT=\"26\" WIDTH=\"91\" VPOS=\"967\" HPOS=\"369\"/><SP WIDTH=\"31\" VPOS=\"971\" HPOS=\"461\"/><String STYLE=\"italics\" WC=\"0.6060000062\" CONTENT=\"Pétri\" HEIGHT=\"26\" WIDTH=\"71\" VPOS=\"968\" HPOS=\"493\"/><SP WIDTH=\"23\" VPOS=\"968\" HPOS=\"565\"/><String STYLE=\"italics\" WC=\"0.612857163\" CONTENT=\"Candidi\" HEIGHT=\"27\" WIDTH=\"111\" VPOS=\"967\" HPOS=\"589\"/><SP WIDTH=\"19\" VPOS=\"967\" HPOS=\"701\"/><String STYLE=\"italics\" WC=\"0.4088888764\" CONTENT=\"Decembrii\" HEIGHT=\"28\" WIDTH=\"144\" VPOS=\"966\" HPOS=\"721\"/><SP WIDTH=\"10\" VPOS=\"968\" HPOS=\"866\"/><String STYLE=\"italics\" WC=\"0.4600000083\" CONTENT=\"in\" HEIGHT=\"25\" WIDTH=\"27\" VPOS=\"968\" HPOS=\"877\"/><SP WIDTH=\"9\" VPOS=\"967\" HPOS=\"905\"/><String STYLE=\"italics\" WC=\"0.5099999905\" CONTENT=\"funere\" HEIGHT=\"38\" WIDTH=\"94\" VPOS=\"967\" HPOS=\"915\"/></TextLine>\r\n```\r\n\r\nSince the first-seen fields define what is passed to `arrow_writer`, this causes a KeyError when the version with the extra attributes is encountered because it doesn't expect this column. \r\n\r\nSince it's important to support streaming, I'm not sure there is a nice way to detect attributes for the whole file easily in an automatic way. The two potential ways I can see of doing it.\r\n\r\n- Do an initial pass on a batch of data to have a higher chance of encountering variations in attributes before doing the arrow write. \r\n- Do a full pass on one file (and assume that this won't change across files) \r\n\r\nI think the other way of doing this would be to allow users to define expected/wanted attributes as another loading argument. This could then be used to extract the described attributes (and make them None if not found). This requires a bit more work from the user but could be helpful. For example, in the XML above, likely, most users will only want the `WC` and `CONTENT` attributes. So they could specify this upfront and avoid loading extra data they don't need or want. 
I suspect this option would make more sense than making this operation automatic for the case where attributes might change. WDYT? \r\n\r\n\r\n\r\n\r\n\r\n\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/1500 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1500/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1500/comments | https://api.github.com/repos/huggingface/datasets/issues/1500/events | https://github.com/huggingface/datasets/pull/1500 | 763,479,305 | MDExOlB1bGxSZXF1ZXN0NTM3OTM0OTI1 | 1,500 | adding polsum | [] | closed | false | null | 1 | 2020-12-12T09:05:29Z | 2020-12-18T09:43:43Z | 2020-12-18T09:43:43Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1500/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1500/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1500.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1500",
"merged_at": "2020-12-18T09:43:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1500.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1500"
} | true | [
"@lhoestq thanks for the comments! Should be fixed in the latest commit, I assume the CI errors are unrelated."
] |
|
https://api.github.com/repos/huggingface/datasets/issues/467 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/467/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/467/comments | https://api.github.com/repos/huggingface/datasets/issues/467/events | https://github.com/huggingface/datasets/pull/467 | 671,580,010 | MDExOlB1bGxSZXF1ZXN0NDYxNzgwMzUy | 467 | DOCS: Fix typo | [] | closed | false | null | 1 | 2020-08-02T08:59:37Z | 2020-08-02T13:52:27Z | 2020-08-02T09:18:54Z | null | Fix typo from dictionnary -> dictionary | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/467/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/467/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/467.diff",
"html_url": "https://github.com/huggingface/datasets/pull/467",
"merged_at": "2020-08-02T09:18:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/467.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/467"
} | true | [
"Thanks!"
] |
https://api.github.com/repos/huggingface/datasets/issues/2669 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2669/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2669/comments | https://api.github.com/repos/huggingface/datasets/issues/2669/events | https://github.com/huggingface/datasets/issues/2669 | 946,982,998 | MDU6SXNzdWU5NDY5ODI5OTg= | 2,669 | Metric kwargs are not passed to underlying external metric f1_score | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2021-07-18T08:32:31Z | 2021-07-18T18:36:05Z | 2021-07-18T11:19:04Z | null | ## Describe the bug
When I want to use F1 score with average="min", this keyword argument does not seem to be passed through to the underlying sklearn metric. This is evident because [sklearn](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html) throws an error telling me so.
## Steps to reproduce the bug
```python
import datasets
f1 = datasets.load_metric("f1", keep_in_memory=True, average="min")
f1.add_batch(predictions=[0,2,3], references=[1, 2, 3])
f1.compute()
```
## Expected results
No error, because `average="min"` should be passed correctly to f1_score in sklearn.
## Actual results
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\datasets\metric.py", line 402, in compute
output = self._compute(predictions=predictions, references=references, **kwargs)
File "C:\Users\bramv\.cache\huggingface\modules\datasets_modules\metrics\f1\82177930a325d4c28342bba0f116d73f6d92fb0c44cd67be32a07c1262b61cfe\f1.py", line 97, in _compute
"f1": f1_score(
File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\utils\validation.py", line 63, in inner_f
return f(*args, **kwargs)
File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\metrics\_classification.py", line 1071, in f1_score
return fbeta_score(y_true, y_pred, beta=1, labels=labels,
File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\utils\validation.py", line 63, in inner_f
return f(*args, **kwargs)
File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\metrics\_classification.py", line 1195, in fbeta_score
_, _, f, _ = precision_recall_fscore_support(y_true, y_pred,
File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\utils\validation.py", line 63, in inner_f
return f(*args, **kwargs)
File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\metrics\_classification.py", line 1464, in precision_recall_fscore_support
labels = _check_set_wise_labels(y_true, y_pred, average, labels,
File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\metrics\_classification.py", line 1294, in _check_set_wise_labels
raise ValueError("Target is %s but average='binary'. Please "
ValueError: Target is multiclass but average='binary'. Please choose another average setting, one of [None, 'micro', 'macro', 'weighted'].
```
## Environment info
- `datasets` version: 1.9.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.9.2
- PyArrow version: 4.0.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2669/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2669/timeline | null | completed | null | null | false | [
"Hi @BramVanroy, thanks for reporting.\r\n\r\nFirst, note that `\"min\"` is not an allowed value for `average`. According to scikit-learn [documentation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html), `average` can only take the values: `{\"micro\", \"macro\", \"samples\", \"weighted\", \"binary\"} or None, default=\"binary\"`.\r\n\r\nSecond, you should take into account that all additional metric-specific argument should be passed in the method `compute` (and not in the method `load_metric`). You can find more information in our documentation: https://huggingface.co/docs/datasets/using_metrics.html#computing-the-metric-scores\r\n\r\nSo for example, if you would like to calculate the macro-averaged F1 score, you should use:\r\n```python\r\nimport datasets\r\n\r\nf1 = datasets.load_metric(\"f1\", keep_in_memory=True)\r\nf1.add_batch(predictions=[0,2,3], references=[1, 2, 3])\r\nf1.compute(average=\"macro\")\r\n```",
"Thanks, that was it. A bit strange though, since `load_metric` had an argument `metric_init_kwargs`. I assume that that's for specific initialisation arguments whereas `average` is for the function itself."
] |
https://api.github.com/repos/huggingface/datasets/issues/3408 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3408/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3408/comments | https://api.github.com/repos/huggingface/datasets/issues/3408/events | https://github.com/huggingface/datasets/issues/3408 | 1,075,642,915 | I_kwDODunzps5AHQIj | 3,408 | Typo in Dataset viewer error message | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | null | 1 | 2021-12-09T14:34:02Z | 2021-12-22T11:02:53Z | 2021-12-22T11:02:53Z | null | ## Dataset viewer issue for '*name of the dataset*'
**Link:** *link to the dataset viewer page*
*short description of the issue*
When creating an empty dataset repo, the Dataset Preview provides a helpful message that no files were found. There is a tiny typo in that message: "ressource" should be "resource"

Am I the one who added this dataset ?
N/A
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3408/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3408/timeline | null | completed | null | null | false | [
"Fixed, thanks\r\n<img width=\"661\" alt=\"Capture d’écran 2021-12-22 à 12 02 30\" src=\"https://user-images.githubusercontent.com/1676121/147082881-cf700e8d-0511-4431-b214-d6cf8137db10.png\">\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/4285 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4285/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4285/comments | https://api.github.com/repos/huggingface/datasets/issues/4285/events | https://github.com/huggingface/datasets/pull/4285 | 1,226,374,831 | PR_kwDODunzps43VtEa | 4,285 | Update LexGLUE README.md | [] | closed | false | null | 1 | 2022-05-05T08:36:50Z | 2022-05-05T13:39:04Z | 2022-05-05T13:33:35Z | null | Update the leaderboard based on the latest results presented in the ACL 2022 version of the article. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4285/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4285/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4285.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4285",
"merged_at": "2022-05-05T13:33:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4285.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4285"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/1439 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1439/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1439/comments | https://api.github.com/repos/huggingface/datasets/issues/1439/events | https://github.com/huggingface/datasets/pull/1439 | 760,968,410 | MDExOlB1bGxSZXF1ZXN0NTM1NzA4NDU1 | 1,439 | Update README.md | [] | closed | false | null | 0 | 2020-12-10T06:57:01Z | 2020-12-11T15:22:53Z | 2020-12-11T15:22:53Z | null | 1k-10k -> 1k-1M
3 separate configs are available with min. 1K and max. 211.3k examples | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1439/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1439/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1439.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1439",
"merged_at": "2020-12-11T15:22:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1439.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1439"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/6014 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6014/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6014/comments | https://api.github.com/repos/huggingface/datasets/issues/6014/events | https://github.com/huggingface/datasets/issues/6014 | 1,798,213,816 | I_kwDODunzps5rLpC4 | 6,014 | Request to Share/Update Dataset Viewer Code | [] | open | false | null | 6 | 2023-07-11T06:36:09Z | 2023-07-12T14:18:49Z | null | null |
Overview:
The repository (huggingface/datasets-viewer) was recently archived, and when I tried to run the code, I got the error message "AttributeError: module 'datasets.load' has no attribute 'prepare_module'". I could not resolve the issue myself due to the lack of documentation for that attribute.
Request:
I kindly request the sharing of the code responsible for the dataset preview functionality or help with resolving the error. The dataset viewer on the Hugging Face website is incredibly useful since it is compatible with different types of inputs. It allows users to find datasets that meet their needs more efficiently. If needed, I am willing to contribute to the project by testing, documenting, and providing feedback on the dataset viewer code.
Thank you for considering this request, and I look forward to your response. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6014/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6014/timeline | null | null | null | null | false | [
"Hi ! The huggingface/dataset-viewer code was not maintained anymore because we switched to a new dataset viewer that is deployed available for each dataset the Hugging Face website.\r\n\r\nWhat are you using this old repository for ?",
"I think these parts are outdated:\r\n\r\n* https://github.com/huggingface/datasets-viewer/blob/8efad8eae313a891f713469983bf4c744786f26e/run.py#L126-L131\r\n* https://github.com/huggingface/datasets-viewer/blob/8efad8eae313a891f713469983bf4c744786f26e/run.py#L145-L150\r\n\r\nTo make the viewer work, the first one should be replaced with the following:\r\n```python\r\ndataset_module = datasets.load.dataset_module_factory(path)\r\nbuilder_cls = datasets.load.import_main_class(dataset_module.module_path)\r\nconfs = builder_cls.BUILDER_CONFIGS\r\n```\r\nAnd the second one:\r\n```python\r\ndataset_module = datasets.load.dataset_module_factory(path)\r\nbuilder_cls = datasets.load.import_main_class(dataset_module.module_path)\r\nif conf:\r\n builder_instance = builder_cls(name=conf, cache_dir=path if path_to_datasets is not None else None)\r\nelse:\r\n builder_instance = builder_cls(cache_dir=path if path_to_datasets is not None else None)\r\n```\r\n\r\nBut as @lhoestq suggested, it's better to use the `datasets-server` API nowadays to [fetch the rows](https://huggingface.co/docs/datasets-server/rows).",
"> The dataset viewer on the Hugging Face website is incredibly useful\r\n\r\n@mariosasko i think @lilyorlilypad wants to run the new dataset-viewer, not the old one",
"> wants to run the new dataset-viewer, not the old one\r\n\r\nThanks for the clarification for me. I do want to run the new dataset-viewer. ",
"It should be possible to run it locally using the HF datasets-server API (docs [here](https://huggingface.co/docs/datasets-server)) but the front end part is not open source (yet ?)\r\n\r\nThe back-end is open source though if you're interested: https://github.com/huggingface/datasets-server\r\nIt automatically converts datasets on HF to Parquet, which is the format we use to power the viewer.",
"the new frontend would probably be hard to open source, as is, as it's quite intertwined with the Hub's code.\r\n\r\nHowever, at some point it would be amazing to have a community-driven open source implementation of a frontend to datasets-server! "
] |
https://api.github.com/repos/huggingface/datasets/issues/3021 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3021/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3021/comments | https://api.github.com/repos/huggingface/datasets/issues/3021/events | https://github.com/huggingface/datasets/pull/3021 | 1,015,444,094 | PR_kwDODunzps4spzJU | 3,021 | Support loading dataset from multiple zipped CSV data files | [] | closed | false | null | 0 | 2021-10-04T17:33:57Z | 2021-10-06T08:36:46Z | 2021-10-06T08:36:45Z | null | Fix partially #3018.
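A usage sketch of what this is intended to enable (hedged; the file names are hypothetical placeholders and it assumes each archive contains CSV files):
```python
from datasets import load_dataset

# hypothetical zipped CSV shards
ds = load_dataset(
    "csv",
    data_files={"train": ["train-part1.csv.zip", "train-part2.csv.zip"]},
)
```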
CC: @lewtun | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3021/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3021/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3021.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3021",
"merged_at": "2021-10-06T08:36:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3021.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3021"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5987 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5987/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5987/comments | https://api.github.com/repos/huggingface/datasets/issues/5987/events | https://github.com/huggingface/datasets/issues/5987 | 1,773,047,909 | I_kwDODunzps5prpBl | 5,987 | Why max_shard_size is not supported in load_dataset and passed to download_and_prepare | [] | closed | false | null | 5 | 2023-06-25T04:19:13Z | 2023-06-29T16:06:08Z | 2023-06-29T16:06:08Z | null | ### Describe the bug
https://github.com/huggingface/datasets/blob/a8a797cc92e860c8d0df71e0aa826f4d2690713e/src/datasets/load.py#L1809
What I can do is break up the `load_dataset` call and use `load_dataset_builder` + `download_and_prepare` instead.
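A minimal sketch of that workaround (hedged; the dataset name is a placeholder, and it assumes `download_and_prepare` accepts `max_shard_size` as in recent releases):
```python
from datasets import load_dataset_builder

builder = load_dataset_builder("imdb")  # placeholder dataset name
builder.download_and_prepare(max_shard_size="500MB")  # not exposed by load_dataset
ds = builder.as_dataset(split="train")
```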
### Steps to reproduce the bug
https://github.com/huggingface/datasets/blob/a8a797cc92e860c8d0df71e0aa826f4d2690713e/src/datasets/load.py#L1809
### Expected behavior
Users can define the max shard size.
### Environment info
datasets==2.13.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5987/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5987/timeline | null | completed | null | null | false | [
"Can you explain your use case for `max_shard_size`? \r\n\r\nOn some systems, there is a limit to the size of a memory-mapped file, so we could consider exposing this parameter in `load_dataset`.",
"In my use case, users may choose a proper size to balance the cost and benefit of using large shard size. (On azure blob or hdfs which may automatically download the shard from background)",
"But `load_dataset` doesn't support caching (and reading) Arrow datasets from remote storage. \r\n\r\n`load_datset_builder` + `download_and_prepare` is not equal to `load_dataset`. The latter has one more step, `builder.as_dataset`, that memory-maps Arrow files, which only works for local files.",
"Thanks. So if I want to use `IterableDataset` and control the size of single arrow file, how should I organize the data loader? Maybe `load_dataset_build` + `download_and_prepare` + `builder.as_dataset` + `dataset.to_iterable_dataset`?",
"Yes, this should work.\r\n\r\nI think we can expose `max_shard_size` in `load_dataset`, so feel free to open a PR."
] |
https://api.github.com/repos/huggingface/datasets/issues/5659 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5659/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5659/comments | https://api.github.com/repos/huggingface/datasets/issues/5659/events | https://github.com/huggingface/datasets/issues/5659 | 1,635,447,540 | I_kwDODunzps5hevL0 | 5,659 | [Audio] Soundfile/libsndfile requirements too stringent for decoding mp3 files | [] | closed | false | null | 9 | 2023-03-22T10:07:33Z | 2023-04-28T03:25:39Z | 2023-04-07T08:51:28Z | null | ### Describe the bug
I'm encountering several issues trying to load mp3 audio files using `datasets` on a TPU v4.
The PR https://github.com/huggingface/datasets/pull/5573 updated the audio loading logic to rely solely on the `soundfile`/`libsndfile` libraries for loading audio samples, regardless of their file type.
The installation guide suggests that `libsndfile` is bundled in when `soundfile` is pip installed:
https://github.com/huggingface/datasets/blob/e1af108015e43f9df8734a1faeeaeb9eafce3971/docs/source/installation.md?plain=1#L70-L71
However, just pip installing `soundfile==0.12.1` throws an error that `libsndfile` is missing:
```
pip install soundfile==0.12.1
```
Then:
```python
>>> import soundfile
>>> soundfile.__libsndfile_version__
```
<details>
<summary> Traceback (most recent call last): </summary>
```
File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/soundfile.py", line 161, in <module>
import _soundfile_data # ImportError if this doesn't exist
ModuleNotFoundError: No module named '_soundfile_data'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/soundfile.py", line 170, in <module>
raise OSError('sndfile library not found using ctypes.util.find_library')
OSError: sndfile library not found using ctypes.util.find_library
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/soundfile.py", line 192, in <module>
_snd = _ffi.dlopen(_explicit_libname)
OSError: cannot load library 'libsndfile.so': libsndfile.so: cannot open shared object file: No such file or directory
```
</details>
Thus, I've followed the official instructions for installing the `soundfile` package from https://github.com/bastibe/python-soundfile#installation, which states that `libsndfile` needs to be installed separately as:
```
pip install --upgrade soundfile
sudo apt install libsndfile1
```
We can now import `soundfile`:
```python
>>> import soundfile
>>> soundfile.__version__
'0.12.1'
>>> soundfile.__libsndfile_version__
'1.0.28'
```
We see that we have `soundfile==0.12.1`, which matches the `datasets[audio]` package constraints:
https://github.com/huggingface/datasets/blob/e1af108015e43f9df8734a1faeeaeb9eafce3971/setup.py#L144-L147
But we have `libsndfile==1.0.28`, which is too low for decoding mp3 files:
https://github.com/huggingface/datasets/blob/e1af108015e43f9df8734a1faeeaeb9eafce3971/src/datasets/config.py#L136-L138
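A quick check of which side of that cutoff an environment is on (a hedged sketch; it assumes the `packaging` helper is installed):
```python
import soundfile
from packaging import version

# mp3 decoding needs libsndfile >= 1.1.0, per the config check linked above
print(soundfile.__libsndfile_version__)
print(version.parse(soundfile.__libsndfile_version__) >= version.parse("1.1.0"))
```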
Updating/upgrading the `libsndfile` doesn't change this:
```
sudo apt-get update
sudo apt-get upgrade
```
Is there any other suggestion for how to get a compatible `libsndfile` version? Currently, the version bundled with Ubuntu `apt-get` is too low for decoding mp3 files.
Maybe we could add this under `setup.py` such that we install the correct `libsndfile` version when we do `pip install datasets[audio]`? IMO this would help circumvent such version issues.
### Steps to reproduce the bug
Environment described above. Loading mp3 files:
```python
from datasets import load_dataset
common_voice_es = load_dataset("common_voice", "es", split="validation", streaming=True)
print(next(iter(common_voice_es)))
```
```python
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[4], line 2
1 common_voice_es = load_dataset("common_voice", "es", split="validation", streaming=True)
----> 2 print(next(iter(common_voice_es)))
File ~/datasets/src/datasets/iterable_dataset.py:941, in IterableDataset.__iter__(self)
937 for key, example in ex_iterable:
938 if self.features:
939 # `IterableDataset` automatically fills missing columns with None.
940 # This is done with `_apply_feature_types_on_example`.
--> 941 yield _apply_feature_types_on_example(
942 example, self.features, token_per_repo_id=self._token_per_repo_id
943 )
944 else:
945 yield example
File ~/datasets/src/datasets/iterable_dataset.py:700, in _apply_feature_types_on_example(example, features, token_per_repo_id)
698 encoded_example = features.encode_example(example)
699 # Decode example for Audio feature, e.g.
--> 700 decoded_example = features.decode_example(encoded_example, token_per_repo_id=token_per_repo_id)
701 return decoded_example
File ~/datasets/src/datasets/features/features.py:1864, in Features.decode_example(self, example, token_per_repo_id)
1850 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None):
1851 """Decode example with custom feature decoding.
1852
1853 Args:
(...)
1861 `dict[str, Any]`
1862 """
-> 1864 return {
1865 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
1866 if self._column_requires_decoding[column_name]
1867 else value
1868 for column_name, (feature, value) in zip_dict(
1869 {key: value for key, value in self.items() if key in example}, example
1870 )
1871 }
File ~/datasets/src/datasets/features/features.py:1865, in <dictcomp>(.0)
1850 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None):
1851 """Decode example with custom feature decoding.
1852
1853 Args:
(...)
1861 `dict[str, Any]`
1862 """
1864 return {
-> 1865 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
1866 if self._column_requires_decoding[column_name]
1867 else value
1868 for column_name, (feature, value) in zip_dict(
1869 {key: value for key, value in self.items() if key in example}, example
1870 )
1871 }
File ~/datasets/src/datasets/features/features.py:1308, in decode_nested_example(schema, obj, token_per_repo_id)
1305 elif isinstance(schema, (Audio, Image)):
1306 # we pass the token to read and decode files from private repositories in streaming mode
1307 if obj is not None and schema.decode:
-> 1308 return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
1309 return obj
File ~/datasets/src/datasets/features/audio.py:167, in Audio.decode_example(self, value, token_per_repo_id)
162 raise RuntimeError(
163 "Decoding 'opus' files requires system library 'libsndfile'>=1.0.31, "
164 'You can try to update `soundfile` python library: `pip install "soundfile>=0.12.1"`. '
165 )
166 elif not config.IS_MP3_SUPPORTED and audio_format == "mp3":
--> 167 raise RuntimeError(
168 "Decoding 'mp3' files requires system library 'libsndfile'>=1.1.0, "
169 'You can try to update `soundfile` python library: `pip install "soundfile>=0.12.1"`. '
170 )
172 if file is None:
173 token_per_repo_id = token_per_repo_id or {}
RuntimeError: Decoding 'mp3' files requires system library 'libsndfile'>=1.1.0, You can try to update `soundfile` python library: `pip install "soundfile>=0.12.1"`.
```
### Expected behavior
Load mp3 files!
### Environment info
- `datasets` version: 2.10.2.dev0
- Platform: Linux-5.13.0-1023-gcp-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.13.1
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
- Soundfile version: 0.12.1
- Libsndfile version: 1.0.28 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5659/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5659/timeline | null | completed | null | null | false | [
"cc @polinaeterna @lhoestq ",
"@sanchit-gandhi can you please also post the logs of `pip install soundfile==0.12.1`? To check what wheel is being installed or if it's being built from source (I think it's the latter case). \r\nRequired `libsndfile` binary **should** be bundeled with `soundfile` wheel but I assume it **might not** be the case for some non standard Linux distributions. \r\nThe only solution for using `soundfile` here is to build [`libsndfile`](https://github.com/libsndfile/libsndfile) from source:\r\n\r\n```bash\r\ngit clone https://github.com/libsndfile/libsndfile.git\r\ncd libsndfile/\r\nautoreconf -vif\r\n./configure --enable-werror \r\nmake\r\nmake install\r\n```\r\nfor this, some building libraries should be installed, for Debian/Ubuntu it's like:\r\n```bash\r\napt install autoconf autogen automake build-essential libasound2-dev \\\r\n libflac-dev libogg-dev libtool libvorbis-dev libopus-dev libmp3lame-dev \\\r\n libmpg123-dev pkg-config python\r\n```\r\nbut for other Linux distributions it might be different.\r\n\r\nWhen the binary is compiled, it should be put into location where `soundfile` would search for it (the directory is named `_soundfile_data`), it depends on where`libsdfile` (from the previous step) and `soundfile` were installed, might be something like this:\r\n\r\n```bash\r\ncp /usr/local/lib/libsndfile.so /usr/local/lib/python3.7/dist-packages/_soundfile_data/\r\ncp /usr/local/lib/libsndfile.la /usr/local/lib/python3.7/dist-packages/_soundfile_data/\r\n```\r\n\r\nAnother solution is to not use `soundfile` and apply custom processing function with `torchaudio` while setting `decode=False` in `Audio` feature and passing custom function to `.map`. ",
"Not sure if it may help, but you could also try updating `pip` before installing soundfile",
"@lhoestq @sanchit-gandhi. I encountered the same error (also on the TPU v4) when trying to run `datasets` from source.\r\n\r\nDowngrading soundfile with `pip install soundfile==0.12.0` seems to fix the issue for me.",
"Maybe let's open an issue at https://github.com/bastibe/python-soundfile/issues in case they might know why you get `OSError: cannot load library 'libsndfile.so'` ?",
"> @sanchit-gandhi can you please also post the logs of `pip install soundfile==0.12.1`? To check what wheel is being installed or if it's being built from source (I think it's the latter case). Required `libsndfile` binary **should** be bundeled with `soundfile` wheel but I assume it **might not** be the case for some non standard Linux distributions. The only solution for using `soundfile` here is to build [`libsndfile`](https://github.com/libsndfile/libsndfile) from source:\r\n> \r\n> ```shell\r\n> git clone https://github.com/libsndfile/libsndfile.git\r\n> cd libsndfile/\r\n> autoreconf -vif\r\n> ./configure --enable-werror \r\n> make\r\n> make install\r\n> ```\r\n\r\nThis fixed the issue for me. After installing libsndfile as described above, I had to uninstall soundfile and re-install it with this command. `pip install \"soundfile>=0.12.1\"`",
"Thank you so much for the comprehensive instructions @polinaeterna! Also confirming that they worked for me 🤗 In my case, I had to run several of these commands under \"sudo\" for privileges, but otherwise this workaround gave a successful `libsndfile` install:\r\n\r\n1. Grab source code:\r\n```\r\ngit clone https://github.com/libsndfile/libsndfile.git\r\n```\r\n\r\n2. Set up a build environment:\r\n```\r\nsudo apt install autoconf autogen automake build-essential libasound2-dev \\\r\n libflac-dev libogg-dev libtool libvorbis-dev libopus-dev libmp3lame-dev \\\r\n libmpg123-dev pkg-config python\r\n```\r\n\r\n3. Build and test `libsndfile`:\r\n\r\n```\r\nautoreconf -vif\r\n./configure --enable-werror\r\nsudo make\r\nsudo make check\r\n```\r\n\r\n4. Create `_soundfile_data` submodule (if it does not exist already):\r\n```\r\nsudo mkdir /usr/local/lib/python3.8/dist-packages/_soundfile_data/\r\n```\r\n\r\n5. Copy `libsndfile` files into submodule:\r\n```\r\nsudo cp /usr/local/lib/libsndfile.* /usr/local/lib/python3.8/dist-packages/_soundfile_data/\r\n```",
"On a different machine, I also tried separately by first upgrading pip, then installing soundfile. This worked too! Thanks @lhoestq 🙌",
"> @sanchit-gandhi can you please also post the logs of `pip install soundfile==0.12.1`? To check what wheel is being installed or if it's being built from source (I think it's the latter case). Required `libsndfile` binary **should** be bundeled with `soundfile` wheel but I assume it **might not** be the case for some non standard Linux distributions. The only solution for using `soundfile` here is to build [`libsndfile`](https://github.com/libsndfile/libsndfile) from source:\r\n> \r\n> ```shell\r\n> git clone https://github.com/libsndfile/libsndfile.git\r\n> cd libsndfile/\r\n> autoreconf -vif\r\n> ./configure --enable-werror \r\n> make\r\n> make install\r\n> ```\r\n> \r\n> for this, some building libraries should be installed, for Debian/Ubuntu it's like:\r\n> \r\n> ```shell\r\n> apt install autoconf autogen automake build-essential libasound2-dev \\\r\n> libflac-dev libogg-dev libtool libvorbis-dev libopus-dev libmp3lame-dev \\\r\n> libmpg123-dev pkg-config python\r\n> ```\r\n> \r\n> but for other Linux distributions it might be different.\r\n> \r\n> When the binary is compiled, it should be put into location where `soundfile` would search for it (the directory is named `_soundfile_data`), it depends on where`libsdfile` (from the previous step) and `soundfile` were installed, might be something like this:\r\n> \r\n> ```shell\r\n> cp /usr/local/lib/libsndfile.so /usr/local/lib/python3.7/dist-packages/_soundfile_data/\r\n> cp /usr/local/lib/libsndfile.la /usr/local/lib/python3.7/dist-packages/_soundfile_data/\r\n> ```\r\n> \r\n> Another solution is to not use `soundfile` and apply custom processing function with `torchaudio` while setting `decode=False` in `Audio` feature and passing custom function to `.map`.\r\n\r\nThanks, the solution solved my problem. \r\n\r\n1. Purge uninstall libsndfile, uninstall python-soundfile.\r\n2. Build libsndfile from source code and install.\r\n3. Build python-soundfile from source code and install\r\n4. Well done."
] |
https://api.github.com/repos/huggingface/datasets/issues/4061 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4061/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4061/comments | https://api.github.com/repos/huggingface/datasets/issues/4061/events | https://github.com/huggingface/datasets/issues/4061 | 1,186,317,071 | I_kwDODunzps5GtcMP | 4,061 | Loading cnn_dailymail dataset failed | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
}
] | closed | false | null | 1 | 2022-03-30T11:29:02Z | 2022-03-30T13:36:14Z | 2022-03-30T13:36:14Z | null | ## Describe the bug
I wanted to load the cnn_dailymail dataset from Hugging Face datasets in JupyterLab, but I am getting a `NotADirectoryError: [Errno 20] Not a directory` error while loading it.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('cnn_dailymail', '3.0.0')
```
## Expected results
load the `cnn_dailymail` dataset successfully
## Actual results
failed to load and get error
> NotADirectoryError: [Errno 20] Not a directory
## Environment info
- `datasets` version: 1.8.0
- Platform: Ubuntu-20.04
- Python version: 3.9.10
- PyArrow version: 3.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4061/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4061/timeline | null | completed | null | null | false | [
"Hi @Arij-Aladel, thanks for reporting.\r\n\r\nThis issue was already reported \r\n- #3784\r\n\r\nand its root cause is a change in the Google Drive service. See:\r\n- #3786 \r\n\r\nWe have already fixed it in our 2.0.0 release. See:\r\n- #3787 \r\n\r\nPlease, update your `datasets` version:\r\n```\r\npip install -U datasets\r\n```\r\nand retry loading the dataset by forcing its redownload:\r\n```python\r\ndataset = load_dataset(\"cnn_dailymail\", \"3.0.0\", download_mode=\"force_redownload\")\r\n```"
] |
https://api.github.com/repos/huggingface/datasets/issues/6016 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6016/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6016/comments | https://api.github.com/repos/huggingface/datasets/issues/6016/events | https://github.com/huggingface/datasets/pull/6016 | 1,798,968,033 | PR_kwDODunzps5VNEvn | 6,016 | Dataset string representation enhancement | [] | open | false | null | 2 | 2023-07-11T13:38:25Z | 2023-07-16T10:26:18Z | null | null | my attempt at #6010
not sure if this is the right way to go about it, I will wait for your feedback | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6016/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6016/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6016.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6016",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6016.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6016"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6016). All of your documentation changes will be reflected on that endpoint.",
"It we could have something similar to Polars, that would be great.\r\n\r\nThis is what Polars outputs: \r\n* `__repr__`/`__str__` :\r\n```\r\nshape: (67_349, 3)\r\n┌───────┬───────────────────────────────────┬───────┐\r\n│ idx ┆ sentence ┆ label │\r\n│ --- ┆ --- ┆ --- │\r\n│ i32 ┆ str ┆ i64 │\r\n╞═══════╪═══════════════════════════════════╪═══════╡\r\n│ 0 ┆ hide new secretions from the par… ┆ 0 │\r\n│ 1 ┆ contains no wit , only labored g… ┆ 0 │\r\n│ 2 ┆ that loves its characters and co… ┆ 1 │\r\n│ 3 ┆ remains utterly satisfied to rem… ┆ 0 │\r\n│ … ┆ … ┆ … │\r\n│ 67345 ┆ anguish , anger and frustration ┆ 0 │\r\n│ 67346 ┆ at achieving the modest , crowd-… ┆ 1 │\r\n│ 67347 ┆ a patient viewer ┆ 1 │\r\n│ 67348 ┆ this new jangle of noise , mayhe… ┆ 0 │\r\n└───────┴───────────────────────────────────┴───────┘\r\n```\r\n\r\n* `_repr_html_`:\r\n<img width=\"251\" alt=\"Screenshot 2023-07-12 at 18 25 58\" src=\"https://github.com/huggingface/datasets/assets/47462742/5d04519d-f302-4411-9fbc-7445bdf53b23\">\r\n\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/2646 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2646/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2646/comments | https://api.github.com/repos/huggingface/datasets/issues/2646/events | https://github.com/huggingface/datasets/issues/2646 | 944,379,954 | MDU6SXNzdWU5NDQzNzk5NTQ= | 2,646 | downloading of yahoo_answers_topics dataset failed | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2021-07-14T12:31:05Z | 2022-08-04T08:28:24Z | 2022-08-04T08:28:24Z | null | ## Describe the bug
I get a `datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files` error when I try to download the yahoo_answers_topics dataset.
## Steps to reproduce the bug
self.dataset = load_dataset(
'yahoo_answers_topics', cache_dir=self.config['yahoo_cache_dir'], split='train[:90%]')
## Expected results
The `yahoo_answers_topics` dataset should download and load without a checksum error.
## Actual results
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files
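One retry that sometimes clears this, once the source files are reachable again (a hedged suggestion; the string form of `download_mode` assumes a reasonably recent `datasets` release):
```python
from datasets import load_dataset

# force a fresh download instead of reusing possibly corrupted cached files
dataset = load_dataset(
    'yahoo_answers_topics',
    split='train[:90%]',
    download_mode='force_redownload',
)
```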
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2646/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2646/timeline | null | completed | null | null | false | [
"Hi ! I just tested and it worked fine today for me.\r\n\r\nI think this is because the dataset is stored on Google Drive which has a quota limit for the number of downloads per day, see this similar issue https://github.com/huggingface/datasets/issues/996 \r\n\r\nFeel free to try again today, now that the quota was reset",
"Fixed once data URL was replaced:\r\n- #4023"
] |
https://api.github.com/repos/huggingface/datasets/issues/1447 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1447/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1447/comments | https://api.github.com/repos/huggingface/datasets/issues/1447/events | https://github.com/huggingface/datasets/pull/1447 | 761,067,955 | MDExOlB1bGxSZXF1ZXN0NTM1NzkxODk1 | 1,447 | Update step-by-step guide for windows | [] | closed | false | null | 1 | 2020-12-10T09:30:59Z | 2020-12-10T12:18:47Z | 2020-12-10T09:31:14Z | null | Update step-by-step guide for windows to give an alternative to `make style`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1447/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1447/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1447.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1447",
"merged_at": "2020-12-10T09:31:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1447.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1447"
} | true | [
"Hi @thomwolf, for simplification purposes, I think you could remove the \"`pip install ...`\" steps from this commit, 'cause these deps (black, isort, flake8) are already installed on `pip install -e \".[dev]\"` on the [Start by preparing your environment](https://github.com/huggingface/datasets/blob/704107f924e74445f6f0fbde69a218b72178b588/ADD_NEW_DATASET.md#start-by-preparing-your-environment)\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/902 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/902/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/902/comments | https://api.github.com/repos/huggingface/datasets/issues/902/events | https://github.com/huggingface/datasets/pull/902 | 752,345,739 | MDExOlB1bGxSZXF1ZXN0NTI4Njg3NTYw | 902 | Follow cache_dir parameter to gcs downloader | [] | closed | false | null | 0 | 2020-11-27T16:02:06Z | 2020-11-29T22:48:54Z | 2020-11-29T22:48:53Z | null | As noticed in #900 the cache_dir parameter was not followed to the downloader in the case of an already processed dataset hosted on our google storage (one of them is natural questions).
Fix #900 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/902/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/902/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/902.diff",
"html_url": "https://github.com/huggingface/datasets/pull/902",
"merged_at": "2020-11-29T22:48:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/902.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/902"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5891 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5891/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5891/comments | https://api.github.com/repos/huggingface/datasets/issues/5891/events | https://github.com/huggingface/datasets/pull/5891 | 1,722,384,135 | PR_kwDODunzps5RKchn | 5,891 | Make split slicing consistent with list slicing | [] | open | false | null | 2 | 2023-05-23T16:04:33Z | 2023-05-23T16:11:12Z | null | null | Fix #1774, fix #5875
TODO: a test | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5891/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5891/timeline | null | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/5891.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5891",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5891.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5891"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5891). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006916 / 0.011353 (-0.004437) | 0.004749 / 0.011008 (-0.006259) | 0.096086 / 0.038508 (0.057578) | 0.035448 / 0.023109 (0.012338) | 0.299645 / 0.275898 (0.023747) | 0.331279 / 0.323480 (0.007799) | 0.006018 / 0.007986 (-0.001968) | 0.004210 / 0.004328 (-0.000118) | 0.072998 / 0.004250 (0.068747) | 0.050082 / 0.037052 (0.013030) | 0.297714 / 0.258489 (0.039225) | 0.365523 / 0.293841 (0.071682) | 0.028081 / 0.128546 (-0.100465) | 0.009072 / 0.075646 (-0.066574) | 0.327628 / 0.419271 (-0.091643) | 0.051165 / 0.043533 (0.007633) | 0.295091 / 0.255139 (0.039952) | 0.320052 / 0.283200 (0.036852) | 0.109841 / 0.141683 (-0.031842) | 1.467867 / 1.452155 (0.015712) | 1.572600 / 1.492716 (0.079884) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.281490 / 0.018006 (0.263484) | 0.499259 / 0.000490 (0.498770) | 0.000691 / 0.000200 (0.000491) | 0.000062 / 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027548 / 0.037411 (-0.009863) | 0.106592 / 0.014526 (0.092066) | 0.118654 / 0.176557 (-0.057902) | 0.174313 / 0.737135 (-0.562822) | 0.124491 / 0.296338 (-0.171848) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.399674 / 0.215209 (0.184465) | 3.984092 / 2.077655 (1.906437) | 1.790935 / 1.504120 (0.286815) | 1.593612 / 1.541195 (0.052417) | 1.694595 / 1.468490 
(0.226105) | 0.517588 / 4.584777 (-4.067189) | 3.724353 / 3.745712 (-0.021359) | 3.244807 / 5.269862 (-2.025054) | 1.602929 / 4.565676 (-2.962748) | 0.065334 / 0.424275 (-0.358941) | 0.012259 / 0.007607 (0.004652) | 0.501355 / 0.226044 (0.275311) | 4.996546 / 2.268929 (2.727618) | 2.279333 / 55.444624 (-53.165291) | 1.940126 / 6.876477 (-4.936351) | 2.122945 / 2.142072 (-0.019128) | 0.626104 / 4.805227 (-4.179123) | 0.141278 / 6.500664 (-6.359386) | 0.064522 / 0.075469 (-0.010947) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.195351 / 1.841788 (-0.646436) | 15.258932 / 8.074308 (7.184624) | 14.627623 / 10.191392 (4.436231) | 0.266897 / 0.680424 (-0.413527) | 0.017557 / 0.534201 (-0.516644) | 0.392932 / 0.579283 (-0.186351) | 0.416409 / 0.434364 (-0.017955) | 0.469100 / 0.540337 (-0.071237) | 0.556247 / 1.386936 (-0.830689) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006880 / 0.011353 (-0.004473) | 0.004837 / 0.011008 (-0.006171) | 0.074518 / 0.038508 (0.036010) | 0.034204 / 0.023109 (0.011095) | 0.365100 / 0.275898 (0.089202) | 0.394976 / 0.323480 (0.071496) | 0.006364 / 0.007986 (-0.001621) | 0.004269 / 0.004328 (-0.000060) | 0.073531 / 0.004250 (0.069281) | 0.051334 / 0.037052 (0.014281) | 0.373904 / 0.258489 (0.115415) | 0.413662 / 0.293841 (0.119821) | 0.028779 / 0.128546 (-0.099767) | 0.009292 / 0.075646 (-0.066354) | 0.081574 / 0.419271 (-0.337698) | 0.046531 / 0.043533 (0.002998) | 0.368995 / 0.255139 (0.113856) | 0.376938 / 0.283200 (0.093739) | 0.112576 / 0.141683 (-0.029107) | 1.458880 / 1.452155 (0.006725) | 1.550918 / 1.492716 (0.058202) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.319521 / 0.018006 (0.301515) | 0.510146 / 0.000490 (0.509656) | 0.000438 / 0.000200 (0.000238) | 0.000059 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033082 / 0.037411 (-0.004329) | 0.118009 / 0.014526 (0.103483) | 0.127108 / 0.176557 (-0.049448) | 0.176600 / 0.737135 (-0.560535) | 0.133790 / 0.296338 (-0.162549) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437360 / 0.215209 (0.222151) | 4.367426 / 2.077655 (2.289771) | 2.193646 / 1.504120 (0.689526) | 2.025002 / 1.541195 (0.483808) | 2.142347 / 1.468490 (0.673856) | 0.525497 / 4.584777 (-4.059280) | 3.751275 / 3.745712 (0.005563) | 1.912271 / 5.269862 (-3.357590) | 1.087286 / 4.565676 (-3.478390) | 0.066328 / 0.424275 (-0.357947) | 0.011904 / 0.007607 (0.004297) | 0.545870 / 0.226044 (0.319825) | 5.434481 / 2.268929 (3.165552) | 2.719745 / 55.444624 (-52.724880) | 2.445001 / 6.876477 (-4.431476) | 2.500205 / 2.142072 (0.358133) | 0.645735 / 4.805227 (-4.159492) | 0.144210 / 6.500664 (-6.356455) | 0.065688 / 0.075469 (-0.009781) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.273522 / 1.841788 (-0.568265) | 15.771778 / 8.074308 (7.697470) | 14.685261 / 10.191392 (4.493869) | 0.176523 / 0.680424 (-0.503900) | 0.017877 / 0.534201 (-0.516324) | 0.392687 / 0.579283 (-0.186596) | 0.449992 / 0.434364 (0.015628) | 0.462851 / 0.540337 (-0.077487) | 0.560178 / 1.386936 (-0.826758) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/1920 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1920/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1920/comments | https://api.github.com/repos/huggingface/datasets/issues/1920/events | https://github.com/huggingface/datasets/pull/1920 | 812,628,220 | MDExOlB1bGxSZXF1ZXN0NTc2OTQ5NzI2 | 1,920 | Fix save_to_disk issue | [] | closed | false | null | 2 | 2021-02-20T14:22:39Z | 2021-02-22T10:30:11Z | 2021-02-22T10:30:11Z | null | Fixes #1919
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1920/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1920/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1920.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1920",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1920.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1920"
} | true | [
"So I was curious why the issue reported at #1919 wasn't caught in [this test](https://github.com/huggingface/datasets/blob/248104c4bdb2e01c036b7578867199191fbff181/tests/test_arrow_dataset.py#L209), so I did some digging.\r\nI tried to save to a temporary directory (just like in the test), like this:\r\n```python\r\nwith tempfile.TemporaryDirectory() as requested_tempdir:\r\n squad.save_to_disk(requested_tempdir) # no error\r\n```\r\nand it executes succesfuly without problems.\r\nSo why does it work, but this doesn't?\r\n```python\r\nsquad.save_to_disk(\"./squad\") # error\r\n```\r\nIt's because `save_to_disk` also creates a temporary directory (let's call it `tempdir`), and since `tempdir` and `requested_tempdir` share the same parents, the `Path.joinpath` method [(here)](https://github.com/huggingface/datasets/blob/248104c4bdb2e01c036b7578867199191fbff181/src/datasets/arrow_dataset.py#L469) will keep `requested_tempdir` as it is and the *train* directory will be created under `requested_tempdir` and hence no errors will arise.\r\n\r\nBut in the second case (where we are saving to a local dir), the *train* directory is created under *squad* which in turn is created under `tempdir`, not under `.` (current dir).\r\n\r\nSo, all of this probably doesn't help solving the issue but it might help creating a better test, and it also makes me wonder why are we saving to a temporary dir in `save_to_disk` anyway? I mean, won't it be removed with all its contents upon execution completion? what's the point then? ",
"CLosing in favor of #1923"
] |
https://api.github.com/repos/huggingface/datasets/issues/2969 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2969/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2969/comments | https://api.github.com/repos/huggingface/datasets/issues/2969/events | https://github.com/huggingface/datasets/issues/2969 | 1,007,217,867 | I_kwDODunzps48COzL | 2,969 | medical-dialog error | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-09-25T23:08:44Z | 2021-10-11T07:46:42Z | 2021-10-11T07:46:42Z | null | ## Describe the bug
When I attempt to download the huggingface dataset `medical_dialog`, it errors out midway through.
## Steps to reproduce the bug
```python
raw_datasets = load_dataset("medical_dialog", "en", split="train", download_mode="force_redownload", data_dir="./Medical-Dialogue-Dataset-English")
```
## Expected results
No error
## Actual results
```
3 frames
/usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py in verify_splits(expected_splits, recorded_splits)
72 ]
73 if len(bad_splits) > 0:
---> 74 raise NonMatchingSplitsSizesError(str(bad_splits))
75 logger.info("All the splits matched successfully.")
76
NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='medical_dialog'), 'recorded': SplitInfo(name='train', num_bytes=295097913, num_examples=229674, dataset_name='medical_dialog')}]
```
## Environment info
- `datasets` version: 1.21.1
- Platform: colab
- Python version: colab 3.7
- PyArrow version: N/A
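For reference, a minimal sketch of the workaround suggested in the comments below (skipping the split-size verification until the dataset metadata is fixed); the arguments mirror the reproduction above:
```python
from datasets import load_dataset

# ignore_verifications skips the split-size check that raises NonMatchingSplitsSizesError
raw_datasets = load_dataset(
    "medical_dialog",
    "en",
    split="train",
    data_dir="./Medical-Dialogue-Dataset-English",
    ignore_verifications=True,
)
```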
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2969/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2969/timeline | null | completed | null | null | false | [
"Hi @smeyerhot, thanks for reporting.\r\n\r\nYou are right: there is an issue with the dataset metadata. I'm fixing it.\r\n\r\nIn the meantime, you can circumvent the issue by passing `ignore_verifications=True`:\r\n```python\r\nraw_datasets = load_dataset(\"medical_dialog\", \"en\", split=\"train\", download_mode=\"force_redownload\", data_dir=\"./Medical-Dialogue-Dataset-English\", ignore_verifications=True)\r\n```"
] |
https://api.github.com/repos/huggingface/datasets/issues/1250 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1250/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1250/comments | https://api.github.com/repos/huggingface/datasets/issues/1250/events | https://github.com/huggingface/datasets/pull/1250 | 758,491,704 | MDExOlB1bGxSZXF1ZXN0NTMzNjU2NTI4 | 1,250 | added Nergrit dataset | [] | closed | false | null | 0 | 2020-12-07T13:06:12Z | 2020-12-08T14:33:29Z | 2020-12-08T14:33:29Z | null | Nergrit Corpus is a dataset collection for Indonesian Named Entity Recognition, Statement Extraction, and Sentiment Analysis. This PR is only for the Named Entity Recognition. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1250/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1250/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1250.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1250",
"merged_at": "2020-12-08T14:33:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1250.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1250"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1242 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1242/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1242/comments | https://api.github.com/repos/huggingface/datasets/issues/1242/events | https://github.com/huggingface/datasets/pull/1242 | 758,370,579 | MDExOlB1bGxSZXF1ZXN0NTMzNTU0MzAx | 1,242 | adding bprec | [] | closed | false | null | 2 | 2020-12-07T10:15:49Z | 2020-12-08T14:33:49Z | 2020-12-08T14:33:48Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1242/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1242/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1242.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1242",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1242.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1242"
} | true | [
"looks like this PR includes changes to many files other than the ones related to bprec\r\nCan you create another branch and another PR please ?",
"> looks like this PR includes changes to many files other than the ones related to bprec\r\n> Can you create another branch and another PR please ?\r\n\r\nYes, I realized I messed this one up, learning my way :) I'll close this one and open another hopefully clean PR :) Thanks!"
] |
https://api.github.com/repos/huggingface/datasets/issues/3820 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3820/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3820/comments | https://api.github.com/repos/huggingface/datasets/issues/3820/events | https://github.com/huggingface/datasets/issues/3820 | 1,159,106,603 | I_kwDODunzps5FFpAr | 3,820 | `pubmed_qa` checksum mismatch | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
}
] | closed | false | null | 1 | 2022-03-04T00:28:08Z | 2022-03-04T09:42:32Z | 2022-03-04T09:42:32Z | null | ## Describe the bug
Loading [`pubmed_qa`](https://huggingface.co/datasets/pubmed_qa) results in a mismatched checksum error.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
import datasets
try:
    datasets.load_dataset("pubmed_qa", "pqa_labeled")
except Exception as e:
    print(e)

try:
    datasets.load_dataset("pubmed_qa", "pqa_unlabeled")
except Exception as e:
    print(e)

try:
    datasets.load_dataset("pubmed_qa", "pqa_artificial")
except Exception as e:
    print(e)
```
## Expected results
Successful download.
## Actual results
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.9/site-packages/datasets/load.py", line 1702, in load_dataset
    builder_instance.download_and_prepare(
  File "/usr/local/lib/python3.9/site-packages/datasets/builder.py", line 594, in download_and_prepare
    self._download_and_prepare(
  File "/usr/local/lib/python3.9/site-packages/datasets/builder.py", line 665, in _download_and_prepare
    verify_checksums(
  File "/usr/local/lib/python3.9/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums
    raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=1RsGLINVce-0GsDkCLDuLZmoLuzfmoCuQ', 'https://drive.google.com/uc?export=download&id=15v1x6aQDlZymaHGP7cZJZZYFfeJt2NdS']
```
## Environment info
- `datasets` version: 1.18.3
- Platform: macOS
- Python version: 3.8.1
- PyArrow version: 3.0.0
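For reference, a minimal sketch of the fix suggested in the comments below: after upgrading `datasets` (so it includes the Google Drive fix), force a re-download so the cached virus-scan warning page is replaced by the real data file.
```python
from datasets import load_dataset

# force_redownload discards the previously cached (bad) download
dataset = load_dataset("pubmed_qa", "pqa_labeled", download_mode="force_redownload")
```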
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3820/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3820/timeline | null | completed | null | null | false | [
"Hi @jon-tow, thanks for reporting.\r\n\r\nThis issue was already reported and its root cause is a change in the Google Drive service. See:\r\n- #3786 \r\n\r\nWe have already fixed it. See:\r\n- #3787 \r\n\r\nWe are planning to make a patch release today.\r\n\r\nIn the meantime, you can get this fix by installing our library from the GitHub master branch:\r\n```shell\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\nThen, if you had previously tried to load the data and got the checksum error, you should force the redownload of the data (before the fix, you just downloaded and cached the virus scan warning page, instead of the data file):\r\n```shell\r\nload_dataset(\"...\", download_mode=\"force_redownload\")\r\n```"
] |
https://api.github.com/repos/huggingface/datasets/issues/196 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/196/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/196/comments | https://api.github.com/repos/huggingface/datasets/issues/196/events | https://github.com/huggingface/datasets/pull/196 | 624,901,266 | MDExOlB1bGxSZXF1ZXN0NDIzMjIwMjIw | 196 | Check invalid config name | [] | closed | false | null | 13 | 2020-05-26T13:52:51Z | 2020-05-26T21:04:56Z | 2020-05-26T21:04:55Z | null | As said in #194, we should raise an error if the config name has bad characters.
Bad characters are those that are not allowed for directory names on Windows. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/196/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/196/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/196.diff",
"html_url": "https://github.com/huggingface/datasets/pull/196",
"merged_at": "2020-05-26T21:04:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/196.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/196"
} | true | [
"I think that's not related to the config name but the filenames in the dummy data. Mostly it occurs with files downloaded from drive. In that case the dummy file name is extracted from the google drive link and it corresponds to what comes after `https://drive.google.com/`\r\n\r\n",
"> I think that's not related to the config name but the filenames in the dummy data. Mostly it occurs with files downloaded from drive. In that case the dummy file name is extracted from the google drive link and it corresponds to what comes after `https://drive.google.com/`\r\n\r\nThe filenames of the dummy data are now encoded (see #173). So this is not a problem anymore.\r\n\r\nThe problem here is different and comes from the directory names where we save the arrow files (basically `dataset_name/config_name/version`). In this case we could have invalid directory names because of the config name\r\n",
"Okay great then.",
"I like the method, but I'm wondering whether it should just be a test method instead of a `__post_init__` function. From a logical point of view the only reason this error would be thrown is because of an invalid config name introduced when creating the dataset script / adding a new dataset => so I think it might be better to write a simple test for this in `test_dataset_common.py`...what do you think @lhoestq ?",
"`test_dataset_common.py` only tests canonical datasets no ? What if users wants to create their own script ?",
"> `test_dataset_common.py` only tests canonical datasets no ? What if users wants to create their own script ?\r\n\r\nIt tests all dataset that can be loaded either locally or on AWS (which includes all non-canonical datasets as well)...by their own script you mean like a private dataset script that they don't want to be public? I guess even then they could locally run the test functions to check...",
"We could have a bunch of simple consistency tests that run before uploading with the CLI (without loading data if we don't want to force the user to have dummy data)?",
"Let's say someone want to create his own private script. As the script is not meant to be shared, it's not going to be placed in `/datasets` right ? Maybe the script is going to be inside another project. If I'm not wrong in this case the `test_dataset_common.py` is not going to test his script.\r\n\r\nRaising an error in the post init is a sanity check that would tell the user immediately what's wrong.\r\nThe error is raised if he tried to load the script or if he uses `nlp-cli test`",
"> Let's say someone want to create his own private script. As the script is not meant to be shared, it's not going to be placed in `/datasets` right ? Maybe the script is going to be inside another project. If I'm not wrong in this case the `test_dataset_common.py` is not going to test his script.\r\n> \r\n> Raising an error in the post init is a sanity check that would tell the user immediately what's wrong.\r\n> The error is raised if he tried to load the script or if he uses `nlp-cli test`\r\n\r\nOK, fair point! I'm good with this then :-) ",
"I'm fine with this as well (even though I understand what you meant @patrickvonplaten, we can still change it later if needed)",
"> We could have a bunch of simple consistency tests that run before uploading with the CLI (without loading data if we don't want to force the user to have dummy data)?\r\n\r\nYes! I guess that's a big question whether we should force the user to add dummy data. It's probably too tedious for the user...so when uploading to circle ci should we just check \r\n- 1) All configs can be instantiated (if there are any)\r\n- 2) The BuilderClass can be instantiated ... \r\n- 3) ... maybe some more\r\n\r\nand maybe suggest to the user to add dummy data using the dummy data command?",
"I really like that we have a test with dummy data for canonical datasets. This is insurance that they'll keep working in the long run. \r\n\r\nOn the other hand I understand that we will probably not force this practice for scripts uploaded on S3 by a user under his namespace (non-canonical), as it is tedious. As I understand right now the test is done for all the datasets on aws, even the non-canonical ? We should think about different tests for non-canonical datasets.\r\n\r\nI also like the idea of a simple consistency test !",
"Merging this one for now, we can think about the test for non-canonical datasets later"
] |
https://api.github.com/repos/huggingface/datasets/issues/1675 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1675/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1675/comments | https://api.github.com/repos/huggingface/datasets/issues/1675/events | https://github.com/huggingface/datasets/issues/1675 | 777,367,320 | MDU6SXNzdWU3NzczNjczMjA= | 1,675 | Add the 800GB Pile dataset? | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | 7 | 2021-01-01T22:58:12Z | 2021-12-01T15:29:07Z | 2021-12-01T15:29:07Z | null | ## Adding a Dataset
- **Name:** The Pile
- **Description:** The Pile is an 825 GiB diverse, open source language modelling data set that consists of 22 smaller, high-quality datasets combined together. See [here](https://twitter.com/nabla_theta/status/1345130408170541056?s=20) for the Twitter announcement
- **Paper:** https://pile.eleuther.ai/paper.pdf
- **Data:** https://pile.eleuther.ai/
- **Motivation:** Enables hardcore (GPT-3 scale!) language modelling
## Remarks
Given the extreme size of this dataset, I'm not sure how feasible this will be to include in `datasets` 🤯 . I'm also unsure how many `datasets` users are pretraining LMs, so the usage of this dataset may not warrant the effort to integrate it.
| {
"+1": 5,
"-1": 0,
"confused": 1,
"eyes": 2,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 5,
"total_count": 13,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1675/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1675/timeline | null | completed | null | null | false | [
"The pile dataset would be very nice.\r\nBenchmarks show that pile trained models achieve better results than most of actually trained models",
"The pile can very easily be added and adapted using this [tfds implementation](https://github.com/EleutherAI/The-Pile/blob/master/the_pile/tfds_pile.py) from the repo. \r\n\r\nHowever, the question is whether you'd be ok with 800GB+ cached in your local disk, since the tfds implementation was designed to offload the storage to Google Cloud Storage.",
"With the dataset streaming feature (see #2375) it will be more convenient to play with such big datasets :)\r\nI'm currently adding C4 (see #2511 ) but I can probably start working on this afterwards",
"Hi folks! Just wanted to follow up on this -- would be really nice to get the Pile on HF Datasets... unclear if it would be easy to also add partitions of the Pile subject to the original 22 datasets used, but that would be nice too!",
"Hi folks, thanks to some awesome work by @lhoestq and @albertvillanova you can now stream the Pile as follows:\r\n\r\n```python\r\n# Install master branch of `datasets`\r\npip install git+https://github.com/huggingface/datasets.git#egg=datasets[streaming]\r\npip install zstandard\r\n\r\nfrom datasets import load_dataset\r\n\r\ndset = load_dataset(\"json\", data_files=\"https://the-eye.eu/public/AI/pile/train/00.jsonl.zst\", streaming=True, split=\"train\")\r\nnext(iter(dset))\r\n# {'meta': {'pile_set_name': 'Pile-CC'},\r\n# 'text': 'It is done, and submitted. You can play “Survival of the Tastiest” on Android, and on the web ... '}\r\n```\r\n\r\nNext step is to add the Pile as a \"canonical\" dataset that can be streamed without specifying the file names explicitly :)",
"> Hi folks! Just wanted to follow up on this -- would be really nice to get the Pile on HF Datasets... unclear if it would be easy to also add partitions of the Pile subject to the original 22 datasets used, but that would be nice too!\r\n\r\nHi @siddk thanks to a tip from @richarddwang it seems we can access some of the partitions that EleutherAI created for the Pile [here](https://the-eye.eu/public/AI/pile_preliminary_components/). What's missing are links to the preprocessed versions of pre-existing datasets like DeepMind Mathematics and OpenSubtitles, but worst case we do the processing ourselves and host these components on the Hub.\r\n\r\nMy current idea is that we could provide 23 configs: one for each of the 22 datasets and an `all` config that links to the train / dev / test splits that EleutherAI released [here](https://the-eye.eu/public/AI/pile/), e.g.\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\n# Load a single component\r\nyoutube_subtitles = load_dataset(\"the_pile\", \"youtube_subtitles\")\r\n# Load the train / dev / test splits of the whole corpus\r\ndset = load_dataset(\"the_pile\", \"all\")\r\n```\r\n\r\nIdeally we'd like everything to be compatible with the streaming API and there's ongoing work by @albertvillanova to make this happen for the various compression algorithms.\r\n\r\ncc @lhoestq ",
"Ah I just saw that @lhoestq is already thinking about the specifying of one or more subsets in [this PR](https://github.com/huggingface/datasets/pull/2817#issuecomment-901874049) :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/1587 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1587/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1587/comments | https://api.github.com/repos/huggingface/datasets/issues/1587/events | https://github.com/huggingface/datasets/pull/1587 | 768,929,877 | MDExOlB1bGxSZXF1ZXN0NTQxMjAwMDk3 | 1,587 | Add nq_open question answering dataset | [] | closed | false | null | 1 | 2020-12-16T14:22:08Z | 2020-12-17T16:07:10Z | 2020-12-17T16:07:10Z | null | This PR is a copy of #1506, due to the messed-up git history in that PR. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1587/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1587/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1587.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1587",
"merged_at": "2020-12-17T16:07:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1587.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1587"
} | true | [
"@SBrandeis all checks passing"
] |
https://api.github.com/repos/huggingface/datasets/issues/3782 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3782/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3782/comments | https://api.github.com/repos/huggingface/datasets/issues/3782/events | https://github.com/huggingface/datasets/pull/3782 | 1,148,994,022 | PR_kwDODunzps4zY-Xb | 3,782 | Error of writing with different schema, due to nonpreservation of nullability | [] | closed | false | null | 1 | 2022-02-24T08:23:07Z | 2022-03-03T14:54:39Z | 2022-03-03T14:54:39Z | null | ## 1. Case
```python
dataset.map(
    batched=True,
    disable_nullable=True,
)
```
will get the following error here: https://github.com/huggingface/datasets/blob/c9967f55626931f8059dc416526c791444cdfdf7/src/datasets/arrow_writer.py#L516
`pyarrow.lib.ArrowInvalid: Tried to write record batch with different schema`
## 2. Debugging
### 2.1. Tracing
During `_map_single`, the following are called
https://github.com/huggingface/datasets/blob/c9967f55626931f8059dc416526c791444cdfdf7/src/datasets/arrow_dataset.py#L2523
https://github.com/huggingface/datasets/blob/c9967f55626931f8059dc416526c791444cdfdf7/src/datasets/arrow_writer.py#L511
### 2.2. Observation
The problem is, even after `table_cast`, `pa_table.schema != self._schema`
`pa_table.schema` (before/after `table_cast`)
```
input_ids: list<item: int32>
child 0, item: int32
```
`self._schema`
```
input_ids: list<item: int32> not null
child 0, item: int32
```
### 2.3. Reason
https://github.com/huggingface/datasets/blob/c9967f55626931f8059dc416526c791444cdfdf7/src/datasets/table.py#L1121
Here we lose the nullability stored in `schema`, because it seems that `Features` is always nullable and doesn't store nullability.
https://github.com/huggingface/datasets/blob/c9967f55626931f8059dc416526c791444cdfdf7/src/datasets/table.py#L1103
So, casting to a schema derived from such `Features` loses nullability, and eventually causes the error of writing with a different schema.
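A minimal sketch of this nullability loss (assuming the public `Features.from_arrow_schema` / `Features.arrow_schema` helpers; the field name is illustrative):
```python
import pyarrow as pa
from datasets import Features

# A schema with an explicitly non-nullable column, e.g. as produced with disable_nullable=True
schema = pa.schema([pa.field("input_ids", pa.list_(pa.int32()), nullable=False)])

# Round-tripping through Features drops the "not null" flag,
# since Features does not store nullability.
features = Features.from_arrow_schema(schema)
assert features.arrow_schema.field("input_ids").nullable  # nullability was lost
assert not features.arrow_schema.equals(schema)           # hence the schema mismatch on write
```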
## 3. Solution
1. Let `Features` store nullability.
2. Directly cast the table with the original schema, not with the schema from the converted `Features` (this PR).
3. Don't `cast_table` when calling `write_table`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3782/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3782/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3782.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3782",
"merged_at": "2022-03-03T14:54:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3782.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3782"
} | true | [
"Hi ! Thanks for reporting, indeed `disable_nullable` doesn't seem to be supported in this case. Maybe at one point we can have `disable_nullable` as a parameter of certain feature types"
] |
https://api.github.com/repos/huggingface/datasets/issues/814 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/814/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/814/comments | https://api.github.com/repos/huggingface/datasets/issues/814/events | https://github.com/huggingface/datasets/issues/814 | 738,500,443 | MDU6SXNzdWU3Mzg1MDA0NDM= | 814 | Joining multiple datasets | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | 1 | 2020-11-08T16:19:30Z | 2020-11-08T19:38:48Z | 2020-11-08T19:38:48Z | null | Hi
I have multiple iterative datasets from your library with different sizes, and I want to join them in a way that each dataset is sampled equally, so smaller datasets are sampled more and larger ones less. Could you tell me how to implement this in PyTorch? Thanks | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/814/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/814/timeline | null | completed | null | null | false | [
"found a solution here https://discuss.pytorch.org/t/train-simultaneously-on-two-datasets/649/35, closed for now, thanks "
] |
https://api.github.com/repos/huggingface/datasets/issues/397 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/397/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/397/comments | https://api.github.com/repos/huggingface/datasets/issues/397/events | https://github.com/huggingface/datasets/pull/397 | 657,510,856 | MDExOlB1bGxSZXF1ZXN0NDQ5NjE1MDA4 | 397 | Add contiguous sharding | [] | closed | false | null | 0 | 2020-07-15T17:02:58Z | 2020-07-17T16:59:31Z | 2020-07-17T16:59:31Z | null | This makes dset.shard() play nice with nlp.concatenate_datasets(). When I originally wrote the shard() method, I was thinking about a distributed training scenario, but https://github.com/huggingface/nlp/pull/389 also uses it for splitting the dataset for distributed preprocessing.
Usage:
```
nlp.concatenate_datasets([dset.shard(n, i, contiguous=True) for i in range(n)])
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/397/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/397/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/397.diff",
"html_url": "https://github.com/huggingface/datasets/pull/397",
"merged_at": "2020-07-17T16:59:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/397.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/397"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4522 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4522/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4522/comments | https://api.github.com/repos/huggingface/datasets/issues/4522/events | https://github.com/huggingface/datasets/issues/4522 | 1,274,929,328 | I_kwDODunzps5L_eCw | 4,522 | Try to reduce the number of datasets that require manual download | [] | open | false | null | 0 | 2022-06-17T11:42:03Z | 2022-06-17T11:52:48Z | null | null | > Currently, 41 canonical datasets require manual download. I checked their scripts and I'm pretty sure this number can be reduced to ≈ 30 by not relying on bash scripts to download data, hosting data directly on the Hub when the license permits, etc. Then, we will mostly be left with datasets with restricted access, which we can ignore
from https://github.com/huggingface/datasets-server/issues/12#issuecomment-1026920432 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4522/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4522/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/4714 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4714/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4714/comments | https://api.github.com/repos/huggingface/datasets/issues/4714/events | https://github.com/huggingface/datasets/pull/4714 | 1,309,265,682 | PR_kwDODunzps47o0YG | 4,714 | Fix named split sorting and remove unnecessary casting | [] | closed | false | null | 3 | 2022-07-19T09:48:28Z | 2022-07-22T09:39:45Z | 2022-07-22T09:10:57Z | null | This PR:
- makes `NamedSplit` sortable: so that `sorted()` can be called on them
- removes unnecessary `sorted()` on `dict.keys()`: `dict_keys` view is already like a `set`
- removes unnecessary casting of `NamedSplit` to `str` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4714/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4714/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4714.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4714",
"merged_at": "2022-07-22T09:10:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4714.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4714"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"hahaha what a timing, I added my comment right after you merged x)\r\n\r\nyou can ignore my (nit), it's fine",
"Sorry, just too sync... :sweat_smile: "
] |
https://api.github.com/repos/huggingface/datasets/issues/5082 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5082/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5082/comments | https://api.github.com/repos/huggingface/datasets/issues/5082/events | https://github.com/huggingface/datasets/pull/5082 | 1,399,379,777 | PR_kwDODunzps5ATJv- | 5,082 | adding keep in memory | [] | closed | false | null | 2 | 2022-10-06T11:10:46Z | 2022-10-07T14:35:34Z | 2022-10-07T14:32:54Z | null | Fixing #514 .
Hello @mariosasko 👋, I have implemented what you have recommended to fix the keep-in-memory problem for shuffle in issue #514. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5082/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5082/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5082.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5082",
"merged_at": "2022-10-07T14:32:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5082.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5082"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @mariosasko , I have added a test for the `keep_in_memory` version. I have also removed the `Compatible with temp_seed` part in the scope of `dset_shuffled`, please verify if that makes sense."
] |
https://api.github.com/repos/huggingface/datasets/issues/3939 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3939/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3939/comments | https://api.github.com/repos/huggingface/datasets/issues/3939/events | https://github.com/huggingface/datasets/issues/3939 | 1,170,882,331 | I_kwDODunzps5Fyj8b | 3,939 | Source links broken | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 8 | 2022-03-16T11:17:47Z | 2022-03-19T04:41:32Z | 2022-03-19T04:41:32Z | null | ## Describe the bug
The source links of v2.0.0 docs are broken:
For example, clicking the source button of this [class](https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasets.ClassLabel) will direct users to `https://github.com/huggingface/datasets/blob/v2.0.0/src/datasets/features/features.py#L747`.
Here, the `v2.0.0` should be `2.0.0`.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
```
## Expected results
Redirecting to this link: `https://github.com/huggingface/datasets/blob/2.0.0/src/datasets/features/features.py#L747`
## Actual results
Described above.
## Environment info
- `datasets` version:
- Platform:
- Python version:
- PyArrow version:
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3939/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3939/timeline | null | completed | null | null | false | [
"Thanks for reporting @qqaatw.\r\n\r\n@mishig25 @sgugger do you think this can be tweaked in the new doc framework?\r\n- From: https://github.com/huggingface/datasets/blob/v2.0.0/\r\n- To: https://github.com/huggingface/datasets/blob/2.0.0/",
"@qqaatw thanks a lot for notifying about this issue!\r\n\r\nin comparison, transformers tags start with `v` like [this one](https://github.com/huggingface/transformers/blob/v4.17.0/src/transformers/models/bert/configuration_bert.py#L54).\r\n\r\nTherefore, we have to do one of 2 options below:\r\n1. Make necessary changes on doc-builder side\r\nOR\r\n2. Make [datasets tags](https://github.com/huggingface/datasets/tags) start with `v`, just like [transformers](https://github.com/huggingface/transformers/tags) (so that tag naming can be consistent amongst hf repos)\r\n\r\nI'll let you decide @albertvillanova @lhoestq @sgugger ",
"I think option 2 is the easiest and would provide harmony in the HF ecosystem but we can also add a doc config parameter to decide whether the default version has a v or not if `datasets` folks prefer their tags without a v :-)",
"For me it is OK to conform to the rest of libraries and tag/release with a preceding \"v\", rather than adding an extra argument to the doc builder just for `datasets`.\r\n\r\nLet me know if it is also OK for you @lhoestq. ",
"https://github.com/huggingface/doc-build/commit/f41c1e8ff900724213af4c75d287d8b61ecf6141\r\n\r\nhotfix so that `datasets` docs source button works correctly on hf.co/docs/datasets",
"We could add a tag for each release without a 'v' but it could be confusing on github to see both tags `v2.0.0` and `2.0.0` IMO (not sure if many users check them though). Removing the tags without 'v' would break our versioning for github datasets: the library looks for dataset scripts at the URLs like `https://raw.githubusercontent.com/huggingface/datasets/{revision}/datasets/{path}/{name}` where `revision` is equal to `datasets.__version__` (which doesn't start with a 'v') for all released versions of `datasets`.\r\n\r\nI think we could just have a parameter for the documentation - and having different URLs schemes for the source links that the users don't even see (they simply click on a button) is probably fine",
"This is done in #3943 to go along with [doc-builder#146](https://github.com/huggingface/doc-builder/pull/146).\r\n\r\nNote that this will only work for future versions, so once those two are merged, the actual v2.0.0 doc should be fixed. The easiest is to cherry-pick this commit on the v2.0.0 release branch (or on a new branch created from the 2.0.0 tag, with a name that triggers the doc building job, for instance v2.0.0-release)",
"Thanks for fixing @sgugger."
] |
https://api.github.com/repos/huggingface/datasets/issues/3454 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3454/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3454/comments | https://api.github.com/repos/huggingface/datasets/issues/3454/events | https://github.com/huggingface/datasets/pull/3454 | 1,084,519,107 | PR_kwDODunzps4wENam | 3,454 | Fix iter_archive generator | [] | closed | false | null | 0 | 2021-12-20T08:50:15Z | 2021-12-20T10:05:00Z | 2021-12-20T10:04:59Z | null | This PR:
- Adds tests to DownloadManager and StreamingDownloadManager `iter_archive` for both path and file inputs
- Fixes bugs in `iter_archive` introduced in:
- #3443
Fix #3453. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3454/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3454/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3454.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3454",
"merged_at": "2021-12-20T10:04:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3454.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3454"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5965 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5965/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5965/comments | https://api.github.com/repos/huggingface/datasets/issues/5965/events | https://github.com/huggingface/datasets/issues/5965 | 1,763,648,540 | I_kwDODunzps5pHyQc | 5,965 | "Couldn't cast array of type" in complex datasets | [] | closed | false | null | 4 | 2023-06-19T14:16:14Z | 2023-07-26T15:13:53Z | 2023-07-26T15:13:53Z | null | ### Describe the bug
When doing a map over a dataset with complex types, `datasets` is sometimes unable to infer a valid schema for the values returned by a `datasets.map()` function. This often comes from conflicting types, like when both empty lists and filled lists are competing for the same field value.
This is prone to happen in batch mapping, when the mapper returns a sequence of null/empty values and other batches are non-null. A workaround is to manually cast the new batch to a pyarrow table (like implemented in this [workaround](https://github.com/piercefreeman/lassen/pull/3)) but it feels like this ideally should be solved at the core library level.
Note that the reproduction case only throws this error if the first datapoint has the empty list. If it is processed later, datasets already detects its representation as list-type and therefore allows the empty list to be provided.
### Steps to reproduce the bug
A trivial reproduction case:
```python
from typing import Any, Iterator

import pandas as pd
import pytest

from datasets import Dataset


def batch_to_examples(batch: dict[str, list[Any]]) -> Iterator[dict[str, Any]]:
    # All columns in a batch have the same length, so take it from any of them.
    lengths = {len(values) for values in batch.values()}
    for i in range(next(iter(lengths))):
        yield {feature: values[i] for feature, values in batch.items()}


def examples_to_batch(examples) -> dict[str, list[Any]]:
    batch = {}
    for example in examples:
        for feature, value in example.items():
            if feature not in batch:
                batch[feature] = []
            batch[feature].append(value)
    return batch


def batch_process(examples):
    new_examples = []
    for example in batch_to_examples(examples):
        new_examples.append(dict(texts=example["raw_text"].split()))
    return examples_to_batch(new_examples)


df = pd.DataFrame(
    [
        {"raw_text": ""},
        {"raw_text": "This is a test"},
        {"raw_text": "This is another test"},
    ]
)

dataset = Dataset.from_pandas(df)

# datasets won't be able to typehint a dataset that starts with an empty example.
with pytest.raises(TypeError, match="Couldn't cast array of type"):
    dataset = dataset.map(
        batch_process,
        batched=True,
        batch_size=1,
        num_proc=1,
        remove_columns=dataset.column_names,
    )
```
This results in crashes like:
```bash
File "/Users/piercefreeman/Library/Caches/pypoetry/virtualenvs/example-9kBqeSPy-py3.11/lib/python3.11/site-packages/datasets/table.py", line 1819, in wrapper
return func(array, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/piercefreeman/Library/Caches/pypoetry/virtualenvs/example-9kBqeSPy-py3.11/lib/python3.11/site-packages/datasets/table.py", line 2109, in cast_array_to_feature
return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/piercefreeman/Library/Caches/pypoetry/virtualenvs/example-9kBqeSPy-py3.11/lib/python3.11/site-packages/datasets/table.py", line 1819, in wrapper
return func(array, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/piercefreeman/Library/Caches/pypoetry/virtualenvs/example-9kBqeSPy-py3.11/lib/python3.11/site-packages/datasets/table.py", line 1998, in array_cast
raise TypeError(f"Couldn't cast array of type {array.type} to {pa_type}")
TypeError: Couldn't cast array of type string to null
```
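For reference, a minimal sketch of the explicit-schema workaround mentioned above (reusing `dataset` and `batch_process` from the reproduction; the `features` mapping is an assumption matching what `batch_process` returns):
```python
import datasets

# Passing the target features explicitly sidesteps the failed type promotion
# when the first batch only produces empty lists.
dataset = dataset.map(
    batch_process,
    batched=True,
    batch_size=1,
    num_proc=1,
    remove_columns=dataset.column_names,
    features=datasets.Features({"texts": datasets.Sequence(datasets.Value("string"))}),
)
```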
### Expected behavior
The code should successfully map and create a new dataset without error.
### Environment info
Mac OSX, Linux | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5965/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5965/timeline | null | completed | null | null | false | [
"Thanks for reporting! \r\n\r\nSpecifying the target features explicitly should avoid this error:\r\n```python\r\ndataset = dataset.map(\r\n batch_process,\r\n batched=True,\r\n batch_size=1,\r\n num_proc=1,\r\n remove_columns=dataset.column_names,\r\n features=datasets.Features({\"texts\": datasets.Sequence(datasets.Value(\"string\"))})\r\n)\r\n```\r\n\r\nThis error stems from our type promotion not handling the nested case. But this promotion/casting allocates memory in most scenarios, which can be problematic for large datasets, so explicitly passing the features is the optimal solution.",
"Hi @mariosasko thanks for the context, this is helpful to know. Would it be worth having some logic to generate this explicit feature specification automatically if a type annotation for a .map returns a dataclass that can be inferred?\r\n\r\nFeels like something that would be easy to implement and could save memory / deal with this case in a standardized way.",
"> . Would it be worth having some logic to generate this explicit feature specification automatically if a type annotation for a .map returns a dataclass that can be inferred?\r\n\r\nInteresting proposal! Yes, we could consider doing this if the (return) type hint is `TypedDict`, and raise an error that type hints are incorrect if the cast using the inferred types fails.",
"@mariosasko Put up an initial PR to implement this proposal. Let me know your thoughts on direction and what else should be in-scope here."
] |
https://api.github.com/repos/huggingface/datasets/issues/589 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/589/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/589/comments | https://api.github.com/repos/huggingface/datasets/issues/589/events | https://github.com/huggingface/datasets/issues/589 | 696,488,447 | MDU6SXNzdWU2OTY0ODg0NDc= | 589 | Cannot use nlp.load_dataset text, AttributeError: module 'nlp.utils' has no attribute 'logging' | [] | closed | false | null | 0 | 2020-09-09T06:46:53Z | 2020-09-09T08:57:54Z | 2020-09-09T08:57:54Z | null |
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/nlp/load.py", line 533, in load_dataset
builder_cls = import_main_class(module_path, dataset=True)
File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/nlp/load.py", line 61, in import_main_class
module = importlib.import_module(module_path)
File "/root/anaconda3/envs/pytorch/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/nlp/datasets/text/5dc629379536c4037d9c2063e1caa829a1676cf795f8e030cd90a537eba20c08/text.py", line 9, in <module>
logger = nlp.utils.logging.get_logger(__name__)
AttributeError: module 'nlp.utils' has no attribute 'logging'
```
The error occurs with the following code, or with any code that calls `load_dataset('text')`:
```python
dataset = load_dataset("text", data_files=file_path, split="train")
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True,
                                           truncation=True, max_length=args.block_size), batched=True)
dataset.set_format(type='torch', columns=['input_ids'])
return dataset
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/589/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/589/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/3362 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3362/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3362/comments | https://api.github.com/repos/huggingface/datasets/issues/3362/events | https://github.com/huggingface/datasets/pull/3362 | 1,068,809,768 | PR_kwDODunzps4vRR2r | 3,362 | Adapt image datasets | [] | closed | false | null | 3 | 2021-12-01T19:52:01Z | 2021-12-09T18:37:42Z | 2021-12-09T18:37:41Z | null | This PR:
* adapts the ImageClassification template to use the new Image feature
* adapts the following datasets to use the new Image feature:
* beans (+ fixes streaming)
* cats_vs_dogs (+ fixes streaming)
* cifar10
* cifar100
* fashion_mnist
* mnist
* head_qa
cc @nateraw | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3362/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3362/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3362.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3362",
"merged_at": "2021-12-09T18:37:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3362.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3362"
} | true | [
"This PR can be merged after #3163 is merged (this PR is pretty big because I was working on the forked branch).\r\n\r\n@lhoestq @albertvillanova Could you please take a look at the changes in `src/datasets/utils/streaming_download_manager.py`? These changes were required to support streaming of the `cats_vs_dogs` and the `beans` datasets.",
"The CI failures are due to the missing fields in the README files.",
"and thanks for adding support for Path.name and Path.parent for streaming :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/2161 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2161/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2161/comments | https://api.github.com/repos/huggingface/datasets/issues/2161/events | https://github.com/huggingface/datasets/issues/2161 | 849,127,041 | MDU6SXNzdWU4NDkxMjcwNDE= | 2,161 | any possibility to download part of large datasets only? | [] | closed | false | null | 6 | 2021-04-02T10:06:46Z | 2022-10-05T13:26:51Z | 2022-10-05T13:26:51Z | null | Hi
Some of the datasets I need, like cc100, are very large, so I wonder if I can download only the first X samples of the shuffled/unshuffled data without first downloading the whole dataset and then sampling? Thanks | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2161/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2161/timeline | null | completed | null | null | false | [
"Not yet but it’s on the short/mid-term roadmap (requested by many indeed).",
"oh, great, really awesome feature to have, thank you very much for the great, fabulous work",
"We'll work on dataset streaming soon. This should allow you to only load the examples you need ;)",
"thanks a lot Quentin, this would be really really a great feature to have\n\nOn Wed, Apr 7, 2021 at 12:14 PM Quentin Lhoest ***@***.***>\nwrote:\n\n> We'll work on dataset streaming soon. This should allow you to only load\n> the examples you need ;)\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/2161#issuecomment-814791922>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AS37NMROD62QAKIJMAKWISTTHQWBVANCNFSM42IUI5JQ>\n> .\n>\n",
"Is streaming completed? On the 1.8.0 docs it is mentioned (https://huggingface.co/docs/datasets/dataset_streaming.html), but when following the example I get the following error:\r\n\r\n```\r\n>>> dataset2 = load_dataset(\"amazon_us_reviews\", \"Pet_Products_v1_00\", split='train', streaming=True)\r\n\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n<ipython-input-21-1eedab26cff1> in <module>()\r\n----> 1 en_dataset = load_dataset('oscar', \"unshuffled_deduplicated_en\", split='train', streaming=True)\r\n\r\n3 frames\r\n/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _create_builder_config(self, name, custom_features, **config_kwargs)\r\n 339 if value is not None:\r\n 340 if not hasattr(builder_config, key):\r\n--> 341 raise ValueError(f\"BuilderConfig {builder_config} doesn't have a '{key}' key.\")\r\n 342 setattr(builder_config, key, value)\r\n 343 \r\n\r\nValueError: BuilderConfig OscarConfig(name='unshuffled_deduplicated_en', version=1.0.0, data_dir=None, data_files=None, description='Unshuffled and deduplicated, English OSCAR dataset') doesn't have a 'streaming' key.\r\n```\r\n\r\nUPDATE: Managed to get streaming working by building from source and installing the additional `datasets[streaming]` package:\r\n\r\n```\r\n!pip install git+https://github.com/huggingface/datasets.git\r\n!pip install datasets[streaming]\r\n```",
"Hi ! Streaming is available on `master` only right now. We'll make a new release 1.9.0 on Monday :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/1642 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1642/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1642/comments | https://api.github.com/repos/huggingface/datasets/issues/1642/events | https://github.com/huggingface/datasets/pull/1642 | 775,159,568 | MDExOlB1bGxSZXF1ZXN0NTQ1ODk1MzY1 | 1,642 | Ollie dataset | [] | closed | false | null | 0 | 2020-12-28T02:43:37Z | 2021-01-04T13:35:25Z | 2021-01-04T13:35:24Z | null | This is the dataset used to train the Ollie open information extraction algorithm. It has over 21M sentences. See http://knowitall.github.io/ollie/ for more details. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1642/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1642/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1642.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1642",
"merged_at": "2021-01-04T13:35:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1642.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1642"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3546 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3546/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3546/comments | https://api.github.com/repos/huggingface/datasets/issues/3546/events | https://github.com/huggingface/datasets/pull/3546 | 1,096,367,684 | PR_kwDODunzps4wqYIV | 3,546 | Remove print statements in datasets | [] | closed | false | null | 1 | 2022-01-07T14:30:24Z | 2022-01-07T18:09:16Z | 2022-01-07T18:09:15Z | null | This is a second time I'm removing print statements in our datasets, so I've added a test to avoid these issues in the future. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3546/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3546/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3546.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3546",
"merged_at": "2022-01-07T18:09:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3546.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3546"
} | true | [
"The CI failures are unrelated to the changes."
] |
https://api.github.com/repos/huggingface/datasets/issues/877 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/877/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/877/comments | https://api.github.com/repos/huggingface/datasets/issues/877/events | https://github.com/huggingface/datasets/issues/877 | 748,234,438 | MDU6SXNzdWU3NDgyMzQ0Mzg= | 877 | DataLoader(datasets) become more and more slowly within iterations | [] | closed | false | null | 2 | 2020-11-22T12:41:10Z | 2020-11-29T15:45:12Z | 2020-11-29T15:45:12Z | null | Hello, when I for loop my dataloader, the loading speed is becoming more and more slowly!
```
from datasets import load_from_disk
from torch.utils.data import DataLoader
from tqdm import tqdm

dataset = load_from_disk(dataset_path)  # around 21,000,000 lines
lineloader = tqdm(DataLoader(dataset, batch_size=1))
for idx, line in enumerate(lineloader):
    ...  # do something for each line
```
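A minimal diagnostic sketch (hypothetical helper, not part of the original report) to check whether the slowdown comes from the `DataLoader` or from the dataset itself, as suggested in the comments below:
```python
import time

from datasets import load_from_disk
from torch.utils.data import DataLoader

dataset = load_from_disk(dataset_path)  # same dataset_path as above


def print_throughput(iterable, report_every=100_000):
    # Print the running throughput so a slowdown over time is easy to spot.
    start = time.time()
    for i, _ in enumerate(iterable, 1):
        if i % report_every == 0:
            print(f"{i} items, {i / (time.time() - start):.0f} it/s")


print_throughput(dataset)                            # raw dataset, no dataloader
print_throughput(DataLoader(dataset, batch_size=1))  # through the DataLoader
```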
In the beginning, the loading speed is around 2000 it/s, but about a minute later the speed is much slower, just around 800 it/s.
And when I set `num_workers=4` in the DataLoader, the loading speed is even lower, just 130 it/s.
Could you please help me with this problem?
Thanks a lot! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/877/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/877/timeline | null | completed | null | null | false | [
"Hi ! Thanks for reporting.\r\nDo you have the same slowdown when you iterate through the raw dataset object as well ? (no dataloader)\r\nIt would be nice to know whether it comes from the dataloader or not",
"> Hi ! Thanks for reporting.\r\n> Do you have the same slowdown when you iterate through the raw dataset object as well ? (no dataloader)\r\n> It would be nice to know whether it comes from the dataloader or not\r\n\r\nI did not iter data from raw dataset, maybe I will test later. Now I iter all files directly from `open(file)`, around 20000it/s."
] |
https://api.github.com/repos/huggingface/datasets/issues/4783 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4783/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4783/comments | https://api.github.com/repos/huggingface/datasets/issues/4783/events | https://github.com/huggingface/datasets/pull/4783 | 1,326,375,011 | PR_kwDODunzps48iHey | 4,783 | Docs for creating a loading script for image datasets | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | 7 | 2022-08-02T20:36:03Z | 2022-09-09T17:08:14Z | 2022-09-07T19:07:34Z | null | This PR is a first draft of creating a loading script for image datasets. Feel free to let me know if there are any specificities I'm missing for this. 🙂
To do:
- [x] Document how to create different configurations. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4783/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4783/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4783.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4783",
"merged_at": "2022-09-07T19:07:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4783.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4783"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"IMO it would make more sense to add a \"Create image dataset\" page with two main sections - a no-code approach with `imagefolder` + metadata (preferred way), and with a loading script (advanced). It should be clear when to choose which. If we leave this as-is, the user who jumps straight to the Vision section could be under the impression that writing a loading script is the preferred way to share a vision dataset due to how this subsection starts:\r\n```\r\nWrite a dataset loading script to share a dataset.\r\n```\r\n \r\nAlso, I think a note explaining how to make a dataset gated/disable the viewer to hide the data would be beneficial (it's pretty common to require submitting a form to access a CV dataset).",
"Great suggestion @mariosasko! I added your suggestions, let me know what you think. For gated dataset access, I just added a tip referring users to the relevant docs since it's more of a Hub feature than `datasets` feature.",
"Thanks, looks much better now :). I would also move the sections explaining how to create an `imagefolder` for the specific task from the [loading page](https://raw.githubusercontent.com/huggingface/datasets/main/docs/source/image_load.mdx) to this one. IMO it makes more sense to have the basic info (imagefolder structure + `load_dataset` call) there + a link to this page for info on how to create an image folder dataset.",
"Good idea! Moved everything about `imagefolder` + metadata to the create an image dataset section since the `load_dataset` call is the same for different computer vision tasks. ",
"Thanks for all the feedbacks! 🥰\r\n\r\nWhat do you think about creating how to share an `ImageFolder` dataset in a separate PR? I think we should create a new section under `Vision` for how to share an image dataset.",
"I love it thanks ! I think moving forward we can use CSV instead of JSON Lines in the docs ;)"
] |
https://api.github.com/repos/huggingface/datasets/issues/2882 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2882/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2882/comments | https://api.github.com/repos/huggingface/datasets/issues/2882/events | https://github.com/huggingface/datasets/issues/2882 | 991,800,141 | MDU6SXNzdWU5OTE4MDAxNDE= | 2,882 | `load_dataset('docred')` results in a `NonMatchingChecksumError` | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-09-09T05:55:02Z | 2021-09-13T11:24:30Z | 2021-09-13T11:24:30Z | null | ## Describe the bug
I get consistent `NonMatchingChecksumError: Checksums didn't match for dataset source files` errors when trying to execute `datasets.load_dataset('docred')`.
## Steps to reproduce the bug
It takes essentially just this code:
```python
import datasets
data = datasets.load_dataset('docred')
```
## Expected results
The DocRED dataset should be loaded without any problems.
## Actual results
```
NonMatchingChecksumError Traceback (most recent call last)
<ipython-input-4-b1b83f25a16c> in <module>
----> 1 d = datasets.load_dataset('docred')
~/anaconda3/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs)
845
846 # Download and prepare data
--> 847 builder_instance.download_and_prepare(
848 download_config=download_config,
849 download_mode=download_mode,
~/anaconda3/lib/python3.8/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
613 logger.warning("HF google storage unreachable. Downloading and preparing it from source")
614 if not downloaded_from_gcs:
--> 615 self._download_and_prepare(
616 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
617 )
~/anaconda3/lib/python3.8/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
673 # Checksums verification
674 if verify_infos:
--> 675 verify_checksums(
676 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
677 )
~/anaconda3/lib/python3.8/site-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
38 if len(bad_urls) > 0:
39 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls))
41 logger.info("All the checksums matched successfully" + for_verification_name)
42
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=1fDmfUUo5G7gfaoqWWvK81u08m71TK2g7']
```
## Environment info
- `datasets` version: 1.11.0
- Platform: Linux-5.11.0-7633-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyArrow version: 5.0.0
This error also happened on my Windows-partition, after freshly installing python 3.9 and `datasets`.
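For reference (and as noted in the maintainer comment below), checksum verification can normally be bypassed while a fix lands, although in this particular case the old Google Drive link itself is dead, so the bypass alone does not help. A minimal sketch of that bypass for the `datasets` version reported above:
```python
import datasets

# Sketch only: skips checksum verification; it does not fix a dead download link.
data = datasets.load_dataset('docred', ignore_verifications=True)
```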
## Remarks
- I have already called `rm -rf /home/<user>/.cache/huggingface`, i.e., I have tried clearing the cache.
- The problem does not exist for other datasets, i.e., it seems to be DocRED-specific. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2882/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2882/timeline | null | completed | null | null | false | [
"Hi @tmpr, thanks for reporting.\r\n\r\nTwo weeks ago (23th Aug), the host of the source `docred` dataset updated one of the files (`dev.json`): you can see it [here](https://drive.google.com/drive/folders/1c5-0YwnoJx8NS6CV2f-NoTHR__BdkNqw).\r\n\r\nTherefore, the checksum needs to be updated.\r\n\r\nNormally, in the meantime, you could avoid the error by passing `ignore_verifications=True` to `load_dataset`. However, as the old link points to a non-existing file, the link must be updated too.\r\n\r\nI'm fixing all this.\r\n\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/6071 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6071/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6071/comments | https://api.github.com/repos/huggingface/datasets/issues/6071/events | https://github.com/huggingface/datasets/issues/6071 | 1,821,990,749 | I_kwDODunzps5smV9d | 6,071 | storage_options provided to load_dataset not fully piping through since datasets 2.14.0 | [] | closed | false | null | 2 | 2023-07-26T09:37:20Z | 2023-07-27T12:42:58Z | 2023-07-27T12:42:58Z | null | ### Describe the bug
Since the latest release of `datasets` (`2.14.0`), custom filesystem `storage_options` passed to `load_dataset()` do not seem to propagate through all the way - leading to problems if loading data files that need those options to be set.
I think this is because of the new `_prepare_path_and_storage_options()` (https://github.com/huggingface/datasets/pull/6028), which returns the right `storage_options` to use given a path and a `DownloadConfig` - but which might not be taking into account the extra `storage_options` explicitly provided e.g. through `load_dataset()`
### Steps to reproduce the bug
```python
import fsspec
import pandas as pd
import datasets
# Generate mock parquet file
data_files = "demo.parquet"
pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]}).to_parquet(data_files)
_storage_options = {"x": 1, "y": 2}
fs = fsspec.filesystem("file", **_storage_options)
dataset = datasets.load_dataset(
"parquet",
data_files=data_files,
storage_options=fs.storage_options
)
```
Looking at the `storage_options` resolved here:
https://github.com/huggingface/datasets/blob/b0177910b32712f28d147879395e511207e39958/src/datasets/data_files.py#L331
they end up being `{}`, instead of propagating through the `storage_options` that were provided to `load_dataset` (`fs.storage_options`). As these then get used for the filesystem operation a few lines below
https://github.com/huggingface/datasets/blob/b0177910b32712f28d147879395e511207e39958/src/datasets/data_files.py#L339
the call will fail if the user-provided `storage_options` were needed.
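(For context, the comments below describe the kind of custom filesystem where this matters: a pyarrow S3 filesystem wrapped to be fsspec-compatible. A rough sketch of such a setup, with a placeholder endpoint:)
```python
from fsspec.implementations.arrow import ArrowFSWrapper
from pyarrow.fs import S3FileSystem

# Placeholder endpoint: e.g. a local mock S3 service used in tests.
pa_fs = S3FileSystem(endpoint_override="http://localhost:9000")
fs = ArrowFSWrapper(pa_fs)
```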
---
A temporary workaround that seemed to work locally to bypass the problem was to bundle a duplicate of the `storage_options` into the `download_config`, so that they make their way all the way to `_prepare_path_and_storage_options()` and get extracted correctly:
```python
dataset = datasets.load_dataset(
"parquet",
data_files=data_files,
storage_options=fs.storage_options,
download_config=datasets.DownloadConfig(storage_options={fs.protocol: fs.storage_options}),
)
```
### Expected behavior
`storage_options` provided to `load_dataset` take effect in all backend filesystem operations.
### Environment info
datasets==2.14.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6071/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6071/timeline | null | completed | null | null | false | [
"Hi ! Thanks for reporting, I opened a PR to fix this\r\n\r\nWhat filesystem are you using ?",
"Hi @lhoestq ! Thank you so much 🙌 \r\n\r\nIt's a bit of a custom setup, but in practice I am using a [pyarrow.fs.S3FileSystem](https://arrow.apache.org/docs/python/generated/pyarrow.fs.S3FileSystem.html) (wrapped in a `fsspec.implementations.arrow.ArrowFSWrapper` [to make it](https://arrow.apache.org/docs/python/filesystems.html#using-arrow-filesystems-with-fsspec) `fsspec` compatible). I also register it as an entrypoint with `fsspec` so that it's the one that gets automatically resolved when looking for filesystems for the `s3` protocol\r\n\r\nIn my case the `storage_option` that seemed not getting piped through was the filesystem's `endpoint_override` that I use in some tests to point at a mock S3 bucket"
] |
https://api.github.com/repos/huggingface/datasets/issues/3005 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3005/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3005/comments | https://api.github.com/repos/huggingface/datasets/issues/3005/events | https://github.com/huggingface/datasets/issues/3005 | 1,014,615,420 | I_kwDODunzps48ec18 | 3,005 | DatasetDict.filter and Dataset.filter crashes with any "fn_kwargs" argument | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2021-10-04T00:49:29Z | 2021-10-11T10:18:01Z | 2021-10-04T08:46:13Z | null | ## Describe the bug
The ".filter" method of DatasetDict or Dataset objects fails when passing any "fn_kwargs" argument
## Steps to reproduce the bug
```python
import datasets

example_dataset = datasets.Dataset.from_dict({"a": [1, 2, 3, 4]})

def filter_value(example, value):
    return example['a'] == value

filtered = example_dataset.filter(filter_value, fn_kwargs={'value': 3})
```
## Expected results
`filtered` is a dataset containing {"a": [3]}
## Actual results
> Traceback (most recent call last):
> File "C:\Users\qsemi\Documents\git\nlp_experiments\gpt_celebrity\src\test_faulty_filter.py", line 8, in <module>
> filtered = example_dataset.filter(filter_value, fn_kwargs={'value': 3})
> File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\arrow_dataset.py", line 185, in wrapper
> out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
> File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\fingerprint.py", line 398, in wrapper
> out = func(self, *args, **kwargs)
> File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\arrow_dataset.py", line 2169, in filter
> indices = self.map(
> File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\arrow_dataset.py", line 1686, in map
> return self._map_single(
> File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\arrow_dataset.py", line 185, in wrapper
> out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
> File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\fingerprint.py", line 398, in wrapper
> out = func(self, *args, **kwargs)
> File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\arrow_dataset.py", line 2048, in _map_single
> batch = apply_function_on_filtered_inputs(
> File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\arrow_dataset.py", line 1939, in apply_function_on_filtered_inputs
> function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
> TypeError: get_indices_from_mask_function() got an unexpected keyword argument 'value'
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.1
- Platform: Windows-10-10.0.19042-SP0
- Python version: 3.9.7
- PyArrow version: 5.0.0
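Until the fix lands (see the comments below about installing from `master`), a hedged workaround sketch is to bind the extra argument yourself instead of passing `fn_kwargs`:
```python
from functools import partial

# Workaround sketch: pre-bind `value` so that `filter` only sees a
# one-argument callable and no `fn_kwargs` is needed.
filtered = example_dataset.filter(partial(filter_value, value=3))
# or equivalently:
filtered = example_dataset.filter(lambda example: filter_value(example, 3))
```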
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3005/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3005/timeline | null | completed | null | null | false | [
"Hi @DrMatters, thanks for reporting.\r\n\r\nThis issue was fixed 14 days ago: #2950.\r\n\r\nCurrently, the fix is only in the master branch and will be made available in our next library release.\r\n\r\nIn the meantime, you can incorporate the fix by installing datasets from the master branch:\r\n```shell\r\npip install -U git+ssh://git@github.com/huggingface/datasets.git@master#egg=datasest\r\n```\r\nor\r\n```shell\r\npip install -U git+https://github.com/huggingface/datasets.git@master#egg=datasets\r\n```",
"Thanks, sorry for bothering"
] |
https://api.github.com/repos/huggingface/datasets/issues/4164 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4164/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4164/comments | https://api.github.com/repos/huggingface/datasets/issues/4164/events | https://github.com/huggingface/datasets/pull/4164 | 1,203,661,346 | PR_kwDODunzps42MfxX | 4,164 | Fix duplicate key in multi_news | [] | closed | false | null | 1 | 2022-04-13T18:48:24Z | 2022-04-13T21:04:16Z | 2022-04-13T20:58:02Z | null | To merge after this job succeeded: https://github.com/huggingface/datasets/runs/6012207928 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4164/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4164/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4164.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4164",
"merged_at": "2022-04-13T20:58:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4164.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4164"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/1914 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1914/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1914/comments | https://api.github.com/repos/huggingface/datasets/issues/1914/events | https://github.com/huggingface/datasets/pull/1914 | 812,149,201 | MDExOlB1bGxSZXF1ZXN0NTc2NTYyNTkz | 1,914 | Fix logging imports and make all datasets use library logger | [] | closed | false | null | 0 | 2021-02-19T16:12:34Z | 2021-02-21T19:48:03Z | 2021-02-21T19:48:03Z | null | Fix library relative logging imports and make all datasets use library logger. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1914/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1914/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1914.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1914",
"merged_at": "2021-02-21T19:48:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1914.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1914"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1325 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1325/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1325/comments | https://api.github.com/repos/huggingface/datasets/issues/1325/events | https://github.com/huggingface/datasets/pull/1325 | 759,595,556 | MDExOlB1bGxSZXF1ZXN0NTM0NTczNjM2 | 1,325 | Add humicroedit dataset | [] | closed | false | null | 2 | 2020-12-08T16:35:46Z | 2020-12-17T17:59:09Z | 2020-12-17T17:59:09Z | null | Pull request for adding humicroedit dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1325/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1325/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1325.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1325",
"merged_at": "2020-12-17T17:59:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1325.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1325"
} | true | [
"Updated the commit with the generated yaml tags",
"merging since the CI is fixed on master"
] |
https://api.github.com/repos/huggingface/datasets/issues/4366 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4366/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4366/comments | https://api.github.com/repos/huggingface/datasets/issues/4366/events | https://github.com/huggingface/datasets/issues/4366 | 1,239,534,165 | I_kwDODunzps5J4cpV | 4,366 | TypeError: __init__() missing 1 required positional argument: 'scheme' | [
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
}
] | closed | false | null | 1 | 2022-05-18T07:17:29Z | 2022-05-18T16:36:22Z | 2022-05-18T16:36:21Z | null | "name" : "node-1",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "",
"version" : {
"number" : "7.5.0",
"build_flavor" : "default",
"build_type" : "tar",
"build_hash" : "",
"build_date" : "2019-11-26T01:06:52.518245Z",
"build_snapshot" : false,
"lucene_version" : "8.3.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
when I run the command:
nohup python3 custom_service.pyc > service.log 2>&1&
the log:
nohup: ignoring input
Traceback (most recent call last):
File "/home/xfz/p3_custom_test/custom_service.py", line 55, in <module>
File "/home/xfz/p3_custom_test/custom_service.py", line 48, in doInitialize
File "custom_impl.py", line 286, in custom_setup
File "custom_impl.py", line 127, in create_es_index
File "/usr/local/lib/python3.7/site-packages/elasticsearch/_sync/client/__init__.py", line 345, in __init__
ssl_show_warn=ssl_show_warn,
File "/usr/local/lib/python3.7/site-packages/elasticsearch/_sync/client/utils.py", line 105, in client_node_configs
node_configs = hosts_to_node_configs(hosts)
File "/usr/local/lib/python3.7/site-packages/elasticsearch/_sync/client/utils.py", line 154, in hosts_to_node_configs
node_configs.append(host_mapping_to_node_config(host))
File "/usr/local/lib/python3.7/site-packages/elasticsearch/_sync/client/utils.py", line 221, in host_mapping_to_node_config
return NodeConfig(**options) # type: ignore
TypeError: __init__() missing 1 required positional argument: 'scheme'
[1]+ Exit 1 nohup python3 custom_service.pyc > service.log 2>&1
custom_service.pyc can't run
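For context, this traceback comes from recent versions of the `elasticsearch` Python client, which require every host specification to include a scheme. A hedged sketch (host and port are placeholders, not taken from this report):
```python
from elasticsearch import Elasticsearch

# Newer clients want the scheme as part of the host, either as a full URL...
es = Elasticsearch("http://localhost:9200")
# ...or as an explicit "scheme" key in the host mapping.
es = Elasticsearch([{"host": "localhost", "port": 9200, "scheme": "http"}])
```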
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4366/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4366/timeline | null | completed | null | null | false | [
"Duplicate of:\r\n- #3956\r\n\r\nI think you should report that issue to `elasticsearch` library: https://github.com/elastic/elasticsearch-py"
] |
https://api.github.com/repos/huggingface/datasets/issues/1589 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1589/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1589/comments | https://api.github.com/repos/huggingface/datasets/issues/1589/events | https://github.com/huggingface/datasets/pull/1589 | 769,187,141 | MDExOlB1bGxSZXF1ZXN0NTQxMzcwMTM0 | 1,589 | Update doc2dial.py | [] | closed | false | null | 1 | 2020-12-16T18:50:56Z | 2022-07-06T15:19:57Z | 2022-07-06T15:19:57Z | null | Added data loader for machine reading comprehension tasks proposed in the Doc2Dial EMNLP 2020 paper. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1589/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1589/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1589.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1589",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1589.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1589"
} | true | [
"Thanks for adding the `doc2dial_rc` config :) \r\n\r\nIt looks like you're missing the dummy data for this config though. Could you add them please ?\r\nAlso to fix the CI you'll need to format the code with `make style`"
] |
https://api.github.com/repos/huggingface/datasets/issues/3019 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3019/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3019/comments | https://api.github.com/repos/huggingface/datasets/issues/3019/events | https://github.com/huggingface/datasets/pull/3019 | 1,015,339,983 | PR_kwDODunzps4speOB | 3,019 | Fix filter leaking | [] | closed | false | null | 0 | 2021-10-04T15:42:58Z | 2022-06-03T08:28:14Z | 2021-10-05T08:33:07Z | null | If filter is called after using a first transform `shuffle`, `select`, `shard`, `train_test_split`, or `filter`, then it could not work as expected and return examples from before the first transform. This is because the indices mapping was not taken into account when saving the indices to keep when doing the filtering
Affected versions: 1.12.0 and 1.12.1
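A minimal sketch of the affected pattern (illustrative only, not taken from the PR):
```python
from datasets import Dataset

ds = Dataset.from_dict({"x": [0, 1, 2, 3, 4]})
subset = ds.select([0, 1])  # first transform creates an indices mapping
kept = subset.filter(lambda example: example["x"] >= 0)
# Expected: only the two selected rows; on 1.12.0/1.12.1 the filter could ignore
# the indices mapping and return rows from before the `select`.
print(kept["x"])
```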
This should fix #3010 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3019/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3019/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3019.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3019",
"merged_at": "2021-10-05T08:33:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3019.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3019"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5153 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5153/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5153/comments | https://api.github.com/repos/huggingface/datasets/issues/5153/events | https://github.com/huggingface/datasets/issues/5153 | 1,420,833,457 | I_kwDODunzps5UsDKx | 5,153 | default Image/AudioFolder infers labels when there is no metadata files even if there is only one dir | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2022-10-24T13:28:18Z | 2022-11-15T16:31:10Z | 2022-11-15T16:31:09Z | null | ### Describe the bug
By default, FolderBasedBuilder infers labels if there are no metadata files, even when this is meaningless (for example, when all files are in a single directory or in the root folder; see this repo as an example: https://huggingface.co/datasets/patrickvonplaten/audios).
This is a corner case that comes up when quickly exploring images or audio on the Hub.
### Steps to reproduce the bug
If you have a directory like this:
```
repo
    image1.jpg
    image2.jpg
    image3.jpg
```
or
```
repo
    data
        image1.jpg
        image2.jpg
        image3.jpg
```
doing `ds = load_dataset(repo)` would create a `label` feature:
```python
print(ds["train"][0])
>> {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x375 at 0x7FB5326468E0>, 'label': 0}
```
Also, if you have the following structure:
```
repo
    data
        image1.jpg
        image2.jpg
        image3.jpg
    image4.jpg
    image5.jpg
    image6.jpg
```
it will infer two labels:
```python
print(ds["train"][0])
print(ds["train"][-1])
>> {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x375 at 0x7FB5326468E0>, 'label': 1}
>> {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x415 at 0x7FB5326555B0>, 'label': 0}
```
### Expected behavior
We should have only one base feature (Image/Audio) in such cases.
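A minimal sketch of the kind of check this implies (hypothetical helper, not the library's implementation): only treat parent-directory names as labels when there is more than one distinct parent and all files sit at the same depth.
```python
import os


def labels_are_meaningful(file_paths):
    # Hypothetical check: parent directory names only make sense as labels if
    # there are at least two distinct parents and every file sits at the same depth.
    parents = {os.path.dirname(path) for path in file_paths}
    depths = {path.count(os.sep) for path in file_paths}
    return len(parents) > 1 and len(depths) == 1


print(labels_are_meaningful(["data/image1.jpg", "data/image2.jpg"]))  # False -> keep only the Image/Audio feature
print(labels_are_meaningful(["cats/image1.jpg", "dogs/image2.jpg"]))  # True  -> label inference makes sense
```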
### Environment info
all versions of `datasets` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5153/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5153/timeline | null | completed | null | null | false | [
"Makes sense! For the last structure, we could count the path segments (delimited by \"/\" for URLs and `os.sep` for local paths) to ensure all inferred labels are on the same level. Otherwise, I think it's safe to assume they are meaningless and ignore them.\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/3901 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3901/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3901/comments | https://api.github.com/repos/huggingface/datasets/issues/3901/events | https://github.com/huggingface/datasets/issues/3901 | 1,167,339,773 | I_kwDODunzps5FlDD9 | 3,901 | Dataset viewer issue for IndicParaphrase- the preview doesn't show | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | null | 1 | 2022-03-12T16:56:05Z | 2022-04-12T12:10:50Z | 2022-04-12T12:10:49Z | null | ## Dataset viewer issue for '*IndicParaphrase*'
**Link:** *[IndicParaphrase](https://huggingface.co/datasets/ai4bharat/IndicParaphrase/viewer/hi/validation)*
The preview of the dataset doesn't come up.
The error on the console is:
Status code: 400
Exception: FileNotFoundError
Message: [Errno 2] No such file or directory: '/home/hf/datasets-preview-backend/hi_IndicParaphrase_v1.0.tar'
Am I the one who added this dataset ? Yes
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3901/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3901/timeline | null | completed | null | null | false | [
"It seems to have been fixed:\r\n\r\n<img width=\"1534\" alt=\"Capture d’écran 2022-04-12 à 14 10 07\" src=\"https://user-images.githubusercontent.com/1676121/162959599-6b7fef7c-8411-4e03-8f00-90040a658079.png\">\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/6064 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6064/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6064/comments | https://api.github.com/repos/huggingface/datasets/issues/6064/events | https://github.com/huggingface/datasets/pull/6064 | 1,818,703,725 | PR_kwDODunzps5WPzAv | 6,064 | set dev version | [] | closed | false | null | 3 | 2023-07-24T15:56:00Z | 2023-07-24T16:05:19Z | 2023-07-24T15:56:10Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6064/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6064/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6064.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6064",
"merged_at": "2023-07-24T15:56:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6064.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6064"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6064). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006704 / 0.011353 (-0.004649) | 0.004208 / 0.011008 (-0.006800) | 0.085895 / 0.038508 (0.047387) | 0.079303 / 0.023109 (0.056193) | 0.353430 / 0.275898 (0.077532) | 0.390814 / 0.323480 (0.067334) | 0.006565 / 0.007986 (-0.001420) | 0.003588 / 0.004328 (-0.000740) | 0.065249 / 0.004250 (0.060999) | 0.059772 / 0.037052 (0.022720) | 0.356315 / 0.258489 (0.097826) | 0.404812 / 0.293841 (0.110971) | 0.031127 / 0.128546 (-0.097419) | 0.008656 / 0.075646 (-0.066991) | 0.288734 / 0.419271 (-0.130537) | 0.053157 / 0.043533 (0.009625) | 0.354651 / 0.255139 (0.099512) | 0.370590 / 0.283200 (0.087391) | 0.024944 / 0.141683 (-0.116738) | 1.472393 / 1.452155 (0.020238) | 1.548946 / 1.492716 (0.056229) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223430 / 0.018006 (0.205424) | 0.567359 / 0.000490 (0.566870) | 0.006744 / 0.000200 (0.006544) | 0.000094 / 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030174 / 0.037411 (-0.007237) | 0.084865 / 0.014526 (0.070339) | 0.098986 / 0.176557 (-0.077571) | 0.161458 / 0.737135 (-0.575678) | 0.099198 / 0.296338 (-0.197141) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.404324 / 0.215209 (0.189115) | 4.043744 / 2.077655 (1.966090) | 1.972834 / 1.504120 (0.468714) | 1.801634 / 1.541195 (0.260439) | 1.891198 / 1.468490 
(0.422708) | 0.488511 / 4.584777 (-4.096266) | 3.566890 / 3.745712 (-0.178823) | 3.369415 / 5.269862 (-1.900447) | 2.054995 / 4.565676 (-2.510682) | 0.057225 / 0.424275 (-0.367050) | 0.007360 / 0.007607 (-0.000247) | 0.471813 / 0.226044 (0.245769) | 4.734397 / 2.268929 (2.465468) | 2.526585 / 55.444624 (-52.918039) | 2.230535 / 6.876477 (-4.645942) | 2.434403 / 2.142072 (0.292330) | 0.630090 / 4.805227 (-4.175137) | 0.138544 / 6.500664 (-6.362120) | 0.060099 / 0.075469 (-0.015370) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.260951 / 1.841788 (-0.580837) | 20.051513 / 8.074308 (11.977204) | 14.675938 / 10.191392 (4.484546) | 0.169535 / 0.680424 (-0.510889) | 0.018574 / 0.534201 (-0.515627) | 0.394255 / 0.579283 (-0.185028) | 0.412713 / 0.434364 (-0.021651) | 0.475891 / 0.540337 (-0.064446) | 0.658223 / 1.386936 (-0.728713) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006969 / 0.011353 (-0.004384) | 0.004417 / 0.011008 (-0.006591) | 0.064399 / 0.038508 (0.025891) | 0.082928 / 0.023109 (0.059819) | 0.402285 / 0.275898 (0.126387) | 0.440032 / 0.323480 (0.116552) | 0.005896 / 0.007986 (-0.002090) | 0.003580 / 0.004328 (-0.000749) | 0.065340 / 0.004250 (0.061090) | 0.060363 / 0.037052 (0.023311) | 0.417413 / 0.258489 (0.158924) | 0.448527 / 0.293841 (0.154686) | 0.032238 / 0.128546 (-0.096308) | 0.008820 / 0.075646 (-0.066826) | 0.071516 / 0.419271 (-0.347755) | 0.050614 / 0.043533 (0.007081) | 0.406565 / 0.255139 (0.151426) | 0.422527 / 0.283200 (0.139328) | 0.025866 / 0.141683 (-0.115817) | 1.512256 / 1.452155 (0.060101) | 1.568433 / 1.492716 (0.075717) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.266521 / 0.018006 (0.248515) | 0.564524 / 0.000490 (0.564034) | 0.005236 / 0.000200 (0.005036) | 0.000085 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031998 / 0.037411 (-0.005413) | 0.090754 / 0.014526 (0.076229) | 0.105954 / 0.176557 (-0.070602) | 0.164506 / 0.737135 (-0.572629) | 0.108792 / 0.296338 (-0.187546) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422044 / 0.215209 (0.206835) | 4.204449 / 2.077655 (2.126795) | 2.232060 / 1.504120 (0.727940) | 2.060389 / 1.541195 (0.519194) | 2.152723 / 1.468490 (0.684233) | 0.488456 / 4.584777 (-4.096321) | 3.591102 / 3.745712 (-0.154611) | 5.250401 / 5.269862 (-0.019461) | 3.060259 / 4.565676 (-1.505417) | 0.057558 / 0.424275 (-0.366717) | 0.007881 / 0.007607 (0.000274) | 0.508631 / 0.226044 (0.282587) | 5.064857 / 2.268929 (2.795928) | 2.719068 / 55.444624 (-52.725556) | 2.389992 / 6.876477 (-4.486485) | 2.595073 / 2.142072 (0.453000) | 0.590179 / 4.805227 (-4.215048) | 0.136149 / 6.500664 (-6.364515) | 0.062546 / 0.075469 (-0.012923) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.369252 / 1.841788 (-0.472535) | 20.637580 / 8.074308 (12.563272) | 14.217129 / 10.191392 (4.025737) | 0.195464 / 0.680424 (-0.484960) | 0.018452 / 0.534201 (-0.515749) | 0.397044 / 0.579283 (-0.182239) | 0.401127 / 0.434364 (-0.033237) | 0.465033 / 0.540337 (-0.075305) | 0.613484 / 1.386936 (-0.773452) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006793 / 0.011353 (-0.004559) | 0.004374 / 0.011008 (-0.006635) | 0.084958 / 0.038508 (0.046450) | 0.080440 / 0.023109 (0.057331) | 0.317951 / 0.275898 (0.042053) | 0.376133 / 0.323480 (0.052653) | 0.005775 / 0.007986 (-0.002211) | 0.003644 / 0.004328 (-0.000684) | 0.064823 / 0.004250 (0.060573) | 0.059442 / 0.037052 (0.022390) | 0.319636 / 0.258489 (0.061147) | 0.389668 / 0.293841 (0.095827) | 0.031181 / 0.128546 (-0.097365) | 0.008725 / 0.075646 (-0.066921) | 0.288514 / 0.419271 (-0.130757) | 0.053466 / 0.043533 (0.009933) | 0.323131 / 0.255139 (0.067992) | 0.345276 / 0.283200 (0.062076) | 0.025046 / 0.141683 (-0.116637) | 1.491659 / 1.452155 (0.039504) | 1.562105 / 1.492716 (0.069389) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.286325 / 0.018006 (0.268319) | 0.578021 / 0.000490 (0.577531) | 0.007240 / 0.000200 (0.007040) | 0.000095 / 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030163 / 0.037411 (-0.007248) | 0.082100 / 0.014526 (0.067574) | 0.098331 / 0.176557 (-0.078225) | 0.160517 / 0.737135 (-0.576618) | 0.098479 / 0.296338 (-0.197859) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.401782 / 0.215209 (0.186573) | 4.006330 / 2.077655 (1.928675) | 2.033841 / 1.504120 (0.529721) | 1.853248 / 1.541195 (0.312053) | 1.980046 / 1.468490 
(0.511556) | 0.480636 / 4.584777 (-4.104141) | 3.684482 / 3.745712 (-0.061231) | 5.601940 / 5.269862 (0.332079) | 3.369683 / 4.565676 (-1.195993) | 0.057105 / 0.424275 (-0.367170) | 0.007462 / 0.007607 (-0.000145) | 0.474860 / 0.226044 (0.248815) | 4.749624 / 2.268929 (2.480695) | 2.492084 / 55.444624 (-52.952540) | 2.157985 / 6.876477 (-4.718491) | 2.420997 / 2.142072 (0.278925) | 0.574718 / 4.805227 (-4.230509) | 0.134672 / 6.500664 (-6.365992) | 0.061677 / 0.075469 (-0.013792) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.284151 / 1.841788 (-0.557637) | 20.186823 / 8.074308 (12.112515) | 14.247024 / 10.191392 (4.055632) | 0.171606 / 0.680424 (-0.508818) | 0.018619 / 0.534201 (-0.515582) | 0.394156 / 0.579283 (-0.185127) | 0.424684 / 0.434364 (-0.009679) | 0.476056 / 0.540337 (-0.064281) | 0.668751 / 1.386936 (-0.718185) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006807 / 0.011353 (-0.004546) | 0.004142 / 0.011008 (-0.006867) | 0.065503 / 0.038508 (0.026995) | 0.083232 / 0.023109 (0.060122) | 0.378278 / 0.275898 (0.102380) | 0.410191 / 0.323480 (0.086711) | 0.005660 / 0.007986 (-0.002326) | 0.003486 / 0.004328 (-0.000842) | 0.066109 / 0.004250 (0.061859) | 0.059654 / 0.037052 (0.022601) | 0.375965 / 0.258489 (0.117476) | 0.420046 / 0.293841 (0.126205) | 0.031587 / 0.128546 (-0.096959) | 0.008693 / 0.075646 (-0.066953) | 0.071121 / 0.419271 (-0.348151) | 0.049468 / 0.043533 (0.005935) | 0.373785 / 0.255139 (0.118646) | 0.395577 / 0.283200 (0.112377) | 0.024138 / 0.141683 (-0.117545) | 1.465451 / 1.452155 (0.013297) | 1.547565 / 1.492716 (0.054849) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.325241 / 0.018006 (0.307234) | 0.532415 / 0.000490 (0.531925) | 0.004755 / 0.000200 (0.004555) | 0.000084 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033472 / 0.037411 (-0.003939) | 0.090574 / 0.014526 (0.076048) | 0.106712 / 0.176557 (-0.069845) | 0.164353 / 0.737135 (-0.572783) | 0.109344 / 0.296338 (-0.186994) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420161 / 0.215209 (0.204952) | 4.192334 / 2.077655 (2.114679) | 2.178181 / 1.504120 (0.674061) | 2.017405 / 1.541195 (0.476211) | 2.182783 / 1.468490 (0.714293) | 0.484037 / 4.584777 (-4.100740) | 3.641911 / 3.745712 (-0.103801) | 5.543874 / 5.269862 (0.274013) | 3.440084 / 4.565676 (-1.125593) | 0.056662 / 0.424275 (-0.367614) | 0.007773 / 0.007607 (0.000166) | 0.498357 / 0.226044 (0.272313) | 4.951315 / 2.268929 (2.682386) | 2.656732 / 55.444624 (-52.787892) | 2.370566 / 6.876477 (-4.505910) | 2.682289 / 2.142072 (0.540217) | 0.598479 / 4.805227 (-4.206749) | 0.151546 / 6.500664 (-6.349118) | 0.063278 / 0.075469 (-0.012191) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.385897 / 1.841788 (-0.455891) | 20.961851 / 8.074308 (12.887543) | 14.465688 / 10.191392 (4.274296) | 0.166156 / 0.680424 (-0.514268) | 0.018848 / 0.534201 (-0.515353) | 0.401712 / 0.579283 (-0.177571) | 0.416674 / 0.434364 (-0.017690) | 0.471834 / 0.540337 (-0.068503) | 0.622463 / 1.386936 (-0.764473) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/5645 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5645/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5645/comments | https://api.github.com/repos/huggingface/datasets/issues/5645/events | https://github.com/huggingface/datasets/issues/5645 | 1,627,108,278 | I_kwDODunzps5g-7O2 | 5,645 | Datasets map and select(range()) is giving dill error | [] | closed | false | null | 2 | 2023-03-16T10:01:28Z | 2023-03-17T04:24:51Z | 2023-03-17T04:24:51Z | null | ### Describe the bug
I'm using the Hugging Face Datasets library to load the dataset in Google Colab.
When I do,
> data = train_dataset.select(range(10))
or
> train_datasets = train_dataset.map(
> process_data_to_model_inputs,
> batched=True,
> batch_size=batch_size,
> remove_columns=["article", "abstract"],
> )
I get the following error: `module 'dill._dill' has no attribute 'log'`
I've tried downgrading the dill version from latest to 0.2.8, but no luck.
Stack trace:
> ---------------------------------------------------------------------------
> ModuleNotFoundError Traceback (most recent call last)
> /usr/local/lib/python3.9/dist-packages/datasets/utils/py_utils.py in _no_cache_fields(obj)
> 367 try:
> --> 368 import transformers as tr
> 369
>
> ModuleNotFoundError: No module named 'transformers'
>
> During handling of the above exception, another exception occurred:
>
> AttributeError Traceback (most recent call last)
> 17 frames
> <ipython-input-13-dd14813880a6> in <module>
> ----> 1 test = train_dataset.select(range(10))
>
> /usr/local/lib/python3.9/dist-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
> 155 }
> 156 # apply actual function
> --> 157 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
> 158 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
> 159 # re-apply format to the output
>
> /usr/local/lib/python3.9/dist-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
> 155 if kwargs.get(fingerprint_name) is None:
> 156 kwargs_for_fingerprint["fingerprint_name"] = fingerprint_name
> --> 157 kwargs[fingerprint_name] = update_fingerprint(
> 158 self._fingerprint, transform, kwargs_for_fingerprint
> 159 )
>
> /usr/local/lib/python3.9/dist-packages/datasets/fingerprint.py in update_fingerprint(fingerprint, transform, transform_args)
> 103 for key in sorted(transform_args):
> 104 hasher.update(key)
> --> 105 hasher.update(transform_args[key])
> 106 return hasher.hexdigest()
> 107
>
> /usr/local/lib/python3.9/dist-packages/datasets/fingerprint.py in update(self, value)
> 55 def update(self, value):
> 56 self.m.update(f"=={type(value)}==".encode("utf8"))
> ---> 57 self.m.update(self.hash(value).encode("utf-8"))
> 58
> 59 def hexdigest(self):
>
> /usr/local/lib/python3.9/dist-packages/datasets/fingerprint.py in hash(cls, value)
> 51 return cls.dispatch[type(value)](cls, value)
> 52 else:
> ---> 53 return cls.hash_default(value)
> 54
> 55 def update(self, value):
>
> /usr/local/lib/python3.9/dist-packages/datasets/fingerprint.py in hash_default(cls, value)
> 44 @classmethod
> 45 def hash_default(cls, value):
> ---> 46 return cls.hash_bytes(dumps(value))
> 47
> 48 @classmethod
>
> /usr/local/lib/python3.9/dist-packages/datasets/utils/py_utils.py in dumps(obj)
> 387 file = StringIO()
> 388 with _no_cache_fields(obj):
> --> 389 dump(obj, file)
> 390 return file.getvalue()
> 391
>
> /usr/local/lib/python3.9/dist-packages/datasets/utils/py_utils.py in dump(obj, file)
> 359 def dump(obj, file):
> 360 """pickle an object to a file"""
> --> 361 Pickler(file, recurse=True).dump(obj)
> 362 return
> 363
>
> /usr/local/lib/python3.9/dist-packages/dill/_dill.py in dump(self, obj)
> 392 return
> 393
> --> 394 def load_session(filename='/tmp/session.pkl', main=None):
> 395 """update the __main__ module with the state from the session file"""
> 396 if main is None: main = _main_module
>
> /usr/lib/python3.9/pickle.py in dump(self, obj)
> 485 if self.proto >= 4:
> 486 self.framer.start_framing()
> --> 487 self.save(obj)
> 488 self.write(STOP)
> 489 self.framer.end_framing()
>
> /usr/local/lib/python3.9/dist-packages/dill/_dill.py in save(self, obj, save_persistent_id)
> 386 pickler._byref = False # disable pickling by name reference
> 387 pickler._recurse = False # disable pickling recursion for globals
> --> 388 pickler._session = True # is best indicator of when pickling a session
> 389 pickler.dump(main)
> 390 finally:
>
> /usr/lib/python3.9/pickle.py in save(self, obj, save_persistent_id)
> 558 f = self.dispatch.get(t)
> 559 if f is not None:
> --> 560 f(self, obj) # Call unbound method with explicit self
> 561 return
> 562
>
> /usr/local/lib/python3.9/dist-packages/dill/_dill.py in save_singleton(pickler, obj)
>
> /usr/lib/python3.9/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, state_setter, obj)
> 689 write(NEWOBJ)
> 690 else:
> --> 691 save(func)
> 692 save(args)
> 693 write(REDUCE)
>
> /usr/local/lib/python3.9/dist-packages/dill/_dill.py in save(self, obj, save_persistent_id)
> 386 pickler._byref = False # disable pickling by name reference
> 387 pickler._recurse = False # disable pickling recursion for globals
> --> 388 pickler._session = True # is best indicator of when pickling a session
> 389 pickler.dump(main)
> 390 finally:
>
> /usr/lib/python3.9/pickle.py in save(self, obj, save_persistent_id)
> 558 f = self.dispatch.get(t)
> 559 if f is not None:
> --> 560 f(self, obj) # Call unbound method with explicit self
> 561 return
> 562
>
> /usr/local/lib/python3.9/dist-packages/datasets/utils/py_utils.py in save_function(pickler, obj)
> 583 dill._dill.log.info("# F1")
> 584 else:
> --> 585 dill._dill.log.info("F2: %s" % obj)
> 586 name = getattr(obj, "__qualname__", getattr(obj, "__name__", None))
> 587 dill._dill.StockPickler.save_global(pickler, obj, name=name)
>
> AttributeError: module 'dill._dill' has no attribute 'log'
### Steps to reproduce the bug
After loading the dataset (e.g. https://huggingface.co/datasets/scientific_papers) in Google Colab,
do either
> data = train_dataset.select(range(10))
or
> train_datasets = train_dataset.map(
> process_data_to_model_inputs,
> batched=True,
> batch_size=batch_size,
> remove_columns=["article", "abstract"],
> )
### Expected behavior
The map and select functions should work.
### Environment info
dataset: https://huggingface.co/datasets/scientific_papers
dill = 0.3.6
python= 3.9.16
transformer = 4.2.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5645/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5645/timeline | null | completed | null | null | false | [
"It looks like an error that we observed once in https://github.com/huggingface/datasets/pull/5166\r\n\r\nCan you try to update `datasets` ?\r\n\r\n```\r\npip install -U datasets\r\n```\r\n\r\nif it doesn't work, can you make sure you don't have packages installed that may modify `dill`'s behavior, such as `apache-beam` ?",
"@lhoestq That fixed the problem, Thanks :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/5120 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5120/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5120/comments | https://api.github.com/repos/huggingface/datasets/issues/5120/events | https://github.com/huggingface/datasets/pull/5120 | 1,410,641,221 | PR_kwDODunzps5A4X10 | 5,120 | Fix `tqdm` zip bug | [] | closed | false | null | 11 | 2022-10-16T22:19:18Z | 2022-10-23T10:27:53Z | 2022-10-19T08:53:17Z | null | This PR solves #5117, by wrapping the entire `zip` clause in tqdm.
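As a side note for readers, here is a minimal sketch of the `tqdm`/`zip` interaction involved (the iterables and the "before" form are illustrative, not the actual code in the diff):

```python
from tqdm.auto import tqdm

texts = ["a", "b", "c"]
labels = [0, 1, 0]

# Wrapping only one side of the zip ties the progress bar to that iterable alone.
for text, label in zip(tqdm(texts), labels):
    pass

# Wrapping the entire zip keeps the bar in sync with the combined iteration;
# zip() has no __len__, so the total must be passed explicitly.
for text, label in tqdm(zip(texts, labels), total=len(texts)):
    pass
```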
For more information, please check out this Stack Overflow thread:
https://stackoverflow.com/questions/41171191/tqdm-progressbar-and-zip-built-in-do-not-work-together | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5120/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5120/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5120.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5120",
"merged_at": "2022-10-19T08:53:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5120.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5120"
} | true | [
"@albertvillanova Thanks for your comment. What do you think about creating 2 `pbar` for each case? I see the `pbar_iterable` is initialized differently. Maybe `pbar` can also be initialized like that.",
"@albertvillanova Another solution I implemented is to change `pbar_iterable` and add the `zip` to it. I updated the PR with this solution. Let me know what you think.",
"_The documentation is not available anymore as the PR was closed or merged._",
"@albertvillanova Done :) Let me know what you think.",
"@albertvillanova Thanks :) I also don't see an easy way to test this. This was just a problem in the way `tqdm` was used. I'm not sure we should cover it in tests.",
"Hi, \r\n\r\nFirst of all, thanks for this PR. \r\nIt's the first time I join a discussion on GitHUB on problem resolution in libraries such as transformers, so I hope I comply to the best practices for an efficient communication...\r\n\r\nI am running `AutoTokenizer.from_pretrained` in a Google Colab notebook for using with BERT base. \r\nI am experiencing issue [5117](https://github.com/huggingface/datasets/issues/5117).\r\n\r\nEach time I run my notebook, I do:\r\n\r\n`! pip install transformers \r\n! pip install datasets \r\n! pip install huggingface_hub`\r\n\r\nAs I understand, the issue has been resolved and the solution merged to the released version of the code?\r\nSo I expect that the bug is resolved in my notebook, however this is not the case.\r\n\r\nDo I get something wrong? \r\nDo I have to implement some change in the source code myself?\r\n\r\nThanks in advance for your help!",
"@Cochonaki Hi :) The problem was fixed but there wasn't a release since then. I believe a new release should come out in the upcoming weeks. Maybe someone from the core maintainers can answer that :)\r\n\r\ncc: @albertvillanova ",
"Baby Haiti Coffee SE is born\n\nNH watch\n\nOn Sun, Oct 23, 2022 at 02:39 Dudu Lasry ***@***.***> wrote:\n\n> @Cochonaki <https://github.com/Cochonaki> Hi :) The problem was fixed but\n> there wasn't a release since then. I believe a new release should come out\n> in the upcoming weeks. Maybe someone from the core maintainers can answer\n> that :)\n>\n> cc: @albertvillanova <https://github.com/albertvillanova>\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/pull/5120#issuecomment-1288024546>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AAB4E2NCT7QO7W3PTQGDIKDWETMQ7ANCNFSM6AAAAAARGRBY2M>\n> .\n> You are receiving this because you are subscribed to this thread.Message\n> ID: ***@***.***>\n>\n",
"Hi, @Cochonaki.\r\n\r\nAs @david1542 pointed out, we have not made a release since this bug was fixed. We will make one in the following weeks.\r\n\r\nIn the meantime, if you would like to incorporate the bug fix, you can install `datasets` from this repo main branch:\r\n```shell\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```",
"Thanks a lot @albertvillanova and @david1542, it works now!\r\nI am really thankful for your help, that encourages me to participate more in this community.\r\nSee you around!",
"Welcome!!! 🤗"
] |
https://api.github.com/repos/huggingface/datasets/issues/2060 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2060/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2060/comments | https://api.github.com/repos/huggingface/datasets/issues/2060/events | https://github.com/huggingface/datasets/pull/2060 | 832,588,591 | MDExOlB1bGxSZXF1ZXN0NTkzNzIxNzcx | 2,060 | Filtering refactor | [] | closed | false | null | 10 | 2021-03-16T09:23:30Z | 2021-10-13T09:09:04Z | 2021-10-13T09:09:03Z | null | fix https://github.com/huggingface/datasets/issues/2032
Benchmarking is somewhat inconclusive; currently running on `bookcorpus` with:
```python
bc = load_dataset("bookcorpus")
now = time.time()
bc.filter(lambda x: len(x["text"]) < 64)
elapsed = time.time() - now
print(elapsed)
```
This branch does it in 233 seconds, while master takes 1409 seconds. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2060/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2060/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2060.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2060",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2060.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2060"
} | true | [
"I thought at first that the multiproc test was not relevant now that we do stuff only in memory, but I think there's something that's actually broken, my tiny benchmark on bookcorpus runs forever (2hrs+) when I add `num_proc=4` as a kwarg, will investigate 👀 \r\n\r\nI'm not familiar with the caching you describe for `.map`, I'll look it up.",
"turns out the multi proc issue is also on master, I won't fix it in this PR but opened #2071 to track the problem.",
"tracemalloc outputs from this script:\r\n\r\n```python\r\nimport logging\r\nimport sys\r\nimport time\r\nimport tracemalloc\r\n\r\nfrom datasets import load_dataset, set_caching_enabled\r\n\r\n\r\nif __name__ == \"__main__\":\r\n set_caching_enabled(False)\r\n logging.basicConfig(level=logging.DEBUG)\r\n\r\n tracemalloc.start()\r\n bc = load_dataset(\"bookcorpus\")\r\n\r\n now = time.time()\r\n try:\r\n snapshot1 = tracemalloc.take_snapshot()\r\n bc[\"train\"].filter(lambda x: len(x[\"text\"]) < 64, num_proc=int(sys.argv[1]))\r\n except Exception as e:\r\n print(f\"cancelled: {e}\")\r\n exit(1)\r\n snapshot2 = tracemalloc.take_snapshot()\r\n tracemalloc.stop()\r\n elapsed = time.time() - now\r\n\r\n print(elapsed)\r\n top_stats = snapshot2.compare_to(snapshot1, \"lineno\")\r\n\r\n print(\"[ Top 10 differences ]\")\r\n for stat in top_stats[:10]:\r\n print(stat)\r\n\r\n```\r\n\r\n\r\nThis branch:\r\n\r\n```\r\n ssh://theo@35.205.12.130:22/home/theo/.local/share/miniconda3/envs/datasets/bin/python -u benchmark_filter.py 1\r\n DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): s3.amazonaws.com:443\r\n DEBUG:urllib3.connectionpool:https://s3.amazonaws.com:443 \"HEAD /datasets.huggingface.co/datasets/datasets/bookcorpus/bookcorpus.py HTTP/1.1\" 200 0\r\n DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): raw.githubusercontent.com:443\r\n DEBUG:urllib3.connectionpool:https://raw.githubusercontent.com:443 \"HEAD /huggingface/datasets/master/datasets/bookcorpus/bookcorpus.py HTTP/1.1\" 200 0\r\n DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): raw.githubusercontent.com:443\r\n DEBUG:urllib3.connectionpool:https://raw.githubusercontent.com:443 \"HEAD /huggingface/datasets/master/datasets/bookcorpus/dataset_infos.json HTTP/1.1\" 200 0\r\n WARNING:datasets.builder:Reusing dataset bookcorpus (/home/theo/.cache/huggingface/datasets/bookcorpus/plain_text/1.0.0/af844be26c089fb64810e9f2cd841954fd8bd596d6ddd26326e4c70e2b8c96fc)\r\n 0%| | 0/74005 [00:00<?, ?ba/s]2021-03-23 10:23:20.051255: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory\r\n 2021-03-23 10:23:20.051304: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\r\n DEBUG:tensorflow:Falling back to TensorFlow client; we recommended you install the Cloud TPU client directly with pip install cloud-tpu-client.\r\n 100%|████████████████████████████████████| 74005/74005 [12:16<00:00, 100.54ba/s]\r\n 815.6356580257416\r\n [ Top 10 differences ]\r\n <frozen importlib._bootstrap_external>:580: size=38.0 MiB (+33.7 MiB), count=326226 (+307928), average=122 B\r\n <frozen importlib._bootstrap>:219: size=7643 KiB (+7553 KiB), count=26372 (+25473), average=297 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/torch/__init__.py:427: size=1291 KiB (+1291 KiB), count=5924 (+5924), average=223 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/abc.py:85: size=1039 KiB (+1026 KiB), count=3428 (+3384), average=310 B\r\n <frozen importlib._bootstrap_external>:64: size=917 KiB (+891 KiB), count=5300 (+5132), average=177 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/collections/__init__.py:456: size=720 KiB (+709 KiB), count=3403 (+3349), average=217 B\r\n 
/home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/tensorflow/python/util/tf_export.py:346: size=607 KiB (+607 KiB), count=3962 (+3962), average=157 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/linecache.py:137: size=998 KiB (+487 KiB), count=9551 (+4517), average=107 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/tensorflow/python/util/tf_decorator.py:241: size=367 KiB (+367 KiB), count=5225 (+5225), average=72 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/tensorflow/python/util/decorator_utils.py:114: size=359 KiB (+359 KiB), count=330 (+330), average=1114 B\r\n```\r\n\r\nOn master:\r\n```\r\n ssh://theo@35.205.12.130:22/home/theo/.local/share/miniconda3/envs/datasets/bin/python -u benchmark_filter.py 1\r\n DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): s3.amazonaws.com:443\r\n DEBUG:urllib3.connectionpool:https://s3.amazonaws.com:443 \"HEAD /datasets.huggingface.co/datasets/datasets/bookcorpus/bookcorpus.py HTTP/1.1\" 200 0\r\n DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): raw.githubusercontent.com:443\r\n DEBUG:urllib3.connectionpool:https://raw.githubusercontent.com:443 \"HEAD /huggingface/datasets/master/datasets/bookcorpus/bookcorpus.py HTTP/1.1\" 200 0\r\n DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): raw.githubusercontent.com:443\r\n DEBUG:urllib3.connectionpool:https://raw.githubusercontent.com:443 \"HEAD /huggingface/datasets/master/datasets/bookcorpus/dataset_infos.json HTTP/1.1\" 200 0\r\n WARNING:datasets.builder:Reusing dataset bookcorpus (/home/theo/.cache/huggingface/datasets/bookcorpus/plain_text/1.0.0/af844be26c089fb64810e9f2cd841954fd8bd596d6ddd26326e4c70e2b8c96fc)\r\n 0%| | 0/74005 [00:00<?, ?ba/s]2021-03-23 12:26:47.219622: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory\r\n 2021-03-23 12:26:47.219669: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\r\n DEBUG:tensorflow:Falling back to TensorFlow client; we recommended you install the Cloud TPU client directly with pip install cloud-tpu-client.\r\n 100%|███████████████████████████████████| 74005/74005 [1:02:17<00:00, 19.80ba/s]\r\n 3738.870892047882\r\n [ Top 10 differences ]\r\n <frozen importlib._bootstrap_external>:580: size=38.0 MiB (+33.7 MiB), count=326221 (+307919), average=122 B\r\n <frozen importlib._bootstrap>:219: size=7648 KiB (+7557 KiB), count=26455 (+25555), average=296 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/torch/__init__.py:427: size=1291 KiB (+1291 KiB), count=5924 (+5924), average=223 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/abc.py:85: size=1039 KiB (+1026 KiB), count=3429 (+3385), average=310 B\r\n <frozen importlib._bootstrap_external>:64: size=917 KiB (+891 KiB), count=5300 (+5132), average=177 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/collections/__init__.py:456: size=720 KiB (+709 KiB), count=3403 (+3349), average=217 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/tensorflow/python/util/tf_export.py:346: size=607 KiB (+607 KiB), count=3962 (+3962), average=157 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/linecache.py:137: size=1000 
KiB (+489 KiB), count=9569 (+4535), average=107 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/tensorflow/python/util/tf_decorator.py:241: size=367 KiB (+367 KiB), count=5225 (+5225), average=72 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/tensorflow/python/util/decorator_utils.py:114: size=359 KiB (+359 KiB), count=330 (+330), average=1114 B\r\n```\r\n\r\nI'm not concluding much, it seems nothing is really happening to memory on `pyarrow::Table.filter`? ",
"Cool ! Maybe it increases the memory a bit but what's brought in memory is not the resulting Table but something else (not sure what though).\r\nWhat's the length of the resulting dataset ?\r\nYou can also take a look at `pyarrow.total_allocated_memory()` to show how much memory is being used by pyarrow",
"```diff\r\ndiff --git a/benchmarks/benchmark_filter.py b/benchmarks/benchmark_filter.py\r\nindex 4b9efd4e..a862c204 100644\r\n--- a/benchmarks/benchmark_filter.py\r\n+++ b/benchmarks/benchmark_filter.py\r\n@@ -1,6 +1,9 @@\r\n import logging\r\n import sys\r\n import time\r\n+import tracemalloc\r\n+\r\n+import pyarrow as pa\r\n \r\n from datasets import load_dataset, set_caching_enabled\r\n \r\n@@ -9,13 +12,28 @@ if __name__ == \"__main__\":\r\n set_caching_enabled(False)\r\n logging.basicConfig(level=logging.DEBUG)\r\n \r\n+ tracemalloc.start()\r\n bc = load_dataset(\"bookcorpus\")\r\n \r\n now = time.time()\r\n try:\r\n+ snapshot1 = tracemalloc.take_snapshot()\r\n+ pamem1 = pa.total_allocated_bytes()\r\n bc[\"train\"].filter(lambda x: len(x[\"text\"]) < 64, num_proc=int(sys.argv[1]))\r\n+ pamem2 = pa.total_allocated_bytes()\r\n+ snapshot2 = tracemalloc.take_snapshot()\r\n except Exception as e:\r\n print(f\"cancelled: {e}\")\r\n+ exit(1)\r\n+ tracemalloc.stop()\r\n elapsed = time.time() - now\r\n \r\n print(elapsed)\r\n+ top_stats = snapshot2.compare_to(snapshot1, \"lineno\")\r\n+\r\n+ print(\"[ Top 10 differences ]\")\r\n+ for stat in top_stats[:10]:\r\n+ print(stat)\r\n+\r\n+ print(\"[ pyarrow reporting ]\")\r\n+ print(f\"before: ({pamem1}) after: ({pamem2})\")\r\n```\r\n\r\nthis yields 0-0, does not seem like a good tool 😛 and the documentation is [quite mysterious.](https://arrow.apache.org/docs/python/generated/pyarrow.total_allocated_bytes.html)",
"Personally if I use your script to benchmark on this branch\r\n```python\r\nbc = load_dataset(\"bookcorpus\", split=\"train[:1%]\")\r\nbc = bc.filter(lambda x: len(x[\"text\"]) < 64)\r\n```\r\n\r\nthen I get\r\n```\r\n[ pyarrow reporting ]\r\nbefore: (0) after: (15300672)\r\n```\r\n\r\nMaybe you got 0-0 because the filter output is directly garbage collected, since you didn't do\r\n```python\r\nbc[\"train\"] = bc[\"train\"].filter(...)\r\n```\r\nCan you try again on your side just to make sure ?\r\n\r\nEven if the documentation doesn't say much, `pa.total_allocated_bytes` if pretty useful, and also very consistent.\r\nIt tracks the number of bytes used for arrow data.",
"> Maybe you got 0-0 because the filter output is directly garbage collected, since you didn't do\r\n> \r\n> ```python\r\n> bc[\"train\"] = bc[\"train\"].filter(...)\r\n> ```\r\nNice catch! I get 1.74GB for this branch",
"Looks like we may need to write the filtered table on the disk then.\r\n\r\nThe other option is to slice the table to keep only the good rows and concatenate them but this is too slow at the moment since slicing is O(n) until #1803 is fixed. I'll work on this issue this afternoon",
"From investigation it looks like the lib's `Table.filter` cannot send its output to memorymap, asked a question on the mailing list, see [here](https://lists.apache.org/thread.html/r8cd8591ce83a967eb0097a7f31785ac2f3ee95ea371c8c5beb0720ad%40%3Cuser.arrow.apache.org%3E)",
"closing in favor of #2836 "
] |
https://api.github.com/repos/huggingface/datasets/issues/1697 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1697/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1697/comments | https://api.github.com/repos/huggingface/datasets/issues/1697/events | https://github.com/huggingface/datasets/pull/1697 | 781,126,579 | MDExOlB1bGxSZXF1ZXN0NTUwOTAzNzI5 | 1,697 | Update DialogRE DatasetCard | [] | closed | false | null | 1 | 2021-01-07T08:22:33Z | 2021-01-07T13:34:28Z | 2021-01-07T13:34:28Z | null | Update the information in the dataset card for the Dialog RE dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1697/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1697/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1697.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1697",
"merged_at": "2021-01-07T13:34:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1697.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1697"
} | true | [
"Same as #1698, can you add a task tag for dialogue-modeling (under sequence-modeling) :) ?"
] |
https://api.github.com/repos/huggingface/datasets/issues/125 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/125/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/125/comments | https://api.github.com/repos/huggingface/datasets/issues/125/events | https://github.com/huggingface/datasets/pull/125 | 618,869,048 | MDExOlB1bGxSZXF1ZXN0NDE4NTExNDE0 | 125 | [Newsroom] add newsroom | [] | closed | false | null | 0 | 2020-05-15T10:34:34Z | 2020-05-15T10:37:07Z | 2020-05-15T10:37:02Z | null | I checked it with the data link of the mail you forwarded @thomwolf => works well! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/125/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/125/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/125.diff",
"html_url": "https://github.com/huggingface/datasets/pull/125",
"merged_at": "2020-05-15T10:37:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/125.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/125"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5839 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5839/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5839/comments | https://api.github.com/repos/huggingface/datasets/issues/5839/events | https://github.com/huggingface/datasets/issues/5839 | 1,704,554,718 | I_kwDODunzps5lmXDe | 5,839 | Make models/functions optimized with `torch.compile` hashable | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 0 | 2023-05-10T20:02:08Z | 2023-05-10T20:02:08Z | null | null | As reported in https://github.com/huggingface/datasets/issues/5819, hashing functions/transforms that reference a model, or a function, optimized with `torch.compile` currently fails due to them not being picklable (the concrete error can be found in the linked issue).
The solutions to consider:
1. hashing/pickling the original, uncompiled version of a compiled model/function (attributes `_orig_mod`/`_torchdynamo_orig_callable`) (less precise than the 2nd option as it ignores the other params of `torch.compile`; see the sketch after this list)
2. wait for https://github.com/pytorch/pytorch/issues/101107 to be resolved
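A minimal sketch of what option 1 could look like (PyTorch ≥ 2.0 assumed; the `getattr` fallback is my own illustration, not a settled implementation):

```python
import torch

model = torch.nn.Linear(4, 2)
compiled_model = torch.compile(model)

# torch.compile wraps an nn.Module in an OptimizedModule that keeps the
# original module under `_orig_mod`.
module_to_hash = getattr(compiled_model, "_orig_mod", compiled_model)


def add_one(x):
    return x + 1


compiled_fn = torch.compile(add_one)

# Compiled plain callables keep the original function under
# `_torchdynamo_orig_callable`.
fn_to_hash = getattr(compiled_fn, "_torchdynamo_orig_callable", compiled_fn)
```

As noted above, this drops the `torch.compile` arguments (e.g. mode, backend) from the hash, which is the trade-off compared to option 2.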
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5839/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5839/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/1004 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1004/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1004/comments | https://api.github.com/repos/huggingface/datasets/issues/1004/events | https://github.com/huggingface/datasets/issues/1004 | 755,325,368 | MDU6SXNzdWU3NTUzMjUzNjg= | 1,004 | how large datasets are handled under the hood | [] | closed | false | null | 3 | 2020-12-02T14:32:40Z | 2022-10-05T12:13:29Z | 2022-10-05T12:13:29Z | null | Hi
I want to use multiple large datasets with a map-style dataloader, where they cannot fit into memory. Could you tell me how the datasets are handled under the hood? Do you bring everything into memory in the map-style case, or is there some sharding under the hood so that data is brought into memory only when necessary? Thanks | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1004/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1004/timeline | null | completed | null | null | false | [
"This library uses Apache Arrow under the hood to store datasets on disk.\r\nThe advantage of Apache Arrow is that it allows to memory map the dataset. This allows to load datasets bigger than memory and with almost no RAM usage. It also offers excellent I/O speed.\r\n\r\nFor example when you access one element or one batch\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nsquad = load_dataset(\"squad\", split=\"train\")\r\nfirst_element = squad[0]\r\none_batch = squad[:8]\r\n```\r\n\r\nthen only this element/batch is loaded in memory, while the rest of the dataset is memory mapped.",
"How can we change how much data is loaded to memory with Arrow? I think that I am having some performance issue with it. When Arrow loads the data from disk it does it in multiprocess? It's almost twice slower training with arrow than in memory.\r\n\r\nEDIT:\r\nMy fault! I had not seen the `dataloader_num_workers` in `TrainingArguments` ! Now I can parallelize and go fast! Sorry, and thanks.",
"> How can we change how much data is loaded to memory with Arrow? I think that I am having some performance issue with it. When Arrow loads the data from disk it does it in multiprocess? It's almost twice slower training with arrow than in memory.\r\n\r\nLoading arrow data from disk is done with memory-mapping. This allows to load huge datasets without filling your RAM.\r\nMemory mapping is almost instantaneous and is done within one process.\r\n\r\nThen, the speed of querying examples from the dataset is I/O bounded depending on your disk. If it's an SSD then fetching examples from the dataset will be very fast.\r\nBut since the I/O speed of an SSD is lower than the one of RAM it's expected to be slower to fetch data from disk than from memory.\r\nStill, if you load the dataset in different processes then it can be faster but there will still be the I/O bottleneck of the disk.\r\n\r\n> EDIT:\r\n> My fault! I had not seen the `dataloader_num_workers` in `TrainingArguments` ! Now I can parallelize and go fast! Sorry, and thanks.\r\n\r\nOk let me know if that helps !\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/2937 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2937/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2937/comments | https://api.github.com/repos/huggingface/datasets/issues/2937/events | https://github.com/huggingface/datasets/issues/2937 | 999,548,277 | I_kwDODunzps47k-V1 | 2,937 | load_dataset using default cache on Windows causes PermissionError: [WinError 5] Access is denied | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 4 | 2021-09-17T16:52:10Z | 2022-08-24T13:09:08Z | 2022-08-24T13:09:08Z | null | ## Describe the bug
The standard process of downloading and loading the wiki_bio dataset causes a PermissionError on Windows 10 and 11.
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset('wiki_bio')
```
## Expected results
It is expected that the dataset downloads without any errors.
## Actual results
PermissionError see trace below:
```
Using custom data configuration default
Downloading and preparing dataset wiki_bio/default (download: 318.53 MiB, generated: 736.94 MiB, post-processed: Unknown size, total: 1.03 GiB) to C:\Users\username\.cache\huggingface\datasets\wiki_bio\default\1.1.0\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\builder.py", line 644, in download_and_prepare
self._save_info()
File "C:\Users\username\.conda\envs\hf\lib\contextlib.py", line 120, in __exit__
next(self.gen)
File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\builder.py", line 598, in incomplete_dir
os.rename(tmp_dir, dirname)
PermissionError: [WinError 5] Access is denied: 'C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9.incomplete' -> 'C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9'
```
By commenting out the os.rename() [L604](https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L604) and the shutil.rmtree() [L607](https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L607) lines, in my virtual environment, I was able to get the load process to complete, rename the directory manually and then rerun the `load_dataset('wiki_bio')` to get what I needed.
It seems that os.rename() in the `incomplete_dir` context manager is the culprit. Here's another project, [Conan](https://github.com/conan-io/conan/issues/6560), with a similar os.rename() issue, in case it helps debug this one.
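For illustration only, a hypothetical retry wrapper sketching the "catch the error and circle back to the rename" idea (the helper name, retry count, and delay are mine and not part of `datasets`); on Windows a PermissionError here is often transient, e.g. an antivirus scanner briefly holding a handle on the directory:

```python
import os
import time


def rename_with_retry(src, dst, retries=5, delay=1.0):
    """Retry os.rename a few times if another process briefly locks the path."""
    for attempt in range(retries):
        try:
            os.rename(src, dst)
            return
        except PermissionError:
            if attempt == retries - 1:
                raise
            time.sleep(delay)
```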
## Environment info
- `datasets` version: 1.12.1
- Platform: Windows-10-10.0.22449-SP0
- Python version: 3.8.12
- PyArrow version: 5.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2937/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2937/timeline | null | completed | null | null | false | [
"Hi @daqieq, thanks for reporting.\r\n\r\nUnfortunately, I was not able to reproduce this bug:\r\n```ipython\r\nIn [1]: from datasets import load_dataset\r\n ...: ds = load_dataset('wiki_bio')\r\nDownloading: 7.58kB [00:00, 26.3kB/s]\r\nDownloading: 2.71kB [00:00, ?B/s]\r\nUsing custom data configuration default\r\nDownloading and preparing dataset wiki_bio/default (download: 318.53 MiB, generated: 736.94 MiB, post-processed: Unknown size, total: 1.03 GiB) to C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\\r\n1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9...\r\nDownloading: 334MB [01:17, 4.32MB/s]\r\nDataset wiki_bio downloaded and prepared to C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9. Subsequent calls will reuse thi\r\ns data.\r\n```\r\n\r\nThis kind of error messages usually happen because:\r\n- Your running Python script hasn't write access to that directory\r\n- You have another program (the File Explorer?) already browsing inside that directory",
"Thanks @albertvillanova for looking at it! I tried on my personal Windows machine and it downloaded just fine.\r\n\r\nRunning on my work machine and on a colleague's machine it is consistently hitting this error. It's not a write access issue because the `.incomplete` directory is written just fine. It just won't rename and then it deletes the directory in the `finally` step. Also the zip file is written and extracted fine in the downloads directory.\r\n\r\nThat leaves another program that might be interfering, and there are plenty of those in my work machine ... (full antivirus, data loss prevention, etc.). So the question remains, why not extend the `try` block to allow catching the error and circle back to the rename after the unknown program is finished doing its 'stuff'. This is the approach that I read about in the linked repo (see my comments above).\r\n\r\nIf it's not high priority, that's fine. However, if someone were to write an PR that solved this issue in our environment in an `except` clause, would it be reviewed for inclusion in a future release? Just wondering whether I should spend any more time on this issue.",
"Hi @albertvillanova, even I am facing the same issue on my work machine:\r\n\r\n`Downloading and preparing dataset json/c4-en-html-with-metadata to C:\\Users\\......\\.cache\\huggingface\\datasets\\json\\c4-en-html-with-metadata-4635c2fd9249f62d\\0.0.0\\c90812beea906fcffe0d5e3bb9eba909a80a998b5f88e9f8acbd320aa91acfde...\r\n100%|███████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 983.42it/s]\r\n100%|███████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 209.01it/s]\r\nTraceback (most recent call last):\r\n File \"bsmetadata/preprocessing_utils.py\", line 710, in <module>\r\n ds = load_dataset(\r\n File \"C:\\Users\\.......\\AppData\\Roaming\\Python\\Python38\\site-packages\\datasets\\load.py\", line 1694, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"C:\\Users\\........\\AppData\\Roaming\\Python\\Python38\\site-packages\\datasets\\builder.py\", line 603, in download_and_prepare\r\n self._save_info()\r\n File \"C:\\Users\\..........\\AppData\\Local\\Programs\\Python\\Python38\\lib\\contextlib.py\", line 120, in __exit__\r\n next(self.gen)\r\n File \"C:\\Users\\.....\\AppData\\Roaming\\Python\\Python38\\site-packages\\datasets\\builder.py\", line 557, in incomplete_dir\r\n os.rename(tmp_dir, dirname)\r\nPermissionError: [WinError 5] Access is denied: 'C:\\\\Users\\\\.........\\\\.cache\\\\huggingface\\\\datasets\\\\json\\\\c4-en-html-with-metadata-4635c2fd9249f62d\\\\0.0.0\\\\c90812beea906fcffe0d5e3bb9eba909a80a998b5f88e9f8acbd320aa91acfde.incomplete' -> 'C:\\\\Users\\\\I355109\\\\.cache\\\\huggingface\\\\datasets\\\\json\\\\c4-en-html-with-metadata-4635c2fd9249f62d\\\\0.0.0\\\\c90812beea906fcffe0d5e3bb9eba909a80a998b5f88e9f8acbd320aa91acfde'`",
"I'm facing the same issue.\r\n\r\n## System Information\r\n\r\n- OS Edition: Windows 10 21H1\r\n- OS build: 19043.1826\r\n- Python version: 3.10.6 (installed using `choco install python`)\r\n- datasets: 2.4.0\r\n- PyArrow: 6.0.1\r\n\r\n## Troubleshooting steps\r\n\r\n- Restart the computer, unfortunately doesn't work! 🌚\r\n- Checked the permissions of `~./cache/...`, looks fine.\r\n- Tested with a simple file operation using the `open()` function and writing a hello_world.txt, it works fine.\r\n- Tested with a different `cache_dir` value on the `load_dataset()`, e.g. \"./data\"\r\n- Tested different datasets: `conll2003`, `squad_v2`, and `wiki_bio`.\r\n- Downgraded datasets from `2.4.0` to `2.1.0`, issue persists.\r\n- Tested it on WSL (Ubuntu 20.04), and it works! \r\n- Python reinstallation, in the first time downloading `conll2003` works fine, but `squad` or `squad_v2` raises Access Denied.\r\n - After the system or VSCode restart, the issue comes back.\r\n\r\n## Resolution\r\n\r\nI fixed it by changing the following command:\r\n\r\nhttps://github.com/huggingface/datasets/blob/68cffe30917a9abed68d28caf54b40c10f977602/src/datasets/builder.py#L666\r\n\r\nfor\r\n\r\n```python\r\nshutil.move(tmp_dir, dirname)\r\n```"
] |
https://api.github.com/repos/huggingface/datasets/issues/3625 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3625/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3625/comments | https://api.github.com/repos/huggingface/datasets/issues/3625/events | https://github.com/huggingface/datasets/issues/3625 | 1,113,017,522 | I_kwDODunzps5CV0yy | 3,625 | Add a metadata field for when source data was produced | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 5 | 2022-01-24T18:52:39Z | 2022-06-28T13:54:49Z | null | null | **Is your feature request related to a problem? Please describe.**
The current problem is that information about when source data was produced is not easily visible. Though there are a variety of metadata fields available in the dataset viewer, time period information is not included. This feature request suggests making metadata relating to the time that the underlying *source* data was produced more prominent and outlines why this specific information is of particular importance, both in domain-specific historic research and more broadly.
**Describe the solution you'd like**
There are a variety of metadata fields exposed in the dataset viewer (license, task categories, etc.) These fields make this metadata more prominent both for human users and as potentially machine-actionable information (for example, through the API). I would propose to add a metadata field that says when some underlying data was produced. For example, a dataset would be labelled as being produced between `1800-1900`.
**Describe alternatives you've considered**
This information is sometimes available in the Datacard or a paper describing the dataset. However, it's often not that easy to identify or extract this information, particularly if you want to use this field as a filter to identify relevant datasets.
**Additional context**
I believe this feature is relevant for a number of reasons:
- Increasingly, there is an interest in using historical data for training language models (for example, https://huggingface.co/dbmdz/bert-base-historic-dutch-cased), and datasets to support this task (for example, https://huggingface.co/datasets/bnl_newspapers). For these datasets, indicating the time periods covered is particularly relevant.
- More broadly, time is likely a common source of domain drift. Datasets of movie reviews from the 90s may not work well for recent movie reviews. As the documentation and long-term management of ML data become more of a priority, quickly understanding when the underlying text (or other data types) was produced is arguably more important.
- time-series data: datasets are adding more support for time series data. Again, the periods covered might be particularly relevant here.
**open questions**
- I think some of my points above apply not only to the underlying data but also to annotations. As a result, there could also be an argument for encoding this information somewhere. However, I would argue (but could be persuaded otherwise) that this is probably less important for filtering. This type of context is already addressed in the datasheets template and often requires more narrative to discuss.
- what level of granularity would make sense for this? e.g. assigning a decade, century or year?
- how to encode this information? What formatting makes sense?
- what specific time to encode; a date range? (mean, modal, min, max value?)
This is a slightly amorphous feature request - I would be happy to discuss further/try and propose a more concrete solution if this seems like something that could be worth considering. I realise this might also touch on other parts of the 🤗 hubs ecosystem. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3625/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3625/timeline | null | null | null | null | false | [
"A question to the datasets maintainers: is there a policy about how the set of allowed metadata fields is maintained and expanded?\r\n\r\nMetadata are very important, but defining the standard is always a struggle between allowing exhaustivity without being too complex. Archivists have Dublin Core, open data has https://frictionlessdata.io/, geo has ISO 19139 and INSPIRE, etc. and it's always a mess! I'm not sure we want to dig too much into it, but I'm curious to know if there has been some work on the metadata standard.",
"> Metadata are very important, but defining the standard is always a struggle between allowing exhaustivity without being too complex. Archivists have Dublin Core, open data has [frictionlessdata.io](https://frictionlessdata.io/), geo has ISO 19139 and INSPIRE, etc. and it's always a mess! I'm not sure we want to dig too much into it, but I'm curious to know if there has been some work on the metadata standard.\r\n\r\n\r\nI thought this is a potential issue with adding this field since it might be hard to define what is general enough to be useful for most data vs what becomes very domain-specific. Potentially adding one extra field leads to more and more fields in the future. \r\n\r\nAnother issue is that there are some metadata standards around data i.e. [datacite](https://schema.datacite.org/meta/kernel-4.4/), but not many aimed explicitly at ML data afaik. Some of the discussions around metadata for ML are also more focused on versioning/managing data in production environments. My thinking is that here, some reference to the time of production would also often be tracked/relevant, i.e. for triggering model training, so having this information available in the hub would also help address this use case. ",
"Adding a relevant paper related to this topic: [TimeLMs: Diachronic Language Models from Twitter](https://arxiv.org/abs/2202.03829)\r\n\r\n",
"Related: https://github.com/huggingface/datasets/issues/3877",
"Also related: the [Data Catalog Vocabulary - DCAT](https://www.w3.org/TR/vocab-dcat/) standard will be discussed in a new Working Group at the W3C: https://www.w3.org/2022/06/dx-wg-charter.html"
] |
https://api.github.com/repos/huggingface/datasets/issues/3196 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3196/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3196/comments | https://api.github.com/repos/huggingface/datasets/issues/3196/events | https://github.com/huggingface/datasets/pull/3196 | 1,042,223,913 | PR_kwDODunzps4t-bxy | 3,196 | QOL improvements: auto-flatten_indices and desc in map calls | [] | closed | false | null | 0 | 2021-11-02T11:28:50Z | 2021-11-02T15:41:09Z | 2021-11-02T15:41:08Z | null | This PR:
* automatically calls `flatten_indices` where needed: in `unique` and `save_to_disk` to avoid saving the indices file
* adds descriptions to the map calls (see the usage sketch below)
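Neither change touches the public API; as a rough usage sketch of the two pieces involved (toy data and an illustrative `desc` string):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "a"]})

# select() only creates an indices mapping on top of the arrow table;
# flatten_indices() materializes it into a contiguous table, which is what
# this PR now triggers automatically inside unique() and save_to_disk().
subset = ds.select([0, 2]).flatten_indices()

# `desc` labels the progress bar of a map call, as now done for internal maps.
subset = subset.map(lambda example: {"length": len(example["text"])}, desc="Measuring text length")
```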
Fix #3040 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3196/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3196/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3196.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3196",
"merged_at": "2021-11-02T15:41:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3196.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3196"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/554 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/554/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/554/comments | https://api.github.com/repos/huggingface/datasets/issues/554/events | https://github.com/huggingface/datasets/issues/554 | 690,173,214 | MDU6SXNzdWU2OTAxNzMyMTQ= | 554 | nlp downloads to its module path | [] | closed | false | null | 8 | 2020-09-01T14:06:14Z | 2020-09-11T06:19:24Z | 2020-09-11T06:19:24Z | null | I am trying to package `nlp` for Nix, because it is now an optional dependency for `transformers`. The problem that I encounter is that the `nlp` library downloads to the module path, which is typically not writable in most package management systems:
```
>>> import nlp
>>> squad_dataset = nlp.load_dataset('squad')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/nix/store/2yhik0hhqayksmkkfb0ylqp8cf5wa5wp-python3-3.8.5-env/lib/python3.8/site-packages/nlp/load.py", line 530, in load_dataset
module_path, hash = prepare_module(path, download_config=download_config, dataset=True)
File "/nix/store/2yhik0hhqayksmkkfb0ylqp8cf5wa5wp-python3-3.8.5-env/lib/python3.8/site-packages/nlp/load.py", line 329, in prepare_module
os.makedirs(main_folder_path, exist_ok=True)
File "/nix/store/685kq8pyhrvajah1hdsfn4q7gm3j4yd4-python3-3.8.5/lib/python3.8/os.py", line 223, in makedirs
mkdir(name, mode)
OSError: [Errno 30] Read-only file system: '/nix/store/2yhik0hhqayksmkkfb0ylqp8cf5wa5wp-python3-3.8.5-env/lib/python3.8/site-packages/nlp/datasets/squad'
```
Do you have any suggested workaround for this issue?
Perhaps overriding the default value for `force_local_path` of `prepare_module`? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/554/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/554/timeline | null | completed | null | null | false | [
"Indeed this is a known issue arising from the fact that we try to be compatible with cloupickle.\r\n\r\nDoes this also happen if you are installing in a virtual environment?",
"> Indeed this is a know issue with the fact that we try to be compatible with cloupickle.\r\n> \r\n> Does this also happen if you are installing in a virtual environment?\r\n\r\nThen it would work, because the package is in a writable path.",
"If it's fine for you then this is the recommended way to solve this issue.",
"> If it's fine for you then this is the recommended way to solve this issue.\r\n\r\nI don't want to use a virtual environment, because Nix is fully reproducible, and virtual environments are not. And I am the maintainer of the `transformers` in nixpkgs, so sooner or later I will have to package `nlp`, since it is becoming a dependency of `transformers` ;).",
"Ok interesting. We could have another check to see if it's possible to download and import the datasets script at another location than the module path. I think this would probably involve tweaking the python system path dynamically.\r\n\r\nI don't know anything about Nix so if you want to give this a try your self we can guide you or you can give us more information on your general project and how this works.\r\n\r\nRegarding `nlp` and `transformers`, we are not sure `nlp` will become a required dependency for `transformers`. It will probably be used a lot in the examples but I think it probably won't be a required dependency for the main package since we try to keep it as light as possible in terms of deps.\r\n\r\nHappy to help you make all these things work better for your use-case ",
"@danieldk modules are now installed in a different location (by default in the cache directory of the lib, in `~/.cache/huggingface/modules`). You can also change that using the environment variable `HF_MODULES_PATH`\r\n\r\nFeel free to play with this change from the master branch for now, and let us know if it sounds good for you :)\r\nWe plan to do a release in the next coming days",
"Awesome! I’ll hopefully have some time in the coming days to try this.",
"> Feel free to play with this change from the master branch for now, and let us know if it sounds good for you :)\r\n> We plan to do a release in the next coming days\r\n\r\nThanks for making this change! I just packaged the latest commit on master and it works like a charm now! :partying_face: "
] |
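The resolution described in the comments above (dynamic modules now install under `~/.cache/huggingface/modules`, overridable via an environment variable) can be sketched roughly as follows; the target directory is a placeholder, and the exact variable name is an assumption — the comment mentions `HF_MODULES_PATH`, while later `datasets` releases read `HF_MODULES_CACHE`:
```python
import os

# Point the dynamic-module cache at a writable location *before* importing the library.
# HF_MODULES_PATH is the name used in the comment above; newer releases use
# HF_MODULES_CACHE, so setting both is a harmless way to cover either version.
os.environ.setdefault("HF_MODULES_PATH", "/tmp/hf_modules")
os.environ.setdefault("HF_MODULES_CACHE", "/tmp/hf_modules")

import datasets  # noqa: E402  (import after the env vars are set)

squad = datasets.load_dataset("squad", split="train[:10]")
print(squad[0]["question"])
```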
https://api.github.com/repos/huggingface/datasets/issues/5358 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5358/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5358/comments | https://api.github.com/repos/huggingface/datasets/issues/5358/events | https://github.com/huggingface/datasets/pull/5358 | 1,495,270,822 | PR_kwDODunzps5FYBcq | 5,358 | Fix `fs.open` resource leaks | [] | closed | false | null | 3 | 2022-12-13T22:35:51Z | 2023-01-05T16:46:31Z | 2023-01-05T15:59:51Z | null | Invoking `{load,save}_from_dict` results in resource leak warnings; this should fix them.
Introduces no significant logic changes. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5358/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5358/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5358.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5358",
"merged_at": "2023-01-05T15:59:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5358.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5358"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@mariosasko Sorry, I didn't check tests/style after doing a merge from the Git UI last week. Thx for fixing. \r\n\r\nFYI I'm getting \"Only those with [write access](https://docs.github.com/articles/what-are-the-different-access-permissions) to this repository can merge pull requests.\" so it seems somebody else needs to merge this.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008816 / 0.011353 (-0.002536) | 0.004691 / 0.011008 (-0.006317) | 0.100039 / 0.038508 (0.061531) | 0.035422 / 0.023109 (0.012313) | 0.312600 / 0.275898 (0.036702) | 0.378684 / 0.323480 (0.055204) | 0.007593 / 0.007986 (-0.000392) | 0.005183 / 0.004328 (0.000855) | 0.078040 / 0.004250 (0.073790) | 0.041845 / 0.037052 (0.004793) | 0.325251 / 0.258489 (0.066762) | 0.363459 / 0.293841 (0.069618) | 0.038006 / 0.128546 (-0.090540) | 0.011911 / 0.075646 (-0.063735) | 0.335020 / 0.419271 (-0.084251) | 0.048765 / 0.043533 (0.005233) | 0.305913 / 0.255139 (0.050774) | 0.337620 / 0.283200 (0.054420) | 0.101867 / 0.141683 (-0.039816) | 1.450091 / 1.452155 (-0.002064) | 1.437303 / 1.492716 (-0.055413) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.225650 / 0.018006 (0.207644) | 0.492480 / 0.000490 (0.491990) | 0.002857 / 0.000200 (0.002658) | 0.000075 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026231 / 0.037411 (-0.011180) | 0.105479 / 0.014526 (0.090953) | 0.118438 / 0.176557 (-0.058119) | 0.167313 / 0.737135 (-0.569822) | 0.119416 / 0.296338 (-0.176923) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.396233 / 0.215209 (0.181024) | 3.943325 / 2.077655 (1.865671) | 1.778864 / 1.504120 (0.274744) | 1.587957 / 1.541195 (0.046763) | 1.615404 / 1.468490 
(0.146914) | 0.709427 / 4.584777 (-3.875350) | 3.823310 / 3.745712 (0.077598) | 3.461376 / 5.269862 (-1.808486) | 1.888330 / 4.565676 (-2.677346) | 0.086910 / 0.424275 (-0.337365) | 0.012215 / 0.007607 (0.004608) | 0.504877 / 0.226044 (0.278833) | 5.051513 / 2.268929 (2.782584) | 2.249389 / 55.444624 (-53.195235) | 1.890949 / 6.876477 (-4.985528) | 2.015584 / 2.142072 (-0.126489) | 0.862313 / 4.805227 (-3.942914) | 0.166295 / 6.500664 (-6.334369) | 0.061131 / 0.075469 (-0.014338) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.201804 / 1.841788 (-0.639984) | 14.589425 / 8.074308 (6.515117) | 13.855522 / 10.191392 (3.664130) | 0.193406 / 0.680424 (-0.487018) | 0.028614 / 0.534201 (-0.505587) | 0.439857 / 0.579283 (-0.139426) | 0.443330 / 0.434364 (0.008966) | 0.514078 / 0.540337 (-0.026259) | 0.608245 / 1.386936 (-0.778691) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007087 / 0.011353 (-0.004265) | 0.005024 / 0.011008 (-0.005985) | 0.096852 / 0.038508 (0.058344) | 0.032870 / 0.023109 (0.009761) | 0.397790 / 0.275898 (0.121892) | 0.420717 / 0.323480 (0.097237) | 0.005552 / 0.007986 (-0.002434) | 0.003742 / 0.004328 (-0.000586) | 0.074788 / 0.004250 (0.070537) | 0.048030 / 0.037052 (0.010977) | 0.398520 / 0.258489 (0.140031) | 0.460919 / 0.293841 (0.167078) | 0.037652 / 0.128546 (-0.090894) | 0.012249 / 0.075646 (-0.063397) | 0.333077 / 0.419271 (-0.086194) | 0.052364 / 0.043533 (0.008831) | 0.394358 / 0.255139 (0.139219) | 0.414193 / 0.283200 (0.130994) | 0.103569 / 0.141683 (-0.038114) | 1.499208 / 1.452155 (0.047053) | 1.619481 / 1.492716 (0.126764) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229476 / 0.018006 (0.211470) | 0.448670 / 0.000490 (0.448180) | 0.000399 / 0.000200 (0.000199) | 0.000056 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027550 / 0.037411 (-0.009862) | 0.109180 / 0.014526 (0.094654) | 0.118372 / 0.176557 (-0.058185) | 0.153136 / 0.737135 (-0.583999) | 0.122689 / 0.296338 (-0.173650) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.445163 / 0.215209 (0.229954) | 4.426350 / 2.077655 (2.348695) | 2.194902 / 1.504120 (0.690782) | 2.019049 / 1.541195 (0.477854) | 2.032795 / 1.468490 (0.564305) | 0.700752 / 4.584777 (-3.884025) | 3.797616 / 3.745712 (0.051903) | 2.046414 / 5.269862 (-3.223447) | 1.345037 / 4.565676 (-3.220639) | 0.085389 / 0.424275 (-0.338886) | 0.012824 / 0.007607 (0.005217) | 0.553875 / 0.226044 (0.327831) | 5.550252 / 2.268929 (3.281323) | 2.702822 / 55.444624 (-52.741803) | 2.346257 / 6.876477 (-4.530220) | 2.410772 / 2.142072 (0.268699) | 0.848271 / 4.805227 (-3.956957) | 0.170787 / 6.500664 (-6.329877) | 0.064344 / 0.075469 (-0.011125) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.266222 / 1.841788 (-0.575566) | 14.501194 / 8.074308 (6.426886) | 13.413678 / 10.191392 (3.222286) | 0.589048 / 0.680424 (-0.091375) | 0.018246 / 0.534201 (-0.515955) | 0.425221 / 0.579283 (-0.154062) | 0.425900 / 0.434364 (-0.008464) | 0.494023 / 0.540337 (-0.046314) | 0.604324 / 1.386936 (-0.782612) |\n\n</details>\n</details>\n\n\n"
] |
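The class of leak being fixed in the PR above comes from `fsspec` file handles that are opened but never closed; here is a minimal sketch of the safe pattern (the local filesystem and path are placeholders, not the code touched by the PR):
```python
import fsspec

fs = fsspec.filesystem("file")
path = "/tmp/example.json"

# Leaky pattern: the handle returned by fs.open() is never closed, which is what
# triggers ResourceWarning messages like those mentioned in the PR description.
# f = fs.open(path, "w"); f.write("{}")

# Safe pattern: use the handle as a context manager so it is always closed.
with fs.open(path, "w") as f:
    f.write('{"ok": true}')

with fs.open(path, "r") as f:
    print(f.read())
```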
https://api.github.com/repos/huggingface/datasets/issues/4913 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4913/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4913/comments | https://api.github.com/repos/huggingface/datasets/issues/4913/events | https://github.com/huggingface/datasets/pull/4913 | 1,355,232,007 | PR_kwDODunzps4-BP00 | 4,913 | Add license and citation information to cosmos_qa dataset | [] | closed | false | null | 1 | 2022-08-30T06:23:19Z | 2022-08-30T09:49:31Z | 2022-08-30T09:47:35Z | null | This PR adds the license information to the `cosmos_qa` dataset: as reported via email by Yejin Choi, the dataset is licensed under CC BY 4.0.
This PR also updates the citation information. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4913/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4913/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4913.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4913",
"merged_at": "2022-08-30T09:47:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4913.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4913"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/3294 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3294/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3294/comments | https://api.github.com/repos/huggingface/datasets/issues/3294/events | https://github.com/huggingface/datasets/issues/3294 | 1,057,495,473 | I_kwDODunzps4_CBmx | 3,294 | Add Natural Adversarial Objects dataset | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "bfdadc",
"default": false,
"description": "Vision datasets",
"id": 3608941089,
"name": "vision",
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision"
}
] | open | false | null | 0 | 2021-11-18T15:34:44Z | 2021-12-08T12:00:02Z | null | null | ## Adding a Dataset
- **Name:** Natural Adversarial Objects (NAO)
- **Description:** Natural Adversarial Objects (NAO) is a new dataset to evaluate the robustness of object detection models. NAO contains 7,934 images and 9,943 objects that are unmodified and representative of real-world scenarios, but cause state-of-the-art detection models to misclassify with high confidence.
- **Paper:** https://arxiv.org/abs/2111.04204v1
- **Data:** https://drive.google.com/drive/folders/15P8sOWoJku6SSEiHLEts86ORfytGezi8
- **Motivation:** interesting object detection dataset, useful for studying misclassifications
cc @NielsRogge
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3294/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3294/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/5800 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5800/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5800/comments | https://api.github.com/repos/huggingface/datasets/issues/5800/events | https://github.com/huggingface/datasets/pull/5800 | 1,686,348,096 | PR_kwDODunzps5PRTRh | 5,800 | Change downloaded file permission based on umask | [] | closed | false | null | 1 | 2023-04-27T08:13:30Z | 2023-04-27T09:33:05Z | 2023-04-27T09:30:16Z | null | This PR changes the permissions of files downloaded to the cache, so that the umask is taken into account.
Related to:
- #2157
Fix #5799.
CC: @stas00 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5800/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5800/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5800.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5800",
"merged_at": "2023-04-27T09:30:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5800.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5800"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
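A rough sketch of what "taking the umask into account" means for a freshly written cache file; the base mode of 0o666 and the file path are assumptions for illustration, not necessarily the exact values used inside the library:
```python
import os

# os.umask() has no read-only accessor, so read it by setting and restoring it.
umask = os.umask(0)
os.umask(umask)

path = "/tmp/downloaded_file.bin"
with open(path, "wb") as f:
    f.write(b"payload")

# Apply "world-readable/writable minus umask", e.g. 0o666 & ~0o022 -> 0o644.
os.chmod(path, 0o666 & ~umask)
print(oct(os.stat(path).st_mode & 0o777))
```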
https://api.github.com/repos/huggingface/datasets/issues/30 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/30/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/30/comments | https://api.github.com/repos/huggingface/datasets/issues/30/events | https://github.com/huggingface/datasets/pull/30 | 610,549,072 | MDExOlB1bGxSZXF1ZXN0NDExOTY4Mzk3 | 30 | add metrics which require download files from github | [] | closed | false | null | 0 | 2020-05-01T04:13:22Z | 2022-10-04T09:31:58Z | 2020-05-11T08:19:54Z | null | To download files from github, I copied the `load_dataset_module` and its dependencies (without the builder) in `load.py` to `metrics/metric_utils.py`. I made the following changes:
- copy the needed files into a folder `metric_name`
- delete all other files that are not needed
For metrics that require an external import, I first create a `<metric_name>_imports.py` file which contains all external urls. Then I create a `<metric_name>.py` in which I will load the external files using `<metric_name>_imports.py` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/30/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/30/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/30.diff",
"html_url": "https://github.com/huggingface/datasets/pull/30",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/30.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/30"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1027 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1027/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1027/comments | https://api.github.com/repos/huggingface/datasets/issues/1027/events | https://github.com/huggingface/datasets/issues/1027 | 755,695,420 | MDU6SXNzdWU3NTU2OTU0MjA= | 1,027 | Hi | [] | closed | false | null | 0 | 2020-12-02T23:47:14Z | 2020-12-03T16:42:41Z | 2020-12-03T16:42:41Z | null | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons to have this dataset*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1027/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1027/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/2987 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2987/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2987/comments | https://api.github.com/repos/huggingface/datasets/issues/2987/events | https://github.com/huggingface/datasets/issues/2987 | 1,011,026,141 | I_kwDODunzps48Qwjd | 2,987 | ArrowInvalid: Can only convert 1-dimensional array values | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-09-29T14:18:52Z | 2021-10-01T13:57:45Z | 2021-10-01T13:57:45Z | null | ## Describe the bug
For the ViT and LayoutLMv2 demo notebooks in my [Transformers-Tutorials repo](https://github.com/NielsRogge/Transformers-Tutorials), people reported an ArrowInvalid issue after applying the following function to a Dataset:
```
def preprocess_data(examples):
images = [Image.open(path).convert("RGB") for path in examples['image_path']]
words = examples['words']
boxes = examples['bboxes']
word_labels = examples['ner_tags']
encoded_inputs = processor(images, words, boxes=boxes, word_labels=word_labels,
padding="max_length", truncation=True)
return encoded_inputs
```
```
Full trace:
---------------------------------------------------------------------------
ArrowInvalid Traceback (most recent call last)
<ipython-input-8-0fc3efc6f0c2> in <module>()
27
28 train_dataset = datasets['train'].map(preprocess_data, batched=True, remove_columns=datasets['train'].column_names,
---> 29 features=features)
30 test_dataset = datasets['test'].map(preprocess_data, batched=True, remove_columns=datasets['test'].column_names,
31 features=features)
13 frames
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
1701 new_fingerprint=new_fingerprint,
1702 disable_tqdm=disable_tqdm,
-> 1703 desc=desc,
1704 )
1705 else:
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
183 }
184 # apply actual function
--> 185 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
186 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
187 # re-apply format to the output
/usr/local/lib/python3.7/dist-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
396 # Call actual function
397
--> 398 out = func(self, *args, **kwargs)
399
400 # Update fingerprint of in-place transforms + update in-place history of transforms
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2063 writer.write_table(batch)
2064 else:
-> 2065 writer.write_batch(batch)
2066 if update_data and writer is not None:
2067 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file
/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size)
409 typed_sequence = OptimizedTypedSequence(batch_examples[col], type=col_type, try_type=col_try_type, col=col)
410 typed_sequence_examples[col] = typed_sequence
--> 411 pa_table = pa.Table.from_pydict(typed_sequence_examples)
412 self.write_table(pa_table, writer_batch_size)
413
/usr/local/lib/python3.7/dist-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_pydict()
/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.asarray()
/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.array()
/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib._handle_arrow_array_protocol()
/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in __arrow_array__(self, type)
106 storage = numpy_to_pyarrow_listarray(self.data, type=type.value_type)
107 else:
--> 108 storage = pa.array(self.data, type.storage_dtype)
109 out = pa.ExtensionArray.from_storage(type, storage)
110 elif isinstance(self.data, np.ndarray):
/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.array()
/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array()
/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowInvalid: Can only convert 1-dimensional array values
```
It can be fixed by adding the following line:
```diff
def preprocess_data(examples):
images = [Image.open(path).convert("RGB") for path in examples['image_path']]
words = examples['words']
boxes = examples['bboxes']
word_labels = examples['ner_tags']
encoded_inputs = processor(images, words, boxes=boxes, word_labels=word_labels,
padding="max_length", truncation=True)
+ encoded_inputs["image"] = np.array(encoded_inputs["image"])
return encoded_inputs
```
However, it would be great if this could be fixed within Datasets itself. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2987/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2987/timeline | null | completed | null | null | false | [
"Hi @NielsRogge, thanks for reporting!\r\n\r\nIn `datasets`, we were handling N-dimensional arrays only when passed as an instance of `np.array`, not when passed as a list of `np.array`s.\r\n\r\nI'm fixing it."
] |
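The root cause reported in the issue above can be reproduced with plain `pyarrow` and a list of multi-dimensional NumPy arrays; the shapes below are made up, and the last lines mirror the spirit of the workaround (handing over a single stacked array, or nested Python lists, instead of a list of 2-D arrays):
```python
import numpy as np
import pyarrow as pa

# One 2-D array per example (e.g. pixel values) — a list of np.ndarray objects.
batch = [np.zeros((2, 3)) for _ in range(4)]

try:
    pa.array(batch)  # raises: Can only convert 1-dimensional array values
except pa.ArrowInvalid as err:
    print(err)

# Converting to one stacked ndarray (the issue's workaround) lets `datasets`
# take its ndarray code path; in plain pyarrow, nested Python lists also work:
stacked = np.array(batch)            # shape (4, 2, 3)
print(pa.array(stacked.tolist()).type)
```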
https://api.github.com/repos/huggingface/datasets/issues/1735 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1735/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1735/comments | https://api.github.com/repos/huggingface/datasets/issues/1735/events | https://github.com/huggingface/datasets/pull/1735 | 785,184,740 | MDExOlB1bGxSZXF1ZXN0NTU0MjUzMDcw | 1,735 | Update add new dataset template | [] | closed | false | null | 2 | 2021-01-13T15:08:09Z | 2021-01-14T15:16:01Z | 2021-01-14T15:16:00Z | null | This PR fixes a few typos in the "Add new dataset template" and clarifies a bit what to do for the dummy data creation when the `auto_generate` flag can't work. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1735/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1735/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1735.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1735",
"merged_at": "2021-01-14T15:16:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1735.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1735"
} | true | [
"Add new \"dataset\"? ;)",
"Lol, too used to Transformers ;-)"
] |
https://api.github.com/repos/huggingface/datasets/issues/5688 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5688/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5688/comments | https://api.github.com/repos/huggingface/datasets/issues/5688/events | https://github.com/huggingface/datasets/issues/5688 | 1,648,463,504 | I_kwDODunzps5iQY6Q | 5,688 | Wikipedia download_and_prepare for GCS | [] | open | false | null | 2 | 2023-03-30T23:43:22Z | 2023-03-31T13:31:32Z | null | null | ### Describe the bug
I am unable to download the wikipedia dataset onto GCS.
When I run the script provided, the memory first gets eaten up and then it crashes.
I tried running this on a VM with 128GB RAM and all I got was two empty files: _data_builder.lock_, _data.incomplete/beam-temp-wikipedia-train-1ab2039acf3611ed87a9893475de0093_
I have troubleshot this for two straight days now, but I am just unable to get the dataset into storage.
### Steps to reproduce the bug
Run this and insert a path:
```
import datasets
builder = datasets.load_dataset_builder(
"wikipedia", language="en", date="20230320", beam_runner="DirectRunner")
builder.download_and_prepare({path}, file_format="parquet")
```
This is where the problem of it eating RAM occurs.
I have also tried several versions of this, based on the docs:
```
import gcsfs
import datasets
storage_options = {"project": "tdt4310", "token": "cloud"}
fs = gcsfs.GCSFileSystem(**storage_options)
output_dir = "gcs://wikipediadata/"
builder = datasets.load_dataset_builder(
"wikipedia", date="20230320", language="en", beam_runner="DirectRunner")
builder.download_and_prepare(
output_dir, storage_options=storage_options, file_format="parquet")
```
The error message that is received here is:
> ValueError: Unable to get filesystem from specified path, please use the correct path or ensure the required dependency is installed, e.g., pip install apache-beam[gcp]. Path specified: gcs://wikipediadata/wikipedia-train [while running 'train/Save to parquet/Write/WriteImpl/InitializeWrite']
I have run `pip install apache-beam[gcp]`
### Expected behavior
The wikipedia data loaded into GCS
Everything worked when testing with a smaller demo dataset found somewhere in the docs
### Environment info
Newest published version of datasets. Python 3.9. Also tested with Python 3.7. 128GB RAM Google Cloud VM instance. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5688/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5688/timeline | null | null | null | null | false | [
"Hi @adrianfagerland, thanks for reporting.\r\n\r\nPlease note that \"wikipedia\" is a special dataset, with an Apache Beam builder: https://beam.apache.org/\r\nYou can find more info about Beam datasets in our docs: https://huggingface.co/docs/datasets/beam\r\n\r\nIt was implemented to be run in parallel processing, using one of the distributed back-ends supported by Apache Beam: https://beam.apache.org/get-started/beam-overview/#apache-beam-pipeline-runners\r\n\r\nThat is, you are trying to process the source wikipedia data on your machine (not distributed) when passing `beam_runner=\"DirectRunner\"`.\r\n\r\nAs documented in the wikipedia dataset page (https://huggingface.co/datasets/wikipedia):\r\n\r\n Some subsets of Wikipedia have already been processed by HuggingFace, and you can load them just with:\r\n \r\n from datasets import load_dataset\r\n \r\n load_dataset(\"wikipedia\", \"20220301.en\")\r\n\r\n The list of pre-processed subsets is:\r\n - \"20220301.de\"\r\n - \"20220301.en\"\r\n - \"20220301.fr\"\r\n - \"20220301.frr\"\r\n - \"20220301.it\"\r\n - \"20220301.simple\"\r\n\r\nTo download the available processed data (in Arrow format):\r\n```python\r\nbuilder = datasets.load_dataset_builder(\"wikipedia\", \"20220301.en\")\r\nbuilder.download_and_prepare(your_path)\r\n```",
"When running this using :\r\n```\r\nimport datasets\r\nfrom apache_beam.options.pipeline_options import PipelineOptions\r\nfrom gcsfs import GCSFileSystem\r\n\r\nstorage_options = {\"project\":\"tdt4310\", \"token\":\"cloud\"}\r\nfs = GCSFileSystem(**storage_options)\r\n\r\noutput_dir = \"gcs://quiz_transformer/\"\r\nbeam_options = PipelineOptions(\r\n region=\"europe-west4\",\r\n project=\"tdt4310\",\r\n temp_location=output_dir+\"tmp/\")\r\n\r\n\r\nbuilder = datasets.load_dataset_builder(\"wikipedia\", \"20220301.en\", beam_runner=\"dataflow\", beam_options=beam_options)\r\nbuilder.download_and_prepare(\r\n output_dir, storage_options=storage_options, file_format=\"parquet\")\r\n```\r\nI now get this error:\r\n```\r\nraise FileNotFoundError(f\"Couldn't find file at {url}\")\r\nFileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/enwiki/20220301/dumpstatus.json\r\nDownloading data files: 0%| | 0/1 [00:00<?, ?it/s]\r\n```\r\n\r\nI get the same error for this:\r\n```\r\nimport datasets\r\nfrom gcsfs import GCSFileSystem\r\n\r\nstorage_options = {\"project\":\"tdt4310\", \"token\":\"cloud\"}\r\nfs = GCSFileSystem(**storage_options)\r\n\r\noutput_dir = \"gcs://quiz_transformer/\"\r\nbuilder = datasets.load_dataset_builder(\"wikipedia\", \"20220301.en\")\r\nbuilder.download_and_prepare(\r\n output_dir, storage_options=storage_options, file_format=\"parquet\")\r\n```\r\n\r\n\r\n\r\n"
] |
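Pulling the two replies above together, here is a sketch of the path of least resistance — use a pre-processed Wikipedia subset (no Apache Beam involved) and write it straight to a bucket; the project name, token handling and bucket are placeholders:
```python
import datasets

storage_options = {"project": "my-gcp-project", "token": "cloud"}

# "20220301.en" is one of the subsets already processed by Hugging Face,
# so no beam_runner is needed at all.
builder = datasets.load_dataset_builder("wikipedia", "20220301.en")
builder.download_and_prepare(
    "gcs://my-bucket/wikipedia/",
    storage_options=storage_options,
    file_format="parquet",
)
```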