Dataset schema (per-column type and observed value statistics):

| Column | Type | Observed values |
| --- | --- | --- |
| url | string | lengths 61–61 |
| repository_url | string | 1 value |
| labels_url | string | lengths 75–75 |
| comments_url | string | lengths 70–70 |
| events_url | string | lengths 68–68 |
| html_url | string | lengths 49–51 |
| id | int64 | 1.2B–1.82B |
| node_id | string | lengths 18–19 |
| number | int64 | 4.13k–6.08k |
| title | string | lengths 1–290 |
| user | dict | |
| labels | list | |
| state | string | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | |
| milestone | dict | |
| comments | sequence | |
| created_at | timestamp[s] | |
| updated_at | timestamp[s] | |
| closed_at | timestamp[s] | |
| author_association | string | 3 values |
| active_lock_reason | null | |
| draft | bool | 2 classes |
| pull_request | dict | |
| body | string | lengths 2–33.9k, nullable (⌀) |
| reactions | dict | |
| timeline_url | string | lengths 70–70 |
| performed_via_github_app | null | |
| state_reason | string | 3 values |
| is_pull_request | bool | 2 classes |
https://api.github.com/repos/huggingface/datasets/issues/5355
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5355/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5355/comments
https://api.github.com/repos/huggingface/datasets/issues/5355/events
https://github.com/huggingface/datasets/pull/5355
1,493,076,860
PR_kwDODunzps5FQcYG
5,355
Clean up Table class docstrings
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-12-13T00:29:47
2022-12-13T18:17:56
2022-12-13T18:14:42
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5355", "html_url": "https://github.com/huggingface/datasets/pull/5355", "diff_url": "https://github.com/huggingface/datasets/pull/5355.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5355.patch", "merged_at": "2022-12-13T18:14:42" }
This PR cleans up the `Table` class docstrings :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5355/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5355/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5354
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5354/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5354/comments
https://api.github.com/repos/huggingface/datasets/issues/5354/events
https://github.com/huggingface/datasets/issues/5354
1,492,174,125
I_kwDODunzps5Y8MUt
5,354
Consider using "Sequence" instead of "List"
{ "login": "tranhd95", "id": 15568078, "node_id": "MDQ6VXNlcjE1NTY4MDc4", "avatar_url": "https://avatars.githubusercontent.com/u/15568078?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tranhd95", "html_url": "https://github.com/tranhd95", "followers_url": "https://api.github.com/users/tranhd95/followers", "following_url": "https://api.github.com/users/tranhd95/following{/other_user}", "gists_url": "https://api.github.com/users/tranhd95/gists{/gist_id}", "starred_url": "https://api.github.com/users/tranhd95/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tranhd95/subscriptions", "organizations_url": "https://api.github.com/users/tranhd95/orgs", "repos_url": "https://api.github.com/users/tranhd95/repos", "events_url": "https://api.github.com/users/tranhd95/events{/privacy}", "received_events_url": "https://api.github.com/users/tranhd95/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
open
false
{ "login": "avinashsai", "id": 22453634, "node_id": "MDQ6VXNlcjIyNDUzNjM0", "avatar_url": "https://avatars.githubusercontent.com/u/22453634?v=4", "gravatar_id": "", "url": "https://api.github.com/users/avinashsai", "html_url": "https://github.com/avinashsai", "followers_url": "https://api.github.com/users/avinashsai/followers", "following_url": "https://api.github.com/users/avinashsai/following{/other_user}", "gists_url": "https://api.github.com/users/avinashsai/gists{/gist_id}", "starred_url": "https://api.github.com/users/avinashsai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/avinashsai/subscriptions", "organizations_url": "https://api.github.com/users/avinashsai/orgs", "repos_url": "https://api.github.com/users/avinashsai/repos", "events_url": "https://api.github.com/users/avinashsai/events{/privacy}", "received_events_url": "https://api.github.com/users/avinashsai/received_events", "type": "User", "site_admin": false }
[ { "login": "avinashsai", "id": 22453634, "node_id": "MDQ6VXNlcjIyNDUzNjM0", "avatar_url": "https://avatars.githubusercontent.com/u/22453634?v=4", "gravatar_id": "", "url": "https://api.github.com/users/avinashsai", "html_url": "https://github.com/avinashsai", "followers_url": "https://api.github.com/users/avinashsai/followers", "following_url": "https://api.github.com/users/avinashsai/following{/other_user}", "gists_url": "https://api.github.com/users/avinashsai/gists{/gist_id}", "starred_url": "https://api.github.com/users/avinashsai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/avinashsai/subscriptions", "organizations_url": "https://api.github.com/users/avinashsai/orgs", "repos_url": "https://api.github.com/users/avinashsai/repos", "events_url": "https://api.github.com/users/avinashsai/events{/privacy}", "received_events_url": "https://api.github.com/users/avinashsai/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi! Linking a comment to provide more info on the issue: https://stackoverflow.com/a/39458225. This means we should replace all (most of) the occurrences of `List` with `Sequence` in function signatures.\r\n\r\n@tranhd95 Would you be interested in submitting a PR?", "Hi all! I tried to reproduce this issue and didn't work for me. Also in your example i noticed that the variables have different names: `list_of_filenames` and `list_of_files`, could this be related to that?\r\n```python\r\n#I found random data in parquet format:\r\n!wget \"https://github.com/Teradata/kylo/raw/master/samples/sample-data/parquet/userdata1.parquet\"\r\n!wget \"https://github.com/Teradata/kylo/raw/master/samples/sample-data/parquet/userdata2.parquet\"\r\n\r\n#Then i try reproduce\r\nlist_of_files = [\"userdata1.parquet\", \"userdata2.parquet\"]\r\nds = Dataset.from_parquet(list_of_files)\r\n```\r\n**My output:**\r\n```python\r\nWARNING:datasets.builder:Using custom data configuration default-e287d097dc54e046\r\nDownloading and preparing dataset parquet/default to /root/.cache/huggingface/datasets/parquet/default-e287d097dc54e046/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec...\r\nDownloading data files: 100%\r\n1/1 [00:00<00:00, 40.38it/s]\r\nExtracting data files: 100%\r\n1/1 [00:00<00:00, 23.43it/s]\r\nDataset parquet downloaded and prepared to /root/.cache/huggingface/datasets/parquet/default-e287d097dc54e046/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec. Subsequent calls will reuse this data.\r\n```\r\nP.S. This is my first experience with open source. So do not judge strictly if I do not understand something)", "@dantema There is indeed a typo in variable names. Nevertheless, I'm sorry if I was not clear but the output is from `mypy` type checker. You can run the code snippet without issues. The problem is with the type checking.", "However, I found out that the type annotation is actually misleading. The [`from_parquet`](https://github.com/huggingface/datasets/blob/5ef1ab1cc06c2b7a574bf2df454cd9fcb071ccb2/src/datasets/arrow_dataset.py#L1039) method should also accept list of [`PathLike`](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/typing.py#L8) objects which includes [`os.PathLike`](https://docs.python.org/3/library/os.html#os.PathLike). 
But if I would ran the code snippet below, an exception is thrown.\r\n\r\n**Code**\r\n```py\r\nfrom pathlib import Path\r\n\r\nlist_of_filenames = [Path(\"foo.parquet\"), Path(\"bar.parquet\")]\r\nds = Dataset.from_parquet(list_of_filenames)\r\n```\r\n**Output**\r\n```py\r\n[/usr/local/lib/python3.8/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in from_parquet(path_or_paths, split, features, cache_dir, keep_in_memory, columns, **kwargs)\r\n 1071 from .io.parquet import ParquetDatasetReader\r\n 1072 \r\n-> 1073 return ParquetDatasetReader(\r\n 1074 path_or_paths,\r\n 1075 split=split,\r\n\r\n[/usr/local/lib/python3.8/dist-packages/datasets/io/parquet.py](https://localhost:8080/#) in __init__(self, path_or_paths, split, features, cache_dir, keep_in_memory, streaming, **kwargs)\r\n 35 path_or_paths = path_or_paths if isinstance(path_or_paths, dict) else {self.split: path_or_paths}\r\n 36 hash = _PACKAGED_DATASETS_MODULES[\"parquet\"][1]\r\n---> 37 self.builder = Parquet(\r\n 38 cache_dir=cache_dir,\r\n 39 data_files=path_or_paths,\r\n\r\n[/usr/local/lib/python3.8/dist-packages/datasets/builder.py](https://localhost:8080/#) in __init__(self, cache_dir, config_name, hash, base_path, info, features, use_auth_token, repo_id, data_files, data_dir, name, **config_kwargs)\r\n 298 \r\n 299 if data_files is not None and not isinstance(data_files, DataFilesDict):\r\n--> 300 data_files = DataFilesDict.from_local_or_remote(\r\n 301 sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token\r\n 302 )\r\n\r\n[/usr/local/lib/python3.8/dist-packages/datasets/data_files.py](https://localhost:8080/#) in from_local_or_remote(cls, patterns, base_path, allowed_extensions, use_auth_token)\r\n 794 for key, patterns_for_key in patterns.items():\r\n 795 out[key] = (\r\n--> 796 DataFilesList.from_local_or_remote(\r\n 797 patterns_for_key,\r\n 798 base_path=base_path,\r\n\r\n[/usr/local/lib/python3.8/dist-packages/datasets/data_files.py](https://localhost:8080/#) in from_local_or_remote(cls, patterns, base_path, allowed_extensions, use_auth_token)\r\n 762 ) -> \"DataFilesList\":\r\n 763 base_path = base_path if base_path is not None else str(Path().resolve())\r\n--> 764 data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)\r\n 765 origin_metadata = _get_origin_metadata_locally_or_by_urls(data_files, use_auth_token=use_auth_token)\r\n 766 return cls(data_files, origin_metadata)\r\n\r\n[/usr/local/lib/python3.8/dist-packages/datasets/data_files.py](https://localhost:8080/#) in resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)\r\n 357 data_files = []\r\n 358 for pattern in patterns:\r\n--> 359 if is_remote_url(pattern):\r\n 360 data_files.append(Url(pattern))\r\n 361 else:\r\n\r\n[/usr/local/lib/python3.8/dist-packages/datasets/utils/file_utils.py](https://localhost:8080/#) in is_remote_url(url_or_filename)\r\n 62 \r\n 63 def is_remote_url(url_or_filename: str) -> bool:\r\n---> 64 parsed = urlparse(url_or_filename)\r\n 65 return parsed.scheme in (\"http\", \"https\", \"s3\", \"gs\", \"hdfs\", \"ftp\")\r\n 66 \r\n\r\n[/usr/lib/python3.8/urllib/parse.py](https://localhost:8080/#) in urlparse(url, scheme, allow_fragments)\r\n 373 Note that we don't break the components up in smaller bits\r\n 374 (e.g. 
netloc is a single string) and we don't expand % escapes.\"\"\"\r\n--> 375 url, scheme, _coerce_result = _coerce_args(url, scheme)\r\n 376 splitresult = urlsplit(url, scheme, allow_fragments)\r\n 377 scheme, netloc, url, query, fragment = splitresult\r\n\r\n[/usr/lib/python3.8/urllib/parse.py](https://localhost:8080/#) in _coerce_args(*args)\r\n 125 if str_input:\r\n 126 return args + (_noop,)\r\n--> 127 return _decode_args(args) + (_encode_result,)\r\n 128 \r\n 129 # Result objects are more helpful than simple tuples\r\n\r\n[/usr/lib/python3.8/urllib/parse.py](https://localhost:8080/#) in _decode_args(args, encoding, errors)\r\n 109 def _decode_args(args, encoding=_implicit_encoding,\r\n 110 errors=_implicit_errors):\r\n--> 111 return tuple(x.decode(encoding, errors) if x else '' for x in args)\r\n 112 \r\n 113 def _coerce_args(*args):\r\n\r\n[/usr/lib/python3.8/urllib/parse.py](https://localhost:8080/#) in <genexpr>(.0)\r\n 109 def _decode_args(args, encoding=_implicit_encoding,\r\n 110 errors=_implicit_errors):\r\n--> 111 return tuple(x.decode(encoding, errors) if x else '' for x in args)\r\n 112 \r\n 113 def _coerce_args(*args):\r\n\r\nAttributeError: 'PosixPath' object has no attribute 'decode'\r\n```\r\n\r\n@mariosasko Should I create a new issue? ", "@mariosasko I would like to take this issue up. ", "@avinashsai Hi, I've assigned you the issue.\r\n\r\n@tranhd95 Yes, feel free to report this in a new issue.", "@avinashsai Are you still working on this? If not I would like to give it a try." ]
2022-12-12T15:39:45
2023-07-26T16:25:51
null
NONE
null
null
null
### Feature request

Hi, please consider using `Sequence` type annotation instead of `List` in function arguments such as in [`Dataset.from_parquet()`](https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L1088). It leads to type checking errors, see below.

**How to reproduce**

```py
list_of_filenames = ["foo.parquet", "bar.parquet"]
ds = Dataset.from_parquet(list_of_filenames)
```

**Expected mypy output:**

```
Success: no issues found
```

**Actual mypy output:**

```py
test.py:19: error: Argument 1 to "from_parquet" of "Dataset" has incompatible type "List[str]"; expected "Union[Union[str, bytes, PathLike[Any]], List[Union[str, bytes, PathLike[Any]]]]"  [arg-type]
test.py:19: note: "List" is invariant -- see https://mypy.readthedocs.io/en/stable/common_issues.html#variance
test.py:19: note: Consider using "Sequence" instead, which is covariant
```

**Env:** mypy 0.991, Python 3.10.0, datasets 2.7.1
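A minimal, self-contained sketch of the variance issue the report describes; `takes_list` and `takes_sequence` are hypothetical stand-ins for `from_parquet`-style signatures, not the actual library code:

```python
import os
from typing import List, Sequence, Union

PathLike = Union[str, bytes, os.PathLike]

def takes_list(path_or_paths: Union[PathLike, List[PathLike]]) -> None: ...

def takes_sequence(path_or_paths: Union[PathLike, Sequence[PathLike]]) -> None: ...

files: List[str] = ["foo.parquet", "bar.parquet"]
takes_list(files)      # mypy error: List is invariant, so List[str] is not a List[PathLike]
takes_sequence(files)  # OK: Sequence is covariant, so List[str] is a Sequence[PathLike]
```

Since `Sequence` is read-only (no `append`), accepting it is safe whenever the function does not mutate the argument, which is why mypy recommends it here.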
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5354/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5354/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5353
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5353/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5353/comments
https://api.github.com/repos/huggingface/datasets/issues/5353/events
https://github.com/huggingface/datasets/issues/5353
1,491,880,500
I_kwDODunzps5Y7Eo0
5,353
Support remote file systems for `Audio`
{ "login": "OllieBroadhurst", "id": 46894149, "node_id": "MDQ6VXNlcjQ2ODk0MTQ5", "avatar_url": "https://avatars.githubusercontent.com/u/46894149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/OllieBroadhurst", "html_url": "https://github.com/OllieBroadhurst", "followers_url": "https://api.github.com/users/OllieBroadhurst/followers", "following_url": "https://api.github.com/users/OllieBroadhurst/following{/other_user}", "gists_url": "https://api.github.com/users/OllieBroadhurst/gists{/gist_id}", "starred_url": "https://api.github.com/users/OllieBroadhurst/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/OllieBroadhurst/subscriptions", "organizations_url": "https://api.github.com/users/OllieBroadhurst/orgs", "repos_url": "https://api.github.com/users/OllieBroadhurst/repos", "events_url": "https://api.github.com/users/OllieBroadhurst/events{/privacy}", "received_events_url": "https://api.github.com/users/OllieBroadhurst/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Just seen https://github.com/huggingface/datasets/issues/5281" ]
2022-12-12T13:22:13
2022-12-12T13:37:14
2022-12-12T13:37:14
NONE
null
null
null
### Feature request

Hi there! It would be super cool if `Audio()`, and potentially other features, could read files from a remote file system.

### Motivation

Large amounts of data are often stored in buckets. `load_from_disk` is able to retrieve data from cloud storage, but to my knowledge it actually copies the datasets across first, so if you're working off a system with smaller disk specs (like a VM), you can run out of space very quickly.

### Your contribution

Something like this (for Google Cloud Platform in this instance):

```python
from datasets import Dataset, Audio
import gcsfs

fs = gcsfs.GCSFileSystem()
list_of_audio_fp = {'audio': ['1', '2', '3']}
ds = Dataset.from_dict(list_of_audio_fp)
ds = ds.cast_column("audio", Audio(sampling_rate=16000, fs=fs))
```

Under the hood:

```python
import librosa
from io import BytesIO

def load_audio(fp, sampling_rate=None, fs=None):
    if fs is not None:
        with fs.open(fp, 'rb') as f:
            # read the remote bytes before handing them to librosa
            arr, sr = librosa.load(BytesIO(f.read()), sr=sampling_rate)
    else:
        ...  # perform existing io operations
```

Written from memory so some things could be wrong.
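Until remote file systems are supported natively, one workaround sketch: read the remote bytes up front and let the `Audio` feature decode them. This assumes GCS credentials are already configured, relies on `Audio` accepting `{"bytes": ..., "path": ...}` dicts, and uses hypothetical bucket paths:

```python
import gcsfs
from datasets import Audio, Dataset

fs = gcsfs.GCSFileSystem()

# Hypothetical bucket paths; fetch each file's bytes so Audio can decode locally.
paths = ["my-bucket/clip1.wav", "my-bucket/clip2.wav"]
ds = Dataset.from_dict(
    {"audio": [{"bytes": fs.open(p, "rb").read(), "path": p} for p in paths]}
)
ds = ds.cast_column("audio", Audio(sampling_rate=16000))
```

Note this still materializes the audio bytes locally in the Arrow table, so it sidesteps the decoding question but not the storage concern that motivates the request.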
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5353/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5353/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5352
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5352/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5352/comments
https://api.github.com/repos/huggingface/datasets/issues/5352/events
https://github.com/huggingface/datasets/issues/5352
1,490,796,414
I_kwDODunzps5Y279-
5,352
__init__() got an unexpected keyword argument 'input_size'
{ "login": "J-shel", "id": 82662111, "node_id": "MDQ6VXNlcjgyNjYyMTEx", "avatar_url": "https://avatars.githubusercontent.com/u/82662111?v=4", "gravatar_id": "", "url": "https://api.github.com/users/J-shel", "html_url": "https://github.com/J-shel", "followers_url": "https://api.github.com/users/J-shel/followers", "following_url": "https://api.github.com/users/J-shel/following{/other_user}", "gists_url": "https://api.github.com/users/J-shel/gists{/gist_id}", "starred_url": "https://api.github.com/users/J-shel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/J-shel/subscriptions", "organizations_url": "https://api.github.com/users/J-shel/orgs", "repos_url": "https://api.github.com/users/J-shel/repos", "events_url": "https://api.github.com/users/J-shel/events{/privacy}", "received_events_url": "https://api.github.com/users/J-shel/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi @J-shel, thanks for reporting.\r\n\r\nI think the issue comes from your call to `load_dataset`. As first argument, you should pass:\r\n- either the name of your dataset (\"mrf\") if this is already published on the Hub\r\n- or the path to the loading script of your dataset (\"path/to/your/local/mrf.py\").", "Hi, following your suggestion, I changed my call to load_dataset. Below is the latest:\r\nreader = load_dataset('data/mrf.py',\"default\", input_size=1024, split=split, streaming=True, keep_in_memory=None)\r\nHowever, I still got the same error.\r\nI have one question that is if I only define input_size=2048 in BUILDER_CONFIGS, may I specify input_size=1024 when loading the dataset? Cause I found that I could only specify name=\"default\" since I only define name=\"default\" in BUILDER_CONFIGS." ]
2022-12-12T02:52:03
2022-12-19T01:38:48
null
NONE
null
null
null
### Describe the bug

I try to define a custom configuration with an `input_size` attribute, following the instructions in "Specifying several dataset configurations" at https://huggingface.co/docs/datasets/v1.2.1/add_dataset.html. But when I load the dataset, I get the error "__init__() got an unexpected keyword argument 'input_size'".

### Steps to reproduce the bug

Following is the code to define the dataset:

```python
class CsvConfig(datasets.BuilderConfig):
    """BuilderConfig for CSV."""

    input_size: int = 2048


class MRF(datasets.ArrowBasedBuilder):
    """Archival MRF data"""

    BUILDER_CONFIG_CLASS = CsvConfig
    VERSION = datasets.Version("1.0.0")
    BUILDER_CONFIGS = [
        CsvConfig(name="default", version=VERSION, description="MRF data", input_size=2048),
    ]
    ...

    def _generate_examples(self):
        input_size = self.config.input_size
        if input_size > 1000:
            numin = 10000
        else:
            numin = 15000
```

Below is the code to load the dataset:

```python
reader = load_dataset("default", input_size=1024)
```

### Expected behavior

I hope to pass the "input_size" parameter to MRF datasets, and change "input_size" to any value when loading the dataset.

### Environment info

- `datasets` version: 2.5.1
- Platform: Linux-4.18.0-305.3.1.el8.x86_64-x86_64-with-glibc2.31
- Python version: 3.9.12
- PyArrow version: 9.0.0
- Pandas version: 1.5.0
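For what it's worth, `BuilderConfig` is a dataclass, so a custom field only becomes an `__init__` keyword if the subclass is decorated as a dataclass too. A sketch of the config the report seems to intend:

```python
from dataclasses import dataclass

import datasets


@dataclass
class CsvConfig(datasets.BuilderConfig):
    """BuilderConfig for CSV with a tunable input size."""

    input_size: int = 2048
```

Combined with the maintainer's comment above, loading would then pass the loading-script path rather than the config name, e.g. `load_dataset("path/to/mrf.py", "default", input_size=1024)`, and the `input_size=1024` kwarg should override the named config's default.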
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5352/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5352/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5351
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5351/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5351/comments
https://api.github.com/repos/huggingface/datasets/issues/5351/events
https://github.com/huggingface/datasets/issues/5351
1,490,659,504
I_kwDODunzps5Y2aiw
5,351
Do we need to implement `_prepare_split`?
{ "login": "jmwoloso", "id": 7530947, "node_id": "MDQ6VXNlcjc1MzA5NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/7530947?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmwoloso", "html_url": "https://github.com/jmwoloso", "followers_url": "https://api.github.com/users/jmwoloso/followers", "following_url": "https://api.github.com/users/jmwoloso/following{/other_user}", "gists_url": "https://api.github.com/users/jmwoloso/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmwoloso/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmwoloso/subscriptions", "organizations_url": "https://api.github.com/users/jmwoloso/orgs", "repos_url": "https://api.github.com/users/jmwoloso/repos", "events_url": "https://api.github.com/users/jmwoloso/events{/privacy}", "received_events_url": "https://api.github.com/users/jmwoloso/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi! `DatasetBuilder` is a parent class for concrete builders: `GeneratorBasedBuilder`, `ArrowBasedBuilder` and `BeamBasedBuilder`. When writing a builder script, these classes are the ones you should inherit from. And since all of them implement `_prepare_split`, you only have to implement the three methods mentioned above.", "Thanks so much @mariosasko for the fast response! I've been referencing [this page in the docs](https://huggingface.co/docs/datasets/v2.4.0/en/about_dataset_load) because it it pretty comprehensive in terms of what we have to do and I figured since we subclass the `BuilderConfig` the same pattern would hold, but I've also seen the page with those sub-classed builders as well, so that fills in a knowledge gap for me.", "cc @stevhliu who may have some ideas on how to improve this part of the docs.", "one more question for my understanding @mariosasko. the requirement of a loading script has always seemed counterintuitive to me. if i have to provide a script with every dataset, what is the point of using `datasets` if we're doing all the work of loading it, I can just do that in my code and skip the datasets integration (this of course discounts other potential benefits around metadata management, etc., my example is just simplest use case though for the sake of discussion).\r\n\r\nso i figured I would implement my own `BuilderConfig` and `DatasetBuilder` to handle that portion of it and not have to make a script. i _thought_ this would result in `datasets` (via `download_and_prepare`) then making me something that I could load using `load_dataset` moving forward.\r\n\r\nConcretely, i envisioned this pattern being possible:\r\n\r\n ```\r\nclass MyBuilderConfig(BuilderConfig):\r\n def __init__(self, name=\"my_named_dataset\", ...):\r\n super().__init__(name, ...)\r\n\r\nclass MyDatasetBuilder(GeneratorBasedBuilder):\r\n BUILDER_CONFIG_CLASS = MyBuilderConfig\r\n ....\r\n\r\nmy_builder = MyDatasetBuilder(...)\r\n\r\n# this doesn't exactly work like I thought; I don't get a dataset back, but NoneType instead\r\n# though I can see it loading the files and it generates the cache, etc.\r\nmy_dataset = my_builder.download_and_prepare()\r\n\r\n# load the dataset in the future by referencing it by name and loading from the cached arrow version\r\nnew_instance_of_my_dataset = load_dataset(\"my_named_dataset\")\r\n```\r\n\r\nI've seen references to the `save_to_disk` method which might be the next step I need in order to load it by name, in which case, that makes sense, then i just need to debug why `download_and_prepare` isn't returning me a dataset, but I feel like I still have a larger conceptual knowledge gap on how to use the library correctly.\r\n\r\nThanks again in advance!", "> the requirement of a loading script has always seemed counterintuitive to me\r\n\r\nThis is a requirement only for datasets not stored in standard formats such as CSV, JSON, SQL, Parquet, ImageFolder, etc. \r\n\r\n> if i have to provide a script with every dataset, what is the point of using datasets if we're doing all the work of loading it, I can just do that in my code and skip the datasets integration (this of course discounts other potential benefits around metadata management, etc., my example is just simplest use case though for the sake of discussion)\r\n\r\nOur README/documentation lists the main features... 
\r\n\r\nOne of the main ones is that our library makes it easy to work with datasets larger than RAM (thanks to Arrow and the caching mechanism), and this is not trivial to implement.\r\n\r\nRegarding the step-by-step builder, this is the pattern:\r\n```python\r\nfrom datasets import load_dataset_builder\r\nbuilder = load_dataset_builder(\"path/to/script\") # or direct instantiation with MyDatasetBuilder(...)\r\nbuilder.download_and_prepare()\r\ndset = builder.as_dataset()\r\n```", "ok, that makes sense. thank you @mariosasko. I realized i'd never looked on the hub at any of the files associated with any datasets. just did that now and it appears that i'll need to have a script regardless _but_ that will just contain my custom config and builder classes, so without realizing it I was already making my script, I just need to wrap that in a file that sits alongside my data (I looked at Glue and realized I was already doing what I thought didn't make sense to have to do, lol).\r\n\r\n`download_and_prepare` isn't returning me a dataset though, but I'll look into that and open another issue if I can't figure it out.", "`download_and_prepare` downloads and prepares the arrow files. You need to call `as_dataset` on the builder to get the dataset.", "ok, I think I was assigning the output of `builder.download_and_prepare` but it's an inplace op, so that explains the `NoneType` i was getting back. Now I'm getting:\r\n\r\n```\r\nArrowInvalid Traceback (most recent call last)\r\n<ipython-input-7-3ed50fb87c70> in <module>\r\n----> 1 ds = dataset_builder.as_dataset()\r\n\r\n/databricks/python/lib/python3.8/site-packages/datasets/builder.py in as_dataset(self, split, run_post_process, ignore_verifications, in_memory)\r\n 1020 \r\n 1021 # Create a dataset for each of the given splits\r\n-> 1022 datasets = map_nested(\r\n 1023 partial(\r\n 1024 self._build_single_dataset,\r\n\r\n/databricks/python/lib/python3.8/site-packages/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, parallel_min_length, types, disable_tqdm, desc)\r\n 442 num_proc = 1\r\n 443 if num_proc <= 1 or len(iterable) < parallel_min_length:\r\n--> 444 mapped = [\r\n 445 _single_map_nested((function, obj, types, None, True, None))\r\n 446 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)\r\n\r\n/databricks/python/lib/python3.8/site-packages/datasets/utils/py_utils.py in <listcomp>(.0)\r\n 443 if num_proc <= 1 or len(iterable) < parallel_min_length:\r\n 444 mapped = [\r\n--> 445 _single_map_nested((function, obj, types, None, True, None))\r\n 446 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)\r\n 447 ]\r\n\r\n/databricks/python/lib/python3.8/site-packages/datasets/utils/py_utils.py in _single_map_nested(args)\r\n 344 # Singleton first to spare some computation\r\n 345 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):\r\n--> 346 return function(data_struct)\r\n 347 \r\n 348 # Reduce logging to keep things readable in multiprocessing with tqdm\r\n\r\n/databricks/python/lib/python3.8/site-packages/datasets/builder.py in _build_single_dataset(self, split, run_post_process, ignore_verifications, in_memory)\r\n 1051 \r\n 1052 # Build base dataset\r\n-> 1053 ds = self._as_dataset(\r\n 1054 split=split,\r\n 1055 in_memory=in_memory,\r\n\r\n/databricks/python/lib/python3.8/site-packages/datasets/builder.py in _as_dataset(self, split, in_memory)\r\n 1120 \"\"\"\r\n 1121 cache_dir = 
self._fs._strip_protocol(self._output_dir)\r\n-> 1122 dataset_kwargs = ArrowReader(cache_dir, self.info).read(\r\n 1123 name=self.name,\r\n 1124 instructions=split,\r\n\r\n/databricks/python/lib/python3.8/site-packages/datasets/arrow_reader.py in read(self, name, instructions, split_infos, in_memory)\r\n 236 msg = f'Instruction \"{instructions}\" corresponds to no data!'\r\n 237 raise ValueError(msg)\r\n--> 238 return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory)\r\n 239 \r\n 240 def read_files(\r\n\r\n/databricks/python/lib/python3.8/site-packages/datasets/arrow_reader.py in read_files(self, files, original_instructions, in_memory)\r\n 257 \"\"\"\r\n 258 # Prepend path to filename\r\n--> 259 pa_table = self._read_files(files, in_memory=in_memory)\r\n 260 # If original_instructions is not None, convert it to a human-readable NamedSplit\r\n 261 if original_instructions is not None:\r\n\r\n/databricks/python/lib/python3.8/site-packages/datasets/arrow_reader.py in _read_files(self, files, in_memory)\r\n 192 f[\"filename\"] = os.path.join(self._path, f[\"filename\"])\r\n 193 for f_dict in files:\r\n--> 194 pa_table: Table = self._get_table_from_filename(f_dict, in_memory=in_memory)\r\n 195 pa_tables.append(pa_table)\r\n 196 pa_tables = [t for t in pa_tables if len(t) > 0]\r\n\r\n/databricks/python/lib/python3.8/site-packages/datasets/arrow_reader.py in _get_table_from_filename(self, filename_skip_take, in_memory)\r\n 327 filename_skip_take[\"take\"] if \"take\" in filename_skip_take else None,\r\n 328 )\r\n--> 329 table = ArrowReader.read_table(filename, in_memory=in_memory)\r\n 330 if take == -1:\r\n 331 take = len(table) - skip\r\n\r\n/databricks/python/lib/python3.8/site-packages/datasets/arrow_reader.py in read_table(filename, in_memory)\r\n 348 \"\"\"\r\n 349 table_cls = InMemoryTable if in_memory else MemoryMappedTable\r\n--> 350 return table_cls.from_file(filename)\r\n 351 \r\n 352 \r\n\r\n/databricks/python/lib/python3.8/site-packages/datasets/table.py in from_file(cls, filename, replays)\r\n 1034 @classmethod\r\n 1035 def from_file(cls, filename: str, replays=None):\r\n-> 1036 table = _memory_mapped_arrow_table_from_file(filename)\r\n 1037 table = cls._apply_replays(table, replays)\r\n 1038 return cls(table, filename, replays)\r\n\r\n/databricks/python/lib/python3.8/site-packages/datasets/table.py in _memory_mapped_arrow_table_from_file(filename)\r\n 48 def _memory_mapped_arrow_table_from_file(filename: str) -> pa.Table:\r\n 49 memory_mapped_stream = pa.memory_map(filename)\r\n---> 50 opened_stream = pa.ipc.open_stream(memory_mapped_stream)\r\n 51 pa_table = opened_stream.read_all()\r\n 52 return pa_table\r\n\r\n/databricks/python/lib/python3.8/site-packages/pyarrow/ipc.py in open_stream(source)\r\n 152 reader : RecordBatchStreamReader\r\n 153 \"\"\"\r\n--> 154 return RecordBatchStreamReader(source)\r\n 155 \r\n 156 \r\n\r\n/databricks/python/lib/python3.8/site-packages/pyarrow/ipc.py in __init__(self, source)\r\n 43 \r\n 44 def __init__(self, source):\r\n---> 45 self._open(source)\r\n 46 \r\n 47 \r\n\r\n/databricks/python/lib/python3.8/site-packages/pyarrow/ipc.pxi in pyarrow.lib._RecordBatchStreamReader._open()\r\n\r\n/databricks/python/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()\r\n\r\n/databricks/python/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()\r\n\r\nArrowInvalid: Tried reading schema message, was null or length 0\r\n```\r\n\r\n", "looks like my arrow files 
are all empty @mariosasko \r\n\r\n![image](https://user-images.githubusercontent.com/7530947/208179977-9ae62c9a-866c-472b-9a09-25d1191188fb.png)\r\n\r\n\r\ni also see the `incomplete_info.lock` file a level up too. seems like the data isn't being persisted to disk when I call `download_and_prepare`. is there something else i need to do before then, perhaps?", "quick update @mariosasko. i got it working! i had to downgrade to `datasets==2.4.0`. testing other versions now and will let you know the results.", "I've tested with every version of `datasets>2.4.0` and i get the same error with all of them." ]
2022-12-12T01:38:54
2022-12-20T18:20:57
2022-12-12T16:48:56
NONE
null
null
null
### Describe the bug

I'm not sure if this is a bug, if it's just missing in the documentation, or if I'm not doing something correctly, but I'm subclassing `DatasetBuilder` and getting the following error because on the `DatasetBuilder` class the `_prepare_split` method is abstract (as are the others we are required to implement, hence the genesis of my question):

```
Traceback (most recent call last):
  File "/home/jason/source/python/prism_machine_learning/examples/create_hf_datasets.py", line 28, in <module>
    dataset_builder.download_and_prepare()
  File "/home/jason/.virtualenvs/pml/lib/python3.8/site-packages/datasets/builder.py", line 704, in download_and_prepare
    self._download_and_prepare(
  File "/home/jason/.virtualenvs/pml/lib/python3.8/site-packages/datasets/builder.py", line 793, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/home/jason/.virtualenvs/pml/lib/python3.8/site-packages/datasets/builder.py", line 1124, in _prepare_split
    raise NotImplementedError()
NotImplementedError
```

### Steps to reproduce the bug

I will share the implementation if it turns out that everything should be working (i.e. we only need to implement those 3 methods the docs mention), but I don't want to distract from the original question.

### Expected behavior

I just need to know if there are additional methods we need to implement when subclassing `DatasetBuilder` besides what the documentation specifies -> `_info`, `_split_generators` and `_generate_examples`

### Environment info

- `datasets` version: 2.4.0
- Platform: Linux-5.4.0-135-generic-x86_64-with-glibc2.2.5
- Python version: 3.8.12
- PyArrow version: 7.0.0
- Pandas version: 1.4.1
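A minimal skeleton of the pattern the maintainers describe in the comments above: inherit from a concrete builder such as `GeneratorBasedBuilder` (which supplies `_prepare_split`) and implement only the three documented methods. The file name and features below are placeholders:

```python
import datasets


class MyDataset(datasets.GeneratorBasedBuilder):
    VERSION = datasets.Version("1.0.0")

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        # A local placeholder file; real builders usually fetch data via dl_manager.
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN, gen_kwargs={"filepath": "train.txt"}
            )
        ]

    def _generate_examples(self, filepath):
        with open(filepath, encoding="utf-8") as f:
            for idx, line in enumerate(f):
                yield idx, {"text": line.strip()}
```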
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5351/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5351/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5350
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5350/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5350/comments
https://api.github.com/repos/huggingface/datasets/issues/5350/events
https://github.com/huggingface/datasets/pull/5350
1,487,559,904
PR_kwDODunzps5E8y2E
5,350
Clean up Loading methods docstrings
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-12-09T22:25:30
2022-12-12T17:27:20
2022-12-12T17:24:01
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5350", "html_url": "https://github.com/huggingface/datasets/pull/5350", "diff_url": "https://github.com/huggingface/datasets/pull/5350.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5350.patch", "merged_at": "2022-12-12T17:24:01" }
Clean up for the docstrings in Loading methods!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5350/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5350/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5349
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5349/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5349/comments
https://api.github.com/repos/huggingface/datasets/issues/5349/events
https://github.com/huggingface/datasets/pull/5349
1,487,396,780
PR_kwDODunzps5E8N6G
5,349
Clean up remaining Main Classes docstrings
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-12-09T20:17:15
2022-12-12T17:27:17
2022-12-12T17:24:13
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5349", "html_url": "https://github.com/huggingface/datasets/pull/5349", "diff_url": "https://github.com/huggingface/datasets/pull/5349.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5349.patch", "merged_at": "2022-12-12T17:24:13" }
This PR cleans up the remaining docstrings in Main Classes (`IterableDataset`, `IterableDatasetDict`, and `Features`).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5349/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5349/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5348
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5348/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5348/comments
https://api.github.com/repos/huggingface/datasets/issues/5348/events
https://github.com/huggingface/datasets/issues/5348
1,486,975,626
I_kwDODunzps5YoXKK
5,348
The data downloaded in the download folder of the cache does not respect `umask`
{ "login": "SaulLu", "id": 55560583, "node_id": "MDQ6VXNlcjU1NTYwNTgz", "avatar_url": "https://avatars.githubusercontent.com/u/55560583?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SaulLu", "html_url": "https://github.com/SaulLu", "followers_url": "https://api.github.com/users/SaulLu/followers", "following_url": "https://api.github.com/users/SaulLu/following{/other_user}", "gists_url": "https://api.github.com/users/SaulLu/gists{/gist_id}", "starred_url": "https://api.github.com/users/SaulLu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SaulLu/subscriptions", "organizations_url": "https://api.github.com/users/SaulLu/orgs", "repos_url": "https://api.github.com/users/SaulLu/repos", "events_url": "https://api.github.com/users/SaulLu/events{/privacy}", "received_events_url": "https://api.github.com/users/SaulLu/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "note, that `datasets` already did some of that umask fixing in the past and also at the hub - the recent work on the hub about the same: https://github.com/huggingface/huggingface_hub/pull/1220\r\n\r\nAlso I noticed that each file has a .json counterpart and the latter always has the correct perms:\r\n\r\n```\r\n-rw------- 1 uue59kq cnw 173M Dec 9 01:37 537596e64721e2ae3d98785b91d30fda0360c196a8224e29658ad629e7303a4d\r\n-rw-rw---- 1 uue59kq cnw 101 Dec 9 01:37 537596e64721e2ae3d98785b91d30fda0360c196a8224e29658ad629e7303a4d.json\r\n```\r\n\r\nso perhaps cheating is possible and syncing the perms between the 2 will do the trick." ]
2022-12-09T15:46:27
2022-12-09T17:21:26
null
CONTRIBUTOR
null
null
null
### Describe the bug

For a project on a cluster, several of us share the same cache for the `datasets` library, and we have a problem with the permissions on the data downloaded into the cache. It seems that the data is downloaded with read and write permissions granted only to the user launching the command (and no permissions to the group). In our case, those permissions don't respect the `umask` of this user, which was `0007`.

Traceback:

```
Using custom data configuration default
Downloading and preparing dataset text_caps/default to /gpfswork/rech/cnw/commun/datasets/HuggingFaceM4___text_caps/default/1.0.0/2b9ad220cd90fcf2bfb454645bc54364711b83d6d39401ffdaf8cc40882e9141...
Downloading data files: 100%|████████████████████| 3/3 [00:00<00:00, 921.62it/s]
---------------------------------------------------------------------------
PermissionError                           Traceback (most recent call last)
Cell In [3], line 1
----> 1 ds = load_dataset(dataset_name)

File /gpfswork/rech/cnw/commun/conda/lucile-m4_3/lib/python3.8/site-packages/datasets/load.py:1746, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
   1743 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES
   1745 # Download and prepare data
-> 1746 builder_instance.download_and_prepare(
   1747     download_config=download_config,
   1748     download_mode=download_mode,
   1749     ignore_verifications=ignore_verifications,
   1750     try_from_hf_gcs=try_from_hf_gcs,
   1751     use_auth_token=use_auth_token,
   1752 )
   1754 # Build dataset for splits
   1755 keep_in_memory = (
   1756     keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
   1757 )

File /gpfswork/rech/cnw/commun/conda/lucile-m4_3/lib/python3.8/site-packages/datasets/builder.py:704, in DatasetBuilder.download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
    702 logger.warning("HF google storage unreachable. Downloading and preparing it from source")
    703 if not downloaded_from_gcs:
--> 704 self._download_and_prepare(
    705     dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
    706 )
    707 # Sync info
    708 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())

File /gpfswork/rech/cnw/commun/conda/lucile-m4_3/lib/python3.8/site-packages/datasets/builder.py:1227, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verify_infos)
   1226 def _download_and_prepare(self, dl_manager, verify_infos):
-> 1227 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)

File /gpfswork/rech/cnw/commun/conda/lucile-m4_3/lib/python3.8/site-packages/datasets/builder.py:771, in DatasetBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
    769 split_dict = SplitDict(dataset_name=self.name)
    770 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 771 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
    773 # Checksums verification
    774 if verify_infos and dl_manager.record_checksums:

File /gpfswork/rech/cnw/commun/modules/datasets_modules/datasets/HuggingFaceM4--TextCaps/2b9ad220cd90fcf2bfb454645bc54364711b83d6d39401ffdaf8cc40882e9141/TextCaps.py:125, in TextCapsDataset._split_generators(self, dl_manager)
    123 def _split_generators(self, dl_manager):
    124     # urls = _URLS[self.config.name] # TODO later
--> 125 data_dir = dl_manager.download_and_extract(_URLS)
    126 gen_kwargs = {
    127     split_name: {
    128         f"{dir_name}_path": Path(data_dir[dir_name][split_name])
   (...)
    133     for split_name in ["train", "val", "test"]
    134 }
    136 for split_name in ["train", "val", "test"]:

File /gpfswork/rech/cnw/commun/conda/lucile-m4_3/lib/python3.8/site-packages/datasets/download/download_manager.py:431, in DownloadManager.download_and_extract(self, url_or_urls)
    415 def download_and_extract(self, url_or_urls):
    416     """Download and extract given url_or_urls.
    417
    418     Is roughly equivalent to:
   (...)
    429         extracted_path(s): `str`, extracted paths of given URL(s).
    430     """
--> 431 return self.extract(self.download(url_or_urls))

File /gpfswork/rech/cnw/commun/conda/lucile-m4_3/lib/python3.8/site-packages/datasets/download/download_manager.py:324, in DownloadManager.download(self, url_or_urls)
    321 self.downloaded_paths.update(dict(zip(url_or_urls.flatten(), downloaded_path_or_paths.flatten())))
    323 start_time = datetime.now()
--> 324 self._record_sizes_checksums(url_or_urls, downloaded_path_or_paths)
    325 duration = datetime.now() - start_time
    326 logger.info(f"Checksum Computation took {duration.total_seconds() // 60} min")

File /gpfswork/rech/cnw/commun/conda/lucile-m4_3/lib/python3.8/site-packages/datasets/download/download_manager.py:229, in DownloadManager._record_sizes_checksums(self, url_or_urls, downloaded_path_or_paths)
    226 """Record size/checksum of downloaded files."""
    227 for url, path in zip(url_or_urls.flatten(), downloaded_path_or_paths.flatten()):
    228     # call str to support PathLike objects
--> 229 self._recorded_sizes_checksums[str(url)] = get_size_checksum_dict(
    230     path, record_checksum=self.record_checksums
    231 )

File /gpfswork/rech/cnw/commun/conda/lucile-m4_3/lib/python3.8/site-packages/datasets/utils/info_utils.py:82, in get_size_checksum_dict(path, record_checksum)
    80 if record_checksum:
    81     m = sha256()
---> 82 with open(path, "rb") as f:
    83     for chunk in iter(lambda: f.read(1 << 20), b""):
    84         m.update(chunk)

PermissionError: [Errno 13] Permission denied: '/gpfswork/rech/cnw/commun/datasets/downloads/1e6aa6d23190c30885194fabb193dce3874d902d7636b66315ee8aaa584e80d6'
```

### Steps to reproduce the bug

I think the following will reproduce the bug. Given 2 users belonging to the same group with `umask` set to `0007`:

- first run with User 1:
```python
from datasets import load_dataset

ds_name = "HuggingFaceM4/VQAv2"
ds = load_dataset(ds_name)
```
- then run with User 2:
```python
from datasets import load_dataset

ds_name = "HuggingFaceM4/TextCaps"
ds = load_dataset(ds_name)
```

### Expected behavior

No `PermissionError`

### Environment info

- `datasets` version: 2.4.0
- Platform: Linux-4.18.0-305.65.1.el8_4.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.13
- PyArrow version: 7.0.0
- Pandas version: 1.4.2
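A sketch of the permission-syncing idea from the comment above (a hypothetical helper, not an existing `datasets` API), reapplying the process `umask` the way the sidecar `.json` files already receive it:

```python
import os


def reapply_umask(path: str) -> None:
    # os.umask has no read-only getter: set a throwaway value, capture the old
    # mask, and restore it immediately.
    umask = os.umask(0o022)
    os.umask(umask)
    # Grant the widest permissions the umask allows, e.g. 0o660 under umask 0007.
    os.chmod(path, 0o666 & ~umask)
```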
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5348/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5348/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5347
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5347/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5347/comments
https://api.github.com/repos/huggingface/datasets/issues/5347/events
https://github.com/huggingface/datasets/pull/5347
1,486,920,261
PR_kwDODunzps5E6jb1
5,347
Force soundfile to return float32 instead of the default float64
{ "login": "qmeeus", "id": 25608944, "node_id": "MDQ6VXNlcjI1NjA4OTQ0", "avatar_url": "https://avatars.githubusercontent.com/u/25608944?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qmeeus", "html_url": "https://github.com/qmeeus", "followers_url": "https://api.github.com/users/qmeeus/followers", "following_url": "https://api.github.com/users/qmeeus/following{/other_user}", "gists_url": "https://api.github.com/users/qmeeus/gists{/gist_id}", "starred_url": "https://api.github.com/users/qmeeus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qmeeus/subscriptions", "organizations_url": "https://api.github.com/users/qmeeus/orgs", "repos_url": "https://api.github.com/users/qmeeus/repos", "events_url": "https://api.github.com/users/qmeeus/events{/privacy}", "received_events_url": "https://api.github.com/users/qmeeus/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "cc @polinaeterna", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5347). All of your documentation changes will be reflected on that endpoint.", "Cool ! Feel free to add a comment in the code to explain that and we can merge :)", "I'm not sure if this is a good change since we plan to get rid of `torchaudio` in the next couple of months...", "What do you think @polinaeterna @patrickvonplaten ? Models are usually using float32 (e.g. Wev2vec2 in `transformers`) IIRC", "IMO we can safely assume that float32 is always good enough when using audio models in inference or training. Nevertheless there might be use cases for audio datasets in the future where float64 is needed. \r\n\r\n=> I would by default always cast to float32, but then possible allow the user to cast to float64 ", "> I'm not sure if this is a good change since we plan to get rid of torchaudio in the next couple of months...\r\n\r\n@mariosasko I agree but who knows how long we will have to wait until we are really able to do so (https://github.com/bastibe/libsndfile-binaries/pull/17 is a draft. so as @patrickvonplaten is okay with float32, I'd merge.\r\n\r\n\r\n", "@polinaeterna Can you comment on the linked PR to see why it's still a draft? Maybe we can help somehow to get this merged finally.\r\n\r\nI think it's weird to align `soundfile` with `torchaudio` when the latter is only used for MP3 (and prob for not much longer). " ]
2022-12-09T15:10:24
2023-01-17T16:12:49
null
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5347", "html_url": "https://github.com/huggingface/datasets/pull/5347", "diff_url": "https://github.com/huggingface/datasets/pull/5347.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5347.patch", "merged_at": null }
(Fixes issue #5345)
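The diff itself lives at the URLs above; going by the PR title and the linked issue's comments, the core change presumably forces the decode dtype along these lines (`sample.wav` is a placeholder):

```python
import soundfile as sf

# soundfile decodes to float64 by default; requesting float32 keeps decoded
# arrays consistent with the mp3/torchaudio path and typical model inputs.
array, sampling_rate = sf.read("sample.wav", dtype="float32")
assert array.dtype == "float32"
```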
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5347/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5347/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5346
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5346/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5346/comments
https://api.github.com/repos/huggingface/datasets/issues/5346/events
https://github.com/huggingface/datasets/issues/5346
1,486,884,983
I_kwDODunzps5YoBB3
5,346
[Quick poll] Give your opinion on the future of the Hugging Face Open Source ecosystem!
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "As the survey is finished, can we close this issue, @LysandreJik ?", "Yes! I'll post a public summary on the forums shortly.", "Is the summary available? I would be interested in reading your findings." ]
2022-12-09T14:48:02
2023-06-02T20:24:44
2023-01-25T19:35:40
MEMBER
null
null
null
Thanks to all of you, Datasets is just about to pass 15k stars!

Since the last survey, a lot has happened: the [diffusers](https://github.com/huggingface/diffusers), [evaluate](https://github.com/huggingface/evaluate) and [skops](https://github.com/skops-dev/skops) libraries were born. `timm` joined the Hugging Face ecosystem. There were 25 new releases of `transformers`, 21 new releases of `datasets`, 13 new releases of `accelerate`.

If you have a couple of minutes and want to participate in shaping the future of the ecosystem, please share your thoughts: [**hf.co/oss-survey**](https://docs.google.com/forms/d/e/1FAIpQLSf4xFQKtpjr6I_l7OfNofqiR8s-WG6tcNbkchDJJf5gYD72zQ/viewform?usp=sf_link)

(please reply in the above feedback form rather than to this thread)

Thank you all on behalf of the HuggingFace team! 🤗
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5346/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 3, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5346/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5345
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5345/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5345/comments
https://api.github.com/repos/huggingface/datasets/issues/5345/events
https://github.com/huggingface/datasets/issues/5345
1,486,555,384
I_kwDODunzps5Ymwj4
5,345
Wrong dtype for array in audio features
{ "login": "qmeeus", "id": 25608944, "node_id": "MDQ6VXNlcjI1NjA4OTQ0", "avatar_url": "https://avatars.githubusercontent.com/u/25608944?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qmeeus", "html_url": "https://github.com/qmeeus", "followers_url": "https://api.github.com/users/qmeeus/followers", "following_url": "https://api.github.com/users/qmeeus/following{/other_user}", "gists_url": "https://api.github.com/users/qmeeus/gists{/gist_id}", "starred_url": "https://api.github.com/users/qmeeus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qmeeus/subscriptions", "organizations_url": "https://api.github.com/users/qmeeus/orgs", "repos_url": "https://api.github.com/users/qmeeus/repos", "events_url": "https://api.github.com/users/qmeeus/events{/privacy}", "received_events_url": "https://api.github.com/users/qmeeus/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "After some more investigation, this is due to [this line of code](https://github.com/huggingface/datasets/blob/main/src/datasets/features/audio.py#L279). The function `sf.read(file)` should be updated to `sf.read(file, dtype=\"float32\")`\r\n\r\nIndeed, the default value in soundfile is `float64` ([see here](https://pysoundfile.readthedocs.io/en/latest/#soundfile.read)). \r\n", "@qmeeus I agree, decoding of different audio formats should return the same dtypes indeed!\r\n\r\nBut note that here you are concatenating datasets with different sampling rates: 48000 for CommonVoice and 16000 for Voxpopuli. So you should cast them to the same sampling rate value before interleaving, for example:\r\n```\r\ncv = cv.cast_column(\"audio\", Audio(sampling_rate=16000))\r\n```\r\notherwise you would get the same error because features of the same column (\"audio\") are not the same.\r\n\r\nAlso, the error you get is unexpected. Could you please confirm that you use the latest main version of the `datasets`? We had an issue that could lead to an error like this after using `rename_column` method, but it was fixed in https://github.com/huggingface/datasets/pull/5287 ", "Hi Polina,\r\nSorry for the late answer\r\nIt is possible that the issue was due to a bug that is now fixed. I installed an editable version of datasets from github, but I don't recall whether I had updated it at the time of the issue. My research led me to other directions so I did not follow through on the interleave datasets.\r\n" ]
2022-12-09T11:05:11
2023-02-10T14:39:28
null
NONE
null
null
null
### Describe the bug

When concatenating/interleaving different datasets, I stumble into an error because the features can't be aligned. After some investigation, I understood that the audio arrays had different dtypes, namely `float32` and `float64`. Consequently, the datasets cannot be merged.

### Steps to reproduce the bug

For example, for `facebook/voxpopuli` and `mozilla-foundation/common_voice_11_0`:

```python
from datasets import load_dataset, interleave_datasets

covost = load_dataset("mozilla-foundation/common_voice_11_0", "en", split="train", streaming=True)
voxpopuli = load_dataset("facebook/voxpopuli", "nl", split="train", streaming=True)

sample_cv, = covost.take(1)
sample_vp, = voxpopuli.take(1)
assert sample_cv["audio"]["array"].dtype == sample_vp["audio"]["array"].dtype  # Fails

dataset = interleave_datasets([covost, voxpopuli])
# ValueError: The features can't be aligned because the key audio of features
# {'audio_id': Value(dtype='string', id=None), 'language': Value(dtype='int64', id=None),
# 'audio': {'array': Sequence(feature=Value(dtype='float64', id=None), length=-1, id=None),
# 'path': Value(dtype='string', id=None), 'sampling_rate': Value(dtype='int64', id=None)},
# 'normalized_text': Value(dtype='string', id=None), 'gender': Value(dtype='string', id=None),
# 'speaker_id': Value(dtype='string', id=None), 'is_gold_transcript': Value(dtype='bool', id=None),
# 'accent': Value(dtype='string', id=None), 'sentence': Value(dtype='string', id=None)}
# has unexpected type - {'array': Sequence(feature=Value(dtype='float64', id=None), length=-1, id=None),
# 'path': Value(dtype='string', id=None), 'sampling_rate': Value(dtype='int64', id=None)}
# (expected either Audio(sampling_rate=16000, mono=True, decode=True, id=None) or Value("null").
```

### Expected behavior

The audio should be loaded to arrays with a unique dtype (I guess `float32`).

### Environment info

```
- `datasets` version: 2.7.1.dev0
- Platform: Linux-4.18.0-425.3.1.el8.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.15
- PyArrow version: 10.0.1
- Pandas version: 1.5.2
```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5345/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5345/timeline
null
null
false
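As a user-side workaround for the report above, casting both streams to the same `Audio` feature before interleaving avoids the feature mismatch; a minimal sketch based on the `cast_column` suggestion in the comments (it assumes both datasets are accessible, and it does not by itself fix the float64 dtype issue):

```python
from datasets import Audio, interleave_datasets, load_dataset

covost = load_dataset("mozilla-foundation/common_voice_11_0", "en", split="train", streaming=True)
voxpopuli = load_dataset("facebook/voxpopuli", "nl", split="train", streaming=True)

# Cast both "audio" columns to the same sampling rate so the column features
# of the two streams are identical before interleaving.
covost = covost.cast_column("audio", Audio(sampling_rate=16000))
voxpopuli = voxpopuli.cast_column("audio", Audio(sampling_rate=16000))

dataset = interleave_datasets([covost, voxpopuli])
```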
https://api.github.com/repos/huggingface/datasets/issues/5344
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5344/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5344/comments
https://api.github.com/repos/huggingface/datasets/issues/5344/events
https://github.com/huggingface/datasets/pull/5344
1,485,628,319
PR_kwDODunzps5E2BPN
5,344
Clean up Dataset and DatasetDict
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-12-09T00:02:08
2022-12-13T00:56:07
2022-12-13T00:53:02
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5344", "html_url": "https://github.com/huggingface/datasets/pull/5344", "diff_url": "https://github.com/huggingface/datasets/pull/5344.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5344.patch", "merged_at": "2022-12-13T00:53:01" }
This PR cleans up the docstrings for the other half of the methods in `Dataset` and finishes `DatasetDict`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5344/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5344/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5343
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5343/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5343/comments
https://api.github.com/repos/huggingface/datasets/issues/5343/events
https://github.com/huggingface/datasets/issues/5343
1,485,297,823
I_kwDODunzps5Yh9if
5,343
T5 for Q&A produces truncated sentence
{ "login": "junyongyou", "id": 13484072, "node_id": "MDQ6VXNlcjEzNDg0MDcy", "avatar_url": "https://avatars.githubusercontent.com/u/13484072?v=4", "gravatar_id": "", "url": "https://api.github.com/users/junyongyou", "html_url": "https://github.com/junyongyou", "followers_url": "https://api.github.com/users/junyongyou/followers", "following_url": "https://api.github.com/users/junyongyou/following{/other_user}", "gists_url": "https://api.github.com/users/junyongyou/gists{/gist_id}", "starred_url": "https://api.github.com/users/junyongyou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/junyongyou/subscriptions", "organizations_url": "https://api.github.com/users/junyongyou/orgs", "repos_url": "https://api.github.com/users/junyongyou/repos", "events_url": "https://api.github.com/users/junyongyou/events{/privacy}", "received_events_url": "https://api.github.com/users/junyongyou/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
2022-12-08T19:48:46
2022-12-08T19:57:17
2022-12-08T19:57:17
NONE
null
null
null
Dear all, I am fine-tuning T5 for a Q&A task using the MedQuAD ([GitHub - abachaa/MedQuAD: Medical Question Answering Dataset of 47,457 QA pairs created from 12 NIH websites](https://github.com/abachaa/MedQuAD)) dataset. In the dataset, there are many long answers with thousands of words. I have used pytorch_lightning to train the T5-large model. I have two questions.

For example, I set the max_length, max_input_length and max_output_length all to 128. How to deal with those long answers? I just left them as is and let the T5Tokenizer handle them automatically. I would assume the tokenizer just truncates an answer at the position of the 128th word (or 127th). Is it possible that I manually split an answer into different parts, each part having 128 words, and then all these sub-answers serve as separate answers to the same question?

Another question is that I get incomplete (truncated) answers when using the fine-tuned model in inference, even though the predicted answer is shorter than 128 words. I found a message posted 2 years ago saying that one should add the end-of-sequence token `</s>` at the end of texts when fine-tuning T5. I followed that but then got a warning message that duplicated `</s>` tokens were found. I am assuming that this is because the tokenizer truncates an answer text, so `</s>` is missing in the truncated answer, such that the end token is not produced in the predicted answer. However, I am not sure. Can anybody point out how to address this issue? Any suggestions are highly appreciated.

Below is some code snippet.

```python
import pytorch_lightning as pl
from torch.utils.data import DataLoader
import torch
import numpy as np
import time
from pathlib import Path
from transformers import (
    Adafactor,
    T5ForConditionalGeneration,
    T5Tokenizer,
    get_linear_schedule_with_warmup
)
from torch.utils.data import RandomSampler
from question_answering.utils import *


class T5FineTuner(pl.LightningModule):
    def __init__(self, hyparams):
        super(T5FineTuner, self).__init__()
        self.hyparams = hyparams
        self.model = T5ForConditionalGeneration.from_pretrained(hyparams.model_name_or_path)
        self.tokenizer = T5Tokenizer.from_pretrained(hyparams.tokenizer_name_or_path)

        if self.hyparams.freeze_embeds:
            self.freeze_embeds()
        if self.hyparams.freeze_encoder:
            self.freeze_params(self.model.get_encoder())
            # assert_all_frozen()

        self.step_count = 0
        self.output_dir = Path(self.hyparams.output_dir)
        n_observations_per_split = {
            'train': self.hyparams.n_train,
            'validation': self.hyparams.n_val,
            'test': self.hyparams.n_test
        }
        self.n_obs = {k: v if v >= 0 else None for k, v in n_observations_per_split.items()}
        self.em_score_list = []
        self.subset_score_list = []

        data_folder = r'C:\Datasets\MedQuAD-master'
        self.train_data, self.val_data, self.test_data = load_medqa_data(data_folder)

    def freeze_params(self, model):
        for param in model.parameters():
            param.requires_grad = False

    def freeze_embeds(self):
        try:
            self.freeze_params(self.model.model.shared)
            for d in [self.model.model.encoder, self.model.model.decoder]:
                self.freeze_params(d.embed_positions)
                self.freeze_params(d.embed_tokens)
        except AttributeError:
            self.freeze_params(self.model.shared)
            for d in [self.model.encoder, self.model.decoder]:
                self.freeze_params(d.embed_tokens)

    def lmap(self, f, x):
        return list(map(f, x))

    def is_logger(self):
        return self.trainer.proc_rank <= 0

    def forward(self, input_ids, attention_mask=None, decoder_input_ids=None,
                decoder_attention_mask=None, labels=None):
        return self.model(
            input_ids,
            attention_mask=attention_mask,
            decoder_input_ids=decoder_input_ids,
            decoder_attention_mask=decoder_attention_mask,
            labels=labels
        )

    def _step(self, batch):
        labels = batch['target_ids']
        labels[labels[:, :] == self.tokenizer.pad_token_id] = -100
        outputs = self(
            input_ids=batch['source_ids'],
            attention_mask=batch['source_mask'],
            labels=labels,
            decoder_attention_mask=batch['target_mask']
        )
        loss = outputs[0]
        return loss

    def ids_to_clean_text(self, generated_ids):
        gen_text = self.tokenizer.batch_decode(generated_ids, skip_special_tokens=True,
                                               clean_up_tokenization_spaces=True)
        return self.lmap(str.strip, gen_text)

    def _generative_step(self, batch):
        t0 = time.time()
        generated_ids = self.model.generate(
            batch["source_ids"],
            attention_mask=batch["source_mask"],
            use_cache=True,
            decoder_attention_mask=batch['target_mask'],
            max_length=128,
            num_beams=2,
            early_stopping=True
        )
        preds = self.ids_to_clean_text(generated_ids)
        targets = self.ids_to_clean_text(batch["target_ids"])
        gen_time = (time.time() - t0) / batch["source_ids"].shape[0]

        loss = self._step(batch)
        base_metrics = {'val_loss': loss}
        summ_len = np.mean(self.lmap(len, generated_ids))
        base_metrics.update(gen_time=gen_time, gen_len=summ_len, preds=preds, target=targets)

        em_score, subset_match_score = calculate_scores(preds, targets)
        self.em_score_list.append(em_score)
        self.subset_score_list.append(subset_match_score)
        em_score = torch.tensor(em_score, dtype=torch.float32)
        subset_match_score = torch.tensor(subset_match_score, dtype=torch.float32)
        base_metrics.update(em_score=em_score, subset_match_score=subset_match_score)
        # rouge_results = self.rouge_metric.compute()
        # rouge_dict = self.parse_score(rouge_results)
        return base_metrics

    def training_step(self, batch, batch_idx):
        loss = self._step(batch)
        tensorboard_logs = {'train_loss': loss}
        return {'loss': loss, 'log': tensorboard_logs}

    def training_epoch_end(self, outputs):
        avg_train_loss = torch.stack([x['loss'] for x in outputs]).mean()
        tensorboard_logs = {'avg_train_loss': avg_train_loss}
        # return {'avg_train_loss': avg_train_loss, 'log': tensorboard_logs, 'progress_bar': tensorboard_logs}

    def validation_step(self, batch, batch_idx):
        return self._generative_step(batch)

    def validation_epoch_end(self, outputs):
        avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
        tensorboard_logs = {'val_loss': avg_loss}

        if len(self.em_score_list) <= 2:
            average_em_score = sum(self.em_score_list) / len(self.em_score_list)
            average_subset_match_score = sum(self.subset_score_list) / len(self.subset_score_list)
        else:
            latest_em_score = self.em_score_list[:-2]
            latest_subset_score = self.subset_score_list[:-2]
            average_em_score = sum(latest_em_score) / len(latest_em_score)
            average_subset_match_score = sum(latest_subset_score) / len(latest_subset_score)

        average_em_score = torch.tensor(average_em_score, dtype=torch.float32)
        average_subset_match_score = torch.tensor(average_subset_match_score, dtype=torch.float32)
        tensorboard_logs.update(em_score=average_em_score, subset_match_score=average_subset_match_score)
        self.target_gen = []
        self.prediction_gen = []
        return {
            'avg_val_loss': avg_loss,
            'em_score': average_em_score,
            'subset_match_score': average_subset_match_score,
            'log': tensorboard_logs,
            'progress_bar': tensorboard_logs
        }

    def configure_optimizers(self):
        model = self.model
        no_decay = ["bias", "LayerNorm.weight"]
        optimizer_grouped_parameters = [
            {
                "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
                "weight_decay": self.hyparams.weight_decay,
            },
            {
                "params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)],
                "weight_decay": 0.0,
            },
        ]
        optimizer = Adafactor(optimizer_grouped_parameters, lr=self.hyparams.learning_rate,
                              scale_parameter=False, relative_step=False)
        self.opt = optimizer
        return [optimizer]

    def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx, optimizer_closure=None,
                       on_tpu=False, using_native_amp=False, using_lbfgs=False):
        optimizer.step(closure=optimizer_closure)
        optimizer.zero_grad()
        self.lr_scheduler.step()

    def get_tqdm_dict(self):
        tqdm_dict = {"loss": "{:.3f}".format(self.trainer.avg_loss), "lr": self.lr_scheduler.get_last_lr()[-1]}
        return tqdm_dict

    def train_dataloader(self):
        n_samples = self.n_obs['train']
        train_dataset = get_dataset(tokenizer=self.tokenizer, data=self.train_data,
                                    num_samples=n_samples, args=self.hyparams)
        sampler = RandomSampler(train_dataset)
        dataloader = DataLoader(train_dataset, sampler=sampler,
                                batch_size=self.hyparams.train_batch_size,
                                drop_last=True, num_workers=4)
        # t_total = (
        #     (len(dataloader.dataset) // (self.hyparams.train_batch_size * max(1, self.hyparams.n_gpu)))
        #     // self.hyparams.gradient_accumulation_steps
        #     * float(self.hyparams.num_train_epochs)
        # )
        t_total = 100000
        scheduler = get_linear_schedule_with_warmup(
            self.opt, num_warmup_steps=self.hyparams.warmup_steps, num_training_steps=t_total
        )
        self.lr_scheduler = scheduler
        return dataloader

    def val_dataloader(self):
        n_samples = self.n_obs['validation']
        validation_dataset = get_dataset(tokenizer=self.tokenizer, data=self.val_data,
                                         num_samples=n_samples, args=self.hyparams)
        sampler = RandomSampler(validation_dataset)
        return DataLoader(validation_dataset, shuffle=False, batch_size=self.hyparams.eval_batch_size,
                          sampler=sampler, num_workers=4)

    def test_dataloader(self):
        n_samples = self.n_obs['test']
        test_dataset = get_dataset(tokenizer=self.tokenizer, data=self.test_data,
                                   num_samples=n_samples, args=self.hyparams)
        return DataLoader(test_dataset, batch_size=self.hyparams.eval_batch_size, num_workers=4)

    def on_save_checkpoint(self, checkpoint):
        save_path = self.output_dir.joinpath("best_tfmr")
        self.model.config.save_step = self.step_count
        self.model.save_pretrained(save_path)
        self.tokenizer.save_pretrained(save_path)


# --- training script (a separate file that imports the module above) ---
import os
import argparse
import pytorch_lightning as pl
from question_answering.t5_closed_book import T5FineTuner

if __name__ == '__main__':
    args_dict = dict(
        output_dir="",  # path to save the checkpoints
        model_name_or_path='t5-large',
        tokenizer_name_or_path='t5-large',
        max_input_length=128,
        max_output_length=128,
        freeze_encoder=False,
        freeze_embeds=False,
        learning_rate=1e-5,
        weight_decay=0.0,
        adam_epsilon=1e-8,
        warmup_steps=0,
        train_batch_size=4,
        eval_batch_size=4,
        num_train_epochs=2,
        gradient_accumulation_steps=10,
        n_gpu=1,
        resume_from_checkpoint=None,
        val_check_interval=0.5,
        n_val=4000,
        n_train=-1,
        n_test=-1,
        early_stop_callback=False,
        fp_16=False,
        opt_level='O1',
        max_grad_norm=1.0,
        seed=101,
    )
    args_dict.update({'output_dir': 't5_large_MedQuAD_256', 'num_train_epochs': 100,
                      'train_batch_size': 16, 'eval_batch_size': 16, 'learning_rate': 1e-3})
    args = argparse.Namespace(**args_dict)

    checkpoint_callback = pl.callbacks.ModelCheckpoint(dirpath=args.output_dir, monitor="em_score",
                                                       mode="max", save_top_k=1)
    ## If resuming from checkpoint, add an arg resume_from_checkpoint
    train_params = dict(
        accumulate_grad_batches=args.gradient_accumulation_steps,
        gpus=args.n_gpu,
        max_epochs=args.num_train_epochs,
        # early_stop_callback=False,
        precision=16 if args.fp_16 else 32,
        # amp_level=args.opt_level,
        # resume_from_checkpoint=args.resume_from_checkpoint,
        gradient_clip_val=args.max_grad_norm,
        checkpoint_callback=checkpoint_callback,
        val_check_interval=args.val_check_interval,
        # accelerator='dp'
        # logger=wandb_logger,
        # callbacks=[LoggingCallback()],
    )

    model = T5FineTuner(args)
    trainer = pl.Trainer(**train_params)
    trainer.fit(model)
```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5343/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5343/timeline
null
completed
false
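For the truncation question in the report above, the tokenizer can be asked to truncate explicitly; a minimal sketch, where `answer_text` is a placeholder for one of the long MedQuAD answers (note that the T5 tokenizer appends the `</s>` end token itself, which would explain the "duplicated" warning when it is also added manually):

```python
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-large")
answer_text = "a long MedQuAD answer ..."  # placeholder

# Explicit truncation to 128 tokens; the tokenizer adds </s> automatically,
# so the end token survives truncation without being appended by hand.
enc = tokenizer(answer_text, max_length=128, truncation=True, padding="max_length")
```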
https://api.github.com/repos/huggingface/datasets/issues/5342
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5342/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5342/comments
https://api.github.com/repos/huggingface/datasets/issues/5342/events
https://github.com/huggingface/datasets/issues/5342
1,485,244,178
I_kwDODunzps5YhwcS
5,342
Emotion dataset cannot be downloaded
{ "login": "cbarond", "id": 78887193, "node_id": "MDQ6VXNlcjc4ODg3MTkz", "avatar_url": "https://avatars.githubusercontent.com/u/78887193?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cbarond", "html_url": "https://github.com/cbarond", "followers_url": "https://api.github.com/users/cbarond/followers", "following_url": "https://api.github.com/users/cbarond/following{/other_user}", "gists_url": "https://api.github.com/users/cbarond/gists{/gist_id}", "starred_url": "https://api.github.com/users/cbarond/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cbarond/subscriptions", "organizations_url": "https://api.github.com/users/cbarond/orgs", "repos_url": "https://api.github.com/users/cbarond/repos", "events_url": "https://api.github.com/users/cbarond/events{/privacy}", "received_events_url": "https://api.github.com/users/cbarond/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892865, "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate", "name": "duplicate", "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists" } ]
closed
false
null
[]
null
[ "Hi @cbarond there's already an open issue at https://github.com/dair-ai/emotion_dataset/issues/5, as the data seems to be missing now, so check that issue instead πŸ‘πŸ» ", "Thanks @cbarond for reporting and @alvarobartt for pointing to the issue we opened in the author's repo.\r\n\r\nIndeed, this issue was first raised in the \"emotion\" dataset Community tab: https://huggingface.co/datasets/emotion/discussions/3\r\n\r\nI'm closing this issue and leave the issue above for the subsequent updates.\r\n\r\nDuplicate of: https://huggingface.co/datasets/emotion/discussions/3", "try using \"SetFit/emotion\" instead", "> try using \"SetFit/emotion\" instead\r\n\r\nI' replaced \"emotion\" with \"SetFit/Emotion\", but the code is getting stuck at\r\n\r\n`emotions = load_dataset(\"SetFit/emotion\")`\r\n\r\nI pause execution using the debugger, and it takes me to filelock.py:226\r\n\r\n`with self._thread_lock:`\r\n\r\nDo you know a way to get past this issue?", "thanks @honeyimholm - worked for me", "> try using \"SetFit/emotion\" instead\r\n\r\nIt really helps a lot, thank you!", "The dataset loading script has been fixed: https://huggingface.co/datasets/emotion/discussions/4" ]
2022-12-08T19:07:09
2023-02-23T19:13:19
2022-12-09T10:46:11
NONE
null
null
null
### Describe the bug

The emotion dataset gives a FileNotFoundError. The full error is: `FileNotFoundError: Couldn't find file at https://www.dropbox.com/s/1pzkadrvffbqw6o/train.txt?dl=1`.

It was working yesterday (December 7, 2022), but stopped working today (December 8, 2022).

### Steps to reproduce the bug

```python
from datasets import load_dataset

dataset = load_dataset("emotion")
```

### Expected behavior

The dataset should load properly.

### Environment info

- `datasets` version: 2.7.1
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.9.13
- PyArrow version: 10.0.1
- Pandas version: 1.5.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5342/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5342/timeline
null
completed
false
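Summarizing the resolution of the thread above as runnable code (the mirror name comes from the comments; the original loading script was later fixed upstream):

```python
from datasets import load_dataset

# Works again now that the loading script was fixed upstream:
dataset = load_dataset("emotion")

# Mirror suggested in the comments while the original data was missing:
dataset = load_dataset("SetFit/emotion")
```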
https://api.github.com/repos/huggingface/datasets/issues/5341
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5341/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5341/comments
https://api.github.com/repos/huggingface/datasets/issues/5341/events
https://github.com/huggingface/datasets/pull/5341
1,484,376,644
PR_kwDODunzps5Exohx
5,341
Remove tasks.json
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-12-08T11:04:35
2022-12-09T12:26:21
2022-12-09T12:23:20
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5341", "html_url": "https://github.com/huggingface/datasets/pull/5341", "diff_url": "https://github.com/huggingface/datasets/pull/5341.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5341.patch", "merged_at": "2022-12-09T12:23:20" }
After discussions in https://github.com/huggingface/datasets/pull/5335 we should remove this file that is not used anymore. We should update https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts instead.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5341/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5341/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5340
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5340/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5340/comments
https://api.github.com/repos/huggingface/datasets/issues/5340/events
https://github.com/huggingface/datasets/pull/5340
1,483,182,158
PR_kwDODunzps5EtWo3
5,340
Clean up DatasetInfo and Dataset docstrings
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-12-08T00:17:53
2022-12-08T19:33:14
2022-12-08T19:30:10
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5340", "html_url": "https://github.com/huggingface/datasets/pull/5340", "diff_url": "https://github.com/huggingface/datasets/pull/5340.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5340.patch", "merged_at": "2022-12-08T19:30:10" }
This PR cleans up the docstrings for `DatasetInfo` and about half of the methods in `Dataset`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5340/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5340/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5339
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5339/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5339/comments
https://api.github.com/repos/huggingface/datasets/issues/5339/events
https://github.com/huggingface/datasets/pull/5339
1,482,817,424
PR_kwDODunzps5EsC8N
5,339
Add Video feature, videofolder, and video-classification task
{ "login": "nateraw", "id": 32437151, "node_id": "MDQ6VXNlcjMyNDM3MTUx", "avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nateraw", "html_url": "https://github.com/nateraw", "followers_url": "https://api.github.com/users/nateraw/followers", "following_url": "https://api.github.com/users/nateraw/following{/other_user}", "gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}", "starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nateraw/subscriptions", "organizations_url": "https://api.github.com/users/nateraw/orgs", "repos_url": "https://api.github.com/users/nateraw/repos", "events_url": "https://api.github.com/users/nateraw/events{/privacy}", "received_events_url": "https://api.github.com/users/nateraw/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5339). All of your documentation changes will be reflected on that endpoint.", "@lhoestq I think I need some serious help with the tests πŸ˜…...I started this locally but it got too time consuming.\n\nOne issue I remember running into is with lossless audio encoding/decoding. I started thinking of using the underlying Audio feature instead of PyAV so I didn't have to rewrite similar logic here...but assumed that would turn into a mess w/ underlying logic" ]
2022-12-07T20:48:34
2023-01-05T23:54:12
null
CONTRIBUTOR
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5339", "html_url": "https://github.com/huggingface/datasets/pull/5339", "diff_url": "https://github.com/huggingface/datasets/pull/5339.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5339.patch", "merged_at": null }
This PR does the following:

- Adds `Video` feature (resolves #5225)
- Adds `video-classification` task
- Adds `videofolder` packaged module for easy loading of local video classification datasets

TODO:
- [ ] add tests
- [ ] add docs
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5339/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5339/timeline
null
null
true
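If the PR above lands, usage would presumably mirror the existing `imagefolder` loader; a speculative sketch only (the `videofolder` name comes from the PR description, while the directory layout and path are assumptions):

```python
from datasets import load_dataset

# Hypothetical layout: train/<label>/<clip>.mp4, mirroring imagefolder.
dataset = load_dataset("videofolder", data_dir="path/to/clips")
```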
https://api.github.com/repos/huggingface/datasets/issues/5338
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5338/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5338/comments
https://api.github.com/repos/huggingface/datasets/issues/5338/events
https://github.com/huggingface/datasets/issues/5338
1,482,646,151
I_kwDODunzps5YX2KH
5,338
`map()` stops every 1000 steps
{ "login": "bayartsogt-ya", "id": 43239645, "node_id": "MDQ6VXNlcjQzMjM5NjQ1", "avatar_url": "https://avatars.githubusercontent.com/u/43239645?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bayartsogt-ya", "html_url": "https://github.com/bayartsogt-ya", "followers_url": "https://api.github.com/users/bayartsogt-ya/followers", "following_url": "https://api.github.com/users/bayartsogt-ya/following{/other_user}", "gists_url": "https://api.github.com/users/bayartsogt-ya/gists{/gist_id}", "starred_url": "https://api.github.com/users/bayartsogt-ya/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bayartsogt-ya/subscriptions", "organizations_url": "https://api.github.com/users/bayartsogt-ya/orgs", "repos_url": "https://api.github.com/users/bayartsogt-ya/repos", "events_url": "https://api.github.com/users/bayartsogt-ya/events{/privacy}", "received_events_url": "https://api.github.com/users/bayartsogt-ya/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi !\r\n\r\n> It starts using all the cores (I am not sure why because I did not pass num_proc)\r\n\r\nThe tokenizer uses Rust code that is multithreaded. And maybe the `feature_extractor` might run some things in parallel as well - but I'm not super familiar with its internals.\r\n\r\n> then progress bar stops at every 1k steps. (starts using a single core)\r\n\r\nEvery 1000 examples we flush the processed examples to disk. It is this way because Arrow is a columnar format: you must write data chunk by chunk. The processing in on hold while writing right now - maybe this can be improved in the future.", "Hi @lhoestq \r\nThanks for the explanation! it was so helpful! Let me check why `feature_extractor` is running on multiple cpus." ]
2022-12-07T19:09:40
2022-12-10T00:39:29
2022-12-10T00:39:28
NONE
null
null
null
### Describe the bug

I am passing the following `prepare_dataset` function to `Dataset.map` (code is inspired from [here](https://github.com/huggingface/community-events/blob/main/whisper-fine-tuning-event/run_speech_recognition_seq2seq_streaming.py#L454)):

```python
def prepare_dataset(batch):
    # load and resample audio data from 48 to 16kHz
    audio = batch["audio"]

    # compute log-Mel input features from input audio array
    batch["input_features"] = feature_extractor(audio["array"], sampling_rate=audio["sampling_rate"]).input_features[0]

    # encode target text to label ids
    batch["labels"] = tokenizer(batch[text_column]).input_ids
    return batch

...
train_ds = train_ds.map(prepare_dataset)
```

Here is the exact code I am running: https://github.com/bayartsogt-ya/whisper-multiple-hf-datasets/blob/main/train.py#L70-L71

It starts using all the cores (I am not sure why, because I did not pass `num_proc`), then the progress bar stops at every 1k steps (and it starts using a single core), then it comes back to using all the cores again. Link to [screen record](https://youtu.be/jPQpQQGp6Gc).

Can someone explain this process and maybe provide a way to improve this pipeline? cc: @lhoestq

### Steps to reproduce the bug

1. load the dataset
2. create a Whisper processor
3. create a `prepare_dataset` function
4. pass the function to `dataset.map(prepare_dataset)`

### Expected behavior

- Use a single core per function
- Not stop at some point?

### Environment info

- `datasets` version: 2.7.1.dev0
- Platform: Linux-5.4.0-109-generic-x86_64-with-glibc2.27
- Python version: 3.8.10
- PyArrow version: 10.0.1
- Pandas version: 1.5.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5338/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5338/timeline
null
completed
false
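Based on the explanation in the comments above, the pauses every 1k examples are Arrow flushes to disk, and the flush interval is tunable via `map`'s `writer_batch_size` parameter; a minimal self-contained sketch on a small public dataset (the identity function and the value 4000 are illustrative only):

```python
from datasets import load_dataset

ds = load_dataset("rotten_tomatoes", split="train")

# writer_batch_size controls how often processed examples are flushed to disk;
# the default of 1000 matches the observed pause every 1k examples.
ds = ds.map(lambda batch: batch, batched=True, writer_batch_size=4000)
```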
https://api.github.com/repos/huggingface/datasets/issues/5337
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5337/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5337/comments
https://api.github.com/repos/huggingface/datasets/issues/5337/events
https://github.com/huggingface/datasets/issues/5337
1,481,692,156
I_kwDODunzps5YUNP8
5,337
Support webdataset format
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
[ "I like the idea of having `webdataset` as an optional dependency to ensure our loader generates web datasets the same way as the main project.", "Webdataset is the one of the most popular dataset formats for large scale computer vision tasks. Upvote for this issue. ", "Any updates on this?", "We haven't had the bandwidth to implement it so far, but if someone wants to give it a shot please don't hesitate ^^" ]
2022-12-07T11:32:25
2023-05-26T10:34:45
null
MEMBER
null
null
null
Webdataset is an efficient format for iterable datasets. It would be nice to support it in `datasets`, as discussed in https://github.com/rom1504/img2dataset/issues/234. In particular it would be awesome to be able to load one using `load_dataset` in streaming mode (either from a local directory, or from a dataset on the Hugging Face Hub). Some datasets on the Hub are already in webdataset format. In terms of implementation, we can have something similar to the Parquet loader. I also think it's fine to have webdataset as an optional dependency.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5337/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5337/timeline
null
null
false
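Until native support exists, shards can already be streamed with the reference `webdataset` library; a minimal sketch, assuming image/label pairs stored under hypothetical shard names:

```python
import webdataset as wds

# Brace notation expands to shard-000000.tar ... shard-000009.tar (placeholder names).
ds = wds.WebDataset("shards/shard-{000000..000009}.tar").decode("rgb").to_tuple("jpg", "cls")

for image, label in ds:
    break  # first decoded sample
```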
https://api.github.com/repos/huggingface/datasets/issues/5336
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5336/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5336/comments
https://api.github.com/repos/huggingface/datasets/issues/5336/events
https://github.com/huggingface/datasets/pull/5336
1,479,649,900
PR_kwDODunzps5Egzed
5,336
Set `IterableDataset.map` param `batch_size` typing as optional
{ "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5336). All of your documentation changes will be reflected on that endpoint.", "Hi @mariosasko, @lhoestq I was wondering whether we should include `batched` as a `pytest.mark` param for the functions testing `IterableDataset.map` so as to ensure that the changes done in this PR work fine without breaking anything of the actual functionality.\r\n\r\nI've pushed updated tests just for one of the unit testing functions to be run as `pytest tests/test_iterable_dataset.py::test_mapped_examples_iterable -s --durations 0`, but some are still missing `batched` param, it was just to ask you whether we're supposed to do this for the rest of the functions or not, if it's a yes I'll push the commit as it's ready, but didn't want to push extra stuff that may be discarded later!\r\n\r\nThanks :hugs:", "Thanks for the feedback @lhoestq, I agree with keeping `Optional` instead of `Union[type, None]` for now πŸ‘πŸ»" ]
2022-12-06T17:08:10
2022-12-07T14:14:56
2022-12-07T14:06:27
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5336", "html_url": "https://github.com/huggingface/datasets/pull/5336", "diff_url": "https://github.com/huggingface/datasets/pull/5336.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5336.patch", "merged_at": "2022-12-07T14:06:27" }
This PR solves #5325

~Indeed we're using the typing for optional values as `Union[type, None]` as it's similar to how Python 3.10 handles optional values as `type | None`, instead of using `Optional[type]`.~

~Do we want to start using `Union[type, None]` for type-hinting optional values or just keep on using `Optional`?~ -> Keeping `Optional` for consistency with the rest of the code in `datasets`.

Also, we now allow `batch_size` to be `None` for `IterableDataset.map` and `IterableDataset.filter`, e.g. `MappedExamplesIterable`, as `map` internally instantiates those and propagates the `batch_size` param; so if it can be `None` for `map`, it should also be possible for `MappedExamplesIterable`, as well as for `FilteredExamplesIterable` when calling `IterableDataset.filter`.

## TODOs
- [x] Add integration tests
- [x] Handle scenario where `batched=True` and `batch_size=None` or `batch_size<=0`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5336/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5336/timeline
null
null
true
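The shape of the typing change described above, as an illustrative fragment (not the exact diff from the PR):

```python
from typing import Callable, Optional

class IterableDatasetSketch:
    def map(self, function: Optional[Callable] = None, batched: bool = False,
            batch_size: Optional[int] = 1000):
        # batched=True with batch_size=None (or <= 0) is treated as
        # "process the whole stream as one batch", per the PR's TODO list.
        ...
```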
https://api.github.com/repos/huggingface/datasets/issues/5335
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5335/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5335/comments
https://api.github.com/repos/huggingface/datasets/issues/5335/events
https://github.com/huggingface/datasets/pull/5335
1,478,890,788
PR_kwDODunzps5EeHdA
5,335
Update tasks.json
{ "login": "sayakpaul", "id": 22957388, "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sayakpaul", "html_url": "https://github.com/sayakpaul", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "repos_url": "https://api.github.com/users/sayakpaul/repos", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "I think the only place where we need to add it is here https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts\r\n\r\nAnd I think we can remove tasks.json completely from this repo", "Isn't tasks.json used anymore in this repo?", "> I think the only place where we need to add it is here https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts\r\n> \r\n> And I think we can remove tasks.json completely from this repo\r\n\r\nWhat about the warning I mentioned in https://github.com/huggingface/datasets/issues/5255#issuecomment-1339013527? Also, the depth estimation entry is already present in https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts. ", "The update is based on what I received in the output of the export job (c.f. https://github.com/huggingface/datasets/issues/5255#issuecomment-1339107195). \r\n\r\nEdit: Oh, are you referring to the dataset card of NYU Depth V2?", "Yes, my suggestion was for the dataset card: you got the error message because you tried to set `depth-estimation` in `class_ids` instead of `class_categories`.", "> What about the warning I mentioned in https://github.com/huggingface/datasets/issues/5255#issuecomment-1339013527? Also, the depth estimation entry is already present in https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts.\r\n\r\nif you place it in `task_categories` you should be good :)", "yes i would suggest rm'ing tasks.json here for clarity", "Closing it. ", "It's not clear if we can remove it btw, since old versions of `evaluate` rely on it (see https://github.com/huggingface/evaluate/pull/309)\r\n\r\ncc @lvwerra ", "Actually it can be removed without incidence in old versions of evaluate since we kept an hardcoded `known_task_ids` that is marked \"DEPRECATED\"" ]
2022-12-06T11:37:57
2022-12-08T11:05:33
2022-12-07T12:46:03
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5335", "html_url": "https://github.com/huggingface/datasets/pull/5335", "diff_url": "https://github.com/huggingface/datasets/pull/5335.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5335.patch", "merged_at": null }
Context:
* https://github.com/huggingface/datasets/issues/5255#issuecomment-1339107195

Cc: @osanseviero
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5335/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5335/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5334
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5334/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5334/comments
https://api.github.com/repos/huggingface/datasets/issues/5334/events
https://github.com/huggingface/datasets/pull/5334
1,477,421,927
PR_kwDODunzps5EY9zN
5,334
Clean up docstrings
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks ! Let us know if we can help :)\r\n\r\nSmall pref for having multiple PRs", "Awesome, thanks! Sorry this one is a little big, I'll open some smaller ones next :)" ]
2022-12-05T20:56:08
2022-12-09T01:44:25
2022-12-09T01:41:44
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5334", "html_url": "https://github.com/huggingface/datasets/pull/5334", "diff_url": "https://github.com/huggingface/datasets/pull/5334.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5334.patch", "merged_at": "2022-12-09T01:41:44" }
As raised by @polinaeterna in #5324, some of the docstrings are a bit of a mess because they mix both Markdown and Sphinx syntax. This PR fixes the docstring for `DatasetBuilder`. I'll start working on cleaning up the rest of the docstrings and removing the old Sphinx syntax (let me know if you prefer one big PR with all the cleaned changes or multiple smaller ones)! 🧼
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5334/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5334/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5333
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5333/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5333/comments
https://api.github.com/repos/huggingface/datasets/issues/5333/events
https://github.com/huggingface/datasets/pull/5333
1,476,890,156
PR_kwDODunzps5EXGQ2
5,333
fix: πŸ› pass the token to get the list of config names
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-12-05T16:06:09
2022-12-06T08:25:17
2022-12-06T08:22:49
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5333", "html_url": "https://github.com/huggingface/datasets/pull/5333", "diff_url": "https://github.com/huggingface/datasets/pull/5333.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5333.patch", "merged_at": "2022-12-06T08:22:49" }
Otherwise, get_dataset_infos doesn't work on gated or private datasets, even with the correct token.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5333/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5333/timeline
null
null
true
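What the fix above enables, as a hedged usage sketch (the repo id and token are placeholders):

```python
from datasets import get_dataset_infos

# With the fix, the token is also used when fetching the list of config
# names, so this works for gated and private datasets:
infos = get_dataset_infos("org/private-dataset", use_auth_token="hf_xxx")
```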
https://api.github.com/repos/huggingface/datasets/issues/5332
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5332/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5332/comments
https://api.github.com/repos/huggingface/datasets/issues/5332/events
https://github.com/huggingface/datasets/issues/5332
1,476,513,072
I_kwDODunzps5YAc0w
5,332
Passing numpy array to ClassLabel names causes ValueError
{ "login": "freddyheppell", "id": 1475568, "node_id": "MDQ6VXNlcjE0NzU1Njg=", "avatar_url": "https://avatars.githubusercontent.com/u/1475568?v=4", "gravatar_id": "", "url": "https://api.github.com/users/freddyheppell", "html_url": "https://github.com/freddyheppell", "followers_url": "https://api.github.com/users/freddyheppell/followers", "following_url": "https://api.github.com/users/freddyheppell/following{/other_user}", "gists_url": "https://api.github.com/users/freddyheppell/gists{/gist_id}", "starred_url": "https://api.github.com/users/freddyheppell/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/freddyheppell/subscriptions", "organizations_url": "https://api.github.com/users/freddyheppell/orgs", "repos_url": "https://api.github.com/users/freddyheppell/repos", "events_url": "https://api.github.com/users/freddyheppell/events{/privacy}", "received_events_url": "https://api.github.com/users/freddyheppell/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Should `datasets` allow `ClassLabel` input parameter to be an `np.array` even though internally we need to cast it to a Python list? @lhoestq @mariosasko ", "Hi! No, I don't think so. The `names` parameter is [annotated](https://github.com/huggingface/datasets/blob/582236640b9109988e5f7a16a8353696ffa09a16/src/datasets/features/features.py#L892) as `List[str]` (**NumPy arrays are not lists**), and considering that type checking is not a common practice in Python, I think we can leave the code as-is.", "I appreciate it is the wrong type, and that type checking is not common, but I think there's a few circumstances that make it a good idea from a usability perspective.\r\n\r\nIt's quite a difficult error to debug because it comes from a utility function (so it's not immediately obvious which parameter caused it). What makes it even more difficult is the exception happens when the features instance is used to instantiate the dataset, **not** when when the wrong type is actually passed when the features is instantiated. When I was debugging the error, I didn't really consider it could be an issue with the features instance because it had instantiated fine. It's also not one of the more common exceptions caused by trying to use a non-list as a list.\r\n\r\nIt's also relatively easy to accidentally get a numpy array of class types (e.g. calling `unique()` on a pandas dataframe column). Additionally, passing in a `set` instead of the list (again, relatively easy because people may run `set(classes)` to generate uniques) causes an error when the features instance is used, albeit a slightly more obvious one.\r\n\r\nThe names list is already being processed and validated in the `__post_init__` method anyway, so it would not really be adding any complexity to check it is actually a list here too. I'm happy to contribute this change if you change your mind about whether it's worthwhile.", "I agree that it's not easy to debug this issue, so perhaps we could add some basic type checking (e.g. `not isinstance(names, list)` -> error) to make debugging easier. Feel free to submit a PR.\r\n\r\n> Additionally, passing in a set instead of the list (again, relatively easy because people may run set(classes) to generate uniques) causes an error when the features instance is used, albeit a slightly more obvious one.\r\n\r\n`set` is an unordered structure (it's ordered in Python 3.6+, but this is CPython's implementation detail), and the order of ClassLabel `names` matters, so this doesn't require a fix.", "What about checking for `Sequence` instead? I think users can pass a list or a tuple as well." ]
2022-12-05T12:59:03
2022-12-22T16:32:50
2022-12-22T16:32:50
CONTRIBUTOR
null
null
null
### Describe the bug If a numpy array is passed to the names argument of ClassLabel, creating a dataset with those features causes an error. ### Steps to reproduce the bug https://colab.research.google.com/drive/1cV_es1PWZiEuus17n-2C-w0KEoEZ68IX TLDR: If I define my classes as: ``` my_classes = np.array(['one', 'two', 'three']) ``` Then this errors: ```py features = Features({'value': Value('string'), 'label': ClassLabel(names=my_classes)}) dataset = Dataset.from_list(my_data, features=features) ``` ``` ValueError Traceback (most recent call last) [<ipython-input-8-a8a9d53ec82f>](https://localhost:8080/#) in <module> ----> 1 dataset = Dataset.from_list(my_data, features=features) 11 frames [/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py](https://localhost:8080/#) in _asdict_inner(obj) 183 for f in fields(obj): 184 value = _asdict_inner(getattr(obj, f.name)) --> 185 if not f.init or value != f.default or f.metadata.get("include_in_asdict_even_if_is_default", False): 186 result[f.name] = value 187 return result ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() ``` But this works: ``` features2 = Features({'value': Value('string'), 'label': ClassLabel(names=list(my_classes))}) dataset2 = Dataset.from_list(my_data, features=features2) ``` ### Expected behavior If I provide a numpy array of class names, I would expect either an error that the names list is the wrong type, or for it to be cast internally. ### Environment info - `datasets` version: 2.7.1 - Platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.10 - Python version: 3.8.15 - PyArrow version: 10.0.1 - Pandas version: 1.5.2 Additionally: - Numpy version: 1.23.5
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5332/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5332/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5331
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5331/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5331/comments
https://api.github.com/repos/huggingface/datasets/issues/5331/events
https://github.com/huggingface/datasets/pull/5331
1,473,146,738
PR_kwDODunzps5EKDpr
5,331
Support for multiple configs in packaged modules via metadata yaml info
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "feel free to merge `main` into your PR to fix the CI :)", "Let me see if I can fix the pattern thing ^^'", "Hmm I think it would be easier to specify the `data_files` in the end, because having a split pattern like `{split}-...` at the root of the repository can lead to unexpected behaviors IMO, and we probably don't want to have a different behavior for `data_files` depending if it's inside a `data_dir` or not\r\n\r\nMaybe something like\r\n```yaml\r\nbuilder_config:\r\n data_dir: data_dir\r\n data_files:\r\n - split: train\r\n pattern: train-[0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*\r\n```", " > Also, I'm not sure if it's a good idea to have this field in the YAML metadata - Transformers use this part of the card only for Hub-related stuff (widgets, tags, CO2 emission, etc.), and I think we should aim to do the same in Datasets. We could achieve this by having these kwargs in a special file (they can be seen as a faster way of defining a builder (builder script) that subclasses a packaged builder) and removing the dataset_info field (the only useful info there seem to be features and we can fetch those directly from a dataset script/Parquet files).\r\n\r\nSomething like `config.json`?\r\n\r\n```json\r\n{\r\n \"data_dir\": \"data\"\r\n \"data_files\": {\r\n \"train\": \"train-[0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*\"\r\n }\r\n}\r\n```\r\n\r\nwe could also support lists for several configs", "opened https://github.com/huggingface/datasets/issues/5694", "I opened a PR to this PR to add data_files in YAML: https://github.com/polinaeterna/datasets/pull/1\r\n\r\n```yaml\r\nbuilder_config:\r\n data_files:\r\n - split: train\r\n pattern: data/train-*\r\n```", "Let me open a PR to see if I can move the data files resolution outside of the MetadataConfigs to not modify it in-place", "I wonder if we can make the cache backward compatible: we could just check if the cache directory with the old path exists. It will be useful for the research team which has a big datasets cache", "> I wonder if we can make the cache backward compatible: we could just check if the cache directory with the old path exists. It will be useful for the research team which has a big datasets cache\r\n\r\n![image](https://github.com/huggingface/datasets/assets/16348744/90a96e79-2a0d-4d37-95bd-b75fa962c094)\r\n\r\nIn the next PR maybe? :D \r\nIt's possible but requires some additional logic to correctly pass old `config_kwargs` (which used to include `data_files` but now it's `None` for builders from metadata) to generate the hash which is used to create the path.", "If we only consider datasets that were pushed to hub, it's just a matter of using `\"{username}__parquet\"` instead of `\"{username}__{dataset_name}\"` in the cache directory name. 
The hashes stay the same :)\r\n\r\nEDIT: and the config name\r\nEDIT2: and the arrow file names", "Did a small PR for backward compatibility, it was easy to add in the end: https://github.com/polinaeterna/datasets/pull/3", "Just created a branch [dev-3.0](https://github.com/huggingface/datasets/tree/dev-3.0) in which we can merge this one and the other datasets 3.0 related PRs", "@lhoestq why can't we merge it in main?", "We can, it was just in case we had other things to merge after @mariosasko or @albertvillanova 's reviews", "@lhoestq @albertvillanova @mariosasko we agreed on having `configs` (in plural) as a metadata field in readme but apparently Hub's yaml validation doesn't allow it to be not a list :D \r\n![image](https://github.com/huggingface/datasets/assets/16348744/52131ee8-80e0-4f6e-90cd-8ff83caf4625)\r\n(with `config` (in singular) it works)\r\n\r\nedit: and now the tests for hub datasets with metadata configs are failing because I cannot change the yaml there...", "> we agreed on having configs (in plural) as a metadata field in readme but apparently Hub's yaml validation doesn't allow it to be not a list :D\r\n\r\nIf the `configs` field is specified in the YAML, the Hub can use it to [improve](https://github.com/huggingface/moon-landing/blob/97aca4cac32fbb7d84ce5eba9b18afad87968c4a/server/views/components/DatasetLibraryModal/datasetLibrarySnippets.ts#L11) the `Use in dataset library` snippet by listing the possible config values in `load_dataset`. So I think this needs to be fixed on the Hub side.\r\n\r\nPS: I couldn't find an instance of someone using this field on the Hub, so I think using it for this feature is OK.", "> I couldn't find an instance of someone using this field on the Hub, so I think using it for this feature is OK.\r\n\r\n@mariosasko I think it's because @lhoestq renamed `configs` to `config_names` in all canonical datasets :D so yes, `configs` field is now supposed to include custom configuration parameters introduced in this PR, and `config_names` is used (not really used lol) for list of strings of config names. It's being fixed on the Hub's side https://github.com/huggingface/moon-landing/pull/6490", "after more thought I agree it's maybe overkill to do a major release for this one, since we have a good backward compatibility", "There is one edge case I forgot to mention in the reviews - I think it's a good idea to support passing config params that are functions (Pandas uses them a lot) using this API (e.g. `converters` in the CSV config for converting a string column into a sequence). I see two solutions: string blocks with Python code in YAML or PyYAML [tags](https://pyyaml.org/wiki/PyYAMLDocumentation#yaml-tags-and-python-types). 
\r\n\r\nBut I think this can be addressed later.", "I'm resolving the conflicts and writing some docs :) let's merge this soon !", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005868 / 0.011353 (-0.005485) | 0.003544 / 0.011008 (-0.007464) | 0.080329 / 0.038508 (0.041821) | 0.061072 / 0.023109 (0.037963) | 0.307802 / 0.275898 (0.031904) | 0.340353 / 0.323480 (0.016873) | 0.004665 / 0.007986 (-0.003321) | 0.002779 / 0.004328 (-0.001550) | 0.062065 / 0.004250 (0.057815) | 0.046350 / 0.037052 (0.009297) | 0.312045 / 0.258489 (0.053556) | 0.353524 / 0.293841 (0.059683) | 0.026965 / 0.128546 (-0.101581) | 0.007906 / 0.075646 (-0.067740) | 0.260678 / 0.419271 (-0.158593) | 0.044167 / 0.043533 (0.000634) | 0.309757 / 0.255139 (0.054618) | 0.340188 / 0.283200 (0.056988) | 0.020440 / 0.141683 (-0.121243) | 1.486886 / 1.452155 (0.034732) | 1.548330 / 1.492716 (0.055614) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.188658 / 0.018006 (0.170652) | 0.422204 / 0.000490 (0.421715) | 0.003508 / 0.000200 (0.003308) | 0.000068 / 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025173 / 0.037411 (-0.012238) | 0.072868 / 0.014526 (0.058343) | 0.084817 / 0.176557 (-0.091739) | 0.151667 / 0.737135 (-0.585468) | 0.085632 / 0.296338 (-0.210706) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.400998 / 0.215209 
(0.185789) | 4.022274 / 2.077655 (1.944619) | 2.025768 / 1.504120 (0.521648) | 1.874193 / 1.541195 (0.332998) | 2.006537 / 1.468490 (0.538047) | 0.501799 / 4.584777 (-4.082978) | 2.987487 / 3.745712 (-0.758225) | 4.552295 / 5.269862 (-0.717566) | 2.775859 / 4.565676 (-1.789817) | 0.057596 / 0.424275 (-0.366679) | 0.006449 / 0.007607 (-0.001158) | 0.470776 / 0.226044 (0.244732) | 4.725933 / 2.268929 (2.457005) | 2.480130 / 55.444624 (-52.964494) | 2.183919 / 6.876477 (-4.692558) | 2.408052 / 2.142072 (0.265979) | 0.584038 / 4.805227 (-4.221190) | 0.124964 / 6.500664 (-6.375701) | 0.060939 / 0.075469 (-0.014530) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.221263 / 1.841788 (-0.620524) | 18.326372 / 8.074308 (10.252064) | 13.398937 / 10.191392 (3.207545) | 0.149153 / 0.680424 (-0.531271) | 0.016941 / 0.534201 (-0.517260) | 0.332106 / 0.579283 (-0.247177) | 0.339958 / 0.434364 (-0.094406) | 0.378125 / 0.540337 (-0.162212) | 0.517787 / 1.386936 (-0.869149) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005927 / 0.011353 (-0.005426) | 0.003607 / 0.011008 (-0.007402) | 0.062925 / 0.038508 (0.024417) | 0.058676 / 0.023109 (0.035566) | 0.362129 / 0.275898 (0.086231) | 0.395864 / 0.323480 (0.072384) | 0.004652 / 0.007986 (-0.003334) | 0.002893 / 0.004328 (-0.001435) | 0.062696 / 0.004250 (0.058445) | 0.049988 / 0.037052 (0.012935) | 0.365366 / 0.258489 (0.106877) | 0.412326 / 0.293841 (0.118485) | 0.027118 / 0.128546 (-0.101429) | 0.008179 / 0.075646 (-0.067467) | 0.068048 / 0.419271 (-0.351223) | 0.041065 / 0.043533 (-0.002468) | 0.359858 / 0.255139 (0.104719) | 0.386589 / 0.283200 (0.103390) | 0.020467 / 0.141683 (-0.121216) | 1.438070 / 1.452155 (-0.014084) | 1.479617 / 1.492716 (-0.013099) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231516 / 0.018006 (0.213510) | 0.413407 / 0.000490 (0.412917) | 0.000358 / 0.000200 (0.000158) | 0.000052 / 0.000054 
(-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026071 / 0.037411 (-0.011340) | 0.076486 / 0.014526 (0.061960) | 0.085943 / 0.176557 (-0.090613) | 0.138087 / 0.737135 (-0.599048) | 0.087466 / 0.296338 (-0.208872) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417711 / 0.215209 (0.202502) | 4.171915 / 2.077655 (2.094260) | 2.140677 / 1.504120 (0.636557) | 1.960164 / 1.541195 (0.418969) | 2.002134 / 1.468490 (0.533644) | 0.499699 / 4.584777 (-4.085078) | 2.991814 / 3.745712 (-0.753898) | 2.906589 / 5.269862 (-2.363272) | 1.842305 / 4.565676 (-2.723372) | 0.057633 / 0.424275 (-0.366642) | 0.006465 / 0.007607 (-0.001142) | 0.492874 / 0.226044 (0.266830) | 4.931613 / 2.268929 (2.662684) | 2.623161 / 55.444624 (-52.821463) | 2.310624 / 6.876477 (-4.565853) | 2.483146 / 2.142072 (0.341074) | 0.586910 / 4.805227 (-4.218317) | 0.124681 / 6.500664 (-6.375983) | 0.061561 / 0.075469 (-0.013908) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.319111 / 1.841788 (-0.522677) | 18.637326 / 8.074308 (10.563018) | 13.803912 / 10.191392 (3.612520) | 0.143989 / 0.680424 (-0.536435) | 0.017025 / 0.534201 (-0.517176) | 0.333156 / 0.579283 (-0.246127) | 0.342163 / 0.434364 (-0.092201) | 0.380357 / 0.540337 (-0.159981) | 0.512261 / 1.386936 (-0.874675) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f49a16346dc35e5eabeec39778d0f2e4e850dfd7 \"CML watermark\")\n" ]
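For the PyYAML-tag route floated a few comments above (as a way to pass callables such as pandas `converters` through YAML), a rough sketch follows. This is not what `datasets` implements; it only shows that PyYAML's built-in `!!python/name:` tag can resolve a dotted path to a Python callable, and only under the unsafe loader, so it would be acceptable for trusted local configs at best.

```python
import yaml

# PyYAML resolves "!!python/name:<dotted.path>" to the named Python
# object, but only with the unsafe loader -- arbitrary objects can be
# reached this way, so it is unsuitable for untrusted Hub content.
doc = """
converters:
  price: !!python/name:float
"""
config = yaml.load(doc, Loader=yaml.UnsafeLoader)
print(config["converters"]["price"]("3.14"))  # -> 3.14
```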
2022-12-02T16:43:44
2023-07-24T15:49:54
2023-07-13T13:27:56
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5331", "html_url": "https://github.com/huggingface/datasets/pull/5331", "diff_url": "https://github.com/huggingface/datasets/pull/5331.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5331.patch", "merged_at": "2023-07-13T13:27:56" }
will solve https://github.com/huggingface/datasets/issues/5209 and https://github.com/huggingface/datasets/issues/5151 and many others... Config parameters for packaged builders are parsed from the `"builder_config"` field in the README.md file (a separate first-level field, not part of `"dataset_info"`), example: ```yaml --- dataset_info: ... configs: - config_name: v1 data_dir: v1 drop_labels: true - config_name: v2 data_dir: v2 drop_labels: false ``` I tried to align packaged builders with custom configs parsed from metadata with the script dataset builders as much as possible. Their builders are created dynamically (see `configure_builder_class()` in `load.py`) and have the `BUILDER_CONFIGS` attribute filled with `BuilderConfig` objects in the same way as for datasets with a script. ## load_dataset 1. If there is a single config in the metadata and it doesn't have a name, the name becomes "default" (as we do for `"dataset_info"`), [example](https://huggingface.co/datasets/polinaeterna/audiofolder_one_default_config_in_metadata/blob/main/README.md): ```python load_dataset("ds") == load_dataset("ds", "default") # load with the params provided in metadata load_dataset("ds", "random name") # ValueError: BuilderConfig 'random_name' not found. Available: ['default'] ``` 2. If there is a single config in the metadata with `config_name` provided, it becomes the default one (loaded when no `config_name` is specified), [example](https://huggingface.co/datasets/polinaeterna/audiofolder_one_nondefault_config_in_metadata): ```python load_dataset("ds") == load_dataset("ds", "custom") # load with the params provided in metadata load_dataset("ds", "random name") # ValueError: BuilderConfig 'random_name' not found. Available: ['custom'] ``` 3. If there are several configs in the metadata with names, [example](https://huggingface.co/datasets/polinaeterna/audiofolder_two_configs_in_metadata/blob/main/README.md): ```python load_dataset("ds", "v1") # load with "v1" params load_dataset("ds", "v2") # load with "v2" params load_dataset("ds") # ValueError: BuilderConfig 'default' not found. Available: ['v1', 'v2'] ``` Thanks to @lhoestq and [this change](https://github.com/polinaeterna/datasets/pull/1), it's possible to add a `"default"` field in the yaml and set it to True, to make the config a default one (loaded when no config is specified): ```yaml configs: - config_name: v1 drop_labels: true default: true - config_name: v2 ... ``` then `load_dataset("ds") == load_dataset("ds", "v1")`. ## dataset_name and cache I decided that it's reasonable to add a `dataset_name` attribute to the `DatasetBuilder` class which would be equal to `name` for script datasets but reflect the real dataset name for packaged builders (last part of the path/name from the Hub). This is mostly to reorganize the cache structure (I believe we can do this in the major release?) because otherwise, with custom configs for packaged builders all stored in the same directory, it was becoming a mess. And in general it makes much more sense like this, from the datasets server perspective too, though it's a breaking change. So the cache dir has the following structure: `"{namespace__}<dataset_name>/<config_name>/<version>/<hash>/"` and arrow/parquet filenames are also `"<dataset_name>-<split>.arrow"`. For example `polinaeterna___audiofolder_two_configs_in_metadata/v1-5532fac9443ea252/0.0.0/6cbdd16f8688354c63b4e2a36e1585d05de285023ee6443ffd71c4182055c0fc/` for the `polinaeterna/audiofolder_two_configs_in_metadata` Hub dataset; the train arrow file is `audiofolder_two_configs_in_metadata-train.arrow`.
For script datasets it remains unchanged. ## push_to_hub To support custom configs with `push_to_hub`, the data is put under a directory named either `<config_name>` if `config_name` is **not** "default", or "data" if `config_name` is omitted or "default" (for backward compatibility). A `"builder_config"` field is added to README.md, with `config_name` (optional) and `data_files` fields. For `"data_files"`, a `"pattern"` parameter is introduced to resolve data files correctly; see https://github.com/polinaeterna/datasets/pull/1. - `ds.push_to_hub("ds")` --> one config ("default"), put under the "data" directory, [example](https://huggingface.co/datasets/polinaeterna/push_to_hub_single_config/blob/main/README.md) ```yaml dataset_info: ... configs: data_files: - split: train pattern: data/train-* ... ``` - `ds.push_to_hub("ds", "custom")` --> put under the "custom" directory, [example](https://huggingface.co/datasets/polinaeterna/push_to_hub_singe_nondefault_config/blob/main/README.md) ```yaml configs: config_name: custom data_files: - split: train path: custom/train-* ... ``` - for many configs, [example](https://huggingface.co/datasets/polinaeterna/push_to_hub_many_configs/blob/main/README.md): ```yaml configs: - config_name: v1 data_files: - split: train path: v1/train-* ... - config_name: v2 data_files: - split: train path: v2/train-* ... ``` Thanks to @lhoestq and https://github.com/polinaeterna/datasets/pull/1, when pushing to datasets created **before** this change, README.md is updated accordingly (the config for the old data is added along with the one being pushed). The `"dataset_info"` yaml field is updated too (new configs are added). This shouldn't break anything! TODO in separate PRs: - [x] docs - [ ] probably update test cli util (make --save_info not rewrite `builder_config` in readme)
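For readers who want to inspect the new metadata programmatically, a small sketch with `huggingface_hub` follows. The repo id is taken from the examples above; any dataset whose README.md front matter carries a `configs` list in the format described in this PR should behave the same way.

```python
from huggingface_hub import DatasetCard

# Load the dataset card and read the "configs" list from its YAML
# front matter; the repo id below comes from the examples in this PR.
card = DatasetCard.load("polinaeterna/audiofolder_two_configs_in_metadata")
for cfg in card.data.to_dict().get("configs", []):
    print(cfg.get("config_name"), cfg)
```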
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5331/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5331/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5329
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5329/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5329/comments
https://api.github.com/repos/huggingface/datasets/issues/5329/events
https://github.com/huggingface/datasets/pull/5329
1,471,999,125
PR_kwDODunzps5EGK3y
5,329
Clarify imagefolder is for small datasets
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "I think it's also reasonable to add the same note to the AudioFolder decription", "Thank you ! I think \"regular\" is more appropriate than \"small\". It can easily scale to a few thousands of images - just not millions x)", "Replaced \"small\" with \"several thousand\" since what is considered \"regular\" and even \"small\" can be kind of vague!" ]
2022-12-01T21:47:29
2022-12-06T17:20:04
2022-12-06T17:16:53
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5329", "html_url": "https://github.com/huggingface/datasets/pull/5329", "diff_url": "https://github.com/huggingface/datasets/pull/5329.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5329.patch", "merged_at": "2022-12-06T17:16:53" }
Based on feedback from [here](https://github.com/huggingface/datasets/issues/5317#issuecomment-1334108824), this PR adds a note to the `imagefolder` loading and creating docs that `imagefolder` is designed for small-scale image datasets.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5329/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5329/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5328
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5328/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5328/comments
https://api.github.com/repos/huggingface/datasets/issues/5328/events
https://github.com/huggingface/datasets/pull/5328
1,471,661,437
PR_kwDODunzps5EFAyT
5,328
Fix docs building for main
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "EDIT\r\nAt least the docs for ~~main~~ PR branch are now built:\r\n- https://github.com/huggingface/datasets/actions/runs/3594847760/jobs/6053620813", "Build documentation for main branch was triggered after this PR being merged: https://github.com/huggingface/datasets/actions/runs/3603370082/jobs/6071482470" ]
2022-12-01T17:07:45
2022-12-02T16:29:00
2022-12-02T16:26:00
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5328", "html_url": "https://github.com/huggingface/datasets/pull/5328", "diff_url": "https://github.com/huggingface/datasets/pull/5328.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5328.patch", "merged_at": "2022-12-02T16:26:00" }
This PR reverts the triggering event for building documentation introduced by: - #5250 Fix #5326.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5328/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5328/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5327
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5327/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5327/comments
https://api.github.com/repos/huggingface/datasets/issues/5327/events
https://github.com/huggingface/datasets/pull/5327
1,471,657,247
PR_kwDODunzps5EE_3Q
5,327
Avoid unwanted behaviour when splits from script and metadata are not matching because of outdated metadata
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false } ]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5327). All of your documentation changes will be reflected on that endpoint." ]
2022-12-01T17:05:23
2023-01-23T12:48:29
null
CONTRIBUTOR
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5327", "html_url": "https://github.com/huggingface/datasets/pull/5327", "diff_url": "https://github.com/huggingface/datasets/pull/5327.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5327.patch", "merged_at": null }
will fix #5315
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5327/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5327/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5326
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5326/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5326/comments
https://api.github.com/repos/huggingface/datasets/issues/5326/events
https://github.com/huggingface/datasets/issues/5326
1,471,634,168
I_kwDODunzps5Xt1r4
5,326
No documentation for main branch is built
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
2022-12-01T16:50:58
2022-12-02T16:26:01
2022-12-02T16:26:01
MEMBER
null
null
null
Since: - #5250 - Commit: 703b84311f4ead83c7f79639f2dfa739295f0be6 the docs for the main branch are no longer built. The introduced change only triggers the docs build for releases.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5326/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5326/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5325
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5325/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5325/comments
https://api.github.com/repos/huggingface/datasets/issues/5325/events
https://github.com/huggingface/datasets/issues/5325
1,471,536,822
I_kwDODunzps5Xtd62
5,325
map(...batch_size=None) for IterableDataset
{ "login": "frankier", "id": 299380, "node_id": "MDQ6VXNlcjI5OTM4MA==", "avatar_url": "https://avatars.githubusercontent.com/u/299380?v=4", "gravatar_id": "", "url": "https://api.github.com/users/frankier", "html_url": "https://github.com/frankier", "followers_url": "https://api.github.com/users/frankier/followers", "following_url": "https://api.github.com/users/frankier/following{/other_user}", "gists_url": "https://api.github.com/users/frankier/gists{/gist_id}", "starred_url": "https://api.github.com/users/frankier/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/frankier/subscriptions", "organizations_url": "https://api.github.com/users/frankier/orgs", "repos_url": "https://api.github.com/users/frankier/repos", "events_url": "https://api.github.com/users/frankier/events{/privacy}", "received_events_url": "https://api.github.com/users/frankier/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
closed
false
{ "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false }
[ { "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi! I agree it makes sense for `IterableDataset.map` to support the `batch_size=None` case. This should be super easy to fix.", "@mariosasko as this is something simple maybe I can include it as part of https://github.com/huggingface/datasets/pull/5311? Let me know :+1:", "#self-assign", "Feel free to close this @lhoestq as part of https://github.com/huggingface/datasets/pull/5336 :hugs:", "Thanks again :)\r\n\r\n> For practical usages, an alternative to this would be to convert from an iterable dataset to a map-style dataset, but it is not obvious how to do this.\r\n\r\nThis is interesting as well, if anyone wants to explore" ]
2022-12-01T15:43:42
2022-12-07T15:54:43
2022-12-07T15:54:42
CONTRIBUTOR
null
null
null
### Feature request Dataset.map(...) allows batch_size to be None. It would be nice if IterableDataset did too. ### Motivation Although it may seem a bit of a spurious request given that `IterableDataset` is meant for larger-than-memory datasets, there are a couple of reasons why this might be nice. One is that load_dataset(...) can return either IterableDataset or Dataset. mypy will then complain if batch_size=None even if we know it is Dataset. Of course we can do: assert isinstance(d, datasets.DatasetDict) But it is a mild inconvenience. What's more annoying is that whenever we use something like e.g. `combine_datasets(...)`, we end up with the union again, and so have to do the assert again. Another is that we could actually end up with an IterableDataset small enough for memory in normal/correct usage, e.g. by filtering a massive IterableDataset. For practical usage, an alternative to this would be to convert from an iterable dataset to a map-style dataset, but it is not obvious how to do this. ### Your contribution Not this time.
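A hedged sketch of the iterable-to-map-style conversion mentioned at the end, for when the filtered stream is small enough to hold in memory; it assumes a version of `datasets` that provides `Dataset.from_generator`.

```python
from datasets import Dataset, load_dataset

# Stream a dataset, filter it down, and keep only a small slice.
streamed = load_dataset("rotten_tomatoes", split="train", streaming=True)
small_stream = streamed.filter(lambda ex: ex["label"] == 1).take(100)

# Materialize the (now small) IterableDataset into a map-style Dataset;
# the result supports len(), indexing, and map(..., batch_size=None).
materialized = Dataset.from_generator(lambda: (ex for ex in small_stream))
print(len(materialized))
```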
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5325/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5325/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5324
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5324/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5324/comments
https://api.github.com/repos/huggingface/datasets/issues/5324/events
https://github.com/huggingface/datasets/issues/5324
1,471,524,512
I_kwDODunzps5Xta6g
5,324
Fix docstrings and types in documentation that appears on the website
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
open
false
null
[]
null
[ "I agree we have a mess with docstrings...", "Ok, I believe we've cleaned up most of the old syntax we were using for the user-facing docs! There are still a couple of `:obj:`'s and `:class:` floating around in the docstrings we don't expose that I'll track down :)" ]
2022-12-01T15:34:53
2022-12-13T19:03:55
null
CONTRIBUTOR
null
null
null
While I was working on https://github.com/huggingface/datasets/pull/5313 I noticed that we have a mess in how we annotate types and format args and return values in the code. Some of it is displayed in the [Reference section](https://huggingface.co/docs/datasets/package_reference/builder_classes) of the documentation on the website. It would be nice someday, maybe before releasing datasets 3.0.0, to unify it.
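A rough sketch of the mechanical part of such a cleanup, assuming the goal is to rewrite old Sphinx-style roles (`:obj:`, `:class:`, etc.) into the plain backtick style used by the current doc-builder; the docstring below is made up for illustration.

```python
import re

# A made-up docstring using the old Sphinx roles mentioned in this issue.
old = "Returns a :class:`Dataset` or :obj:`None` if the split is missing."

# Collapse :obj:`x`, :class:`x`, :func:`x`, ... down to plain `x`.
new = re.sub(r":(?:obj|class|func|meth|attr):`([^`]+)`", r"`\1`", old)
print(new)  # Returns a `Dataset` or `None` if the split is missing.
```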
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5324/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5324/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5323
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5323/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5323/comments
https://api.github.com/repos/huggingface/datasets/issues/5323/events
https://github.com/huggingface/datasets/issues/5323
1,471,518,803
I_kwDODunzps5XtZhT
5,323
Duplicated Keys in Taskmaster-2 Dataset
{ "login": "liaeh", "id": 52380283, "node_id": "MDQ6VXNlcjUyMzgwMjgz", "avatar_url": "https://avatars.githubusercontent.com/u/52380283?v=4", "gravatar_id": "", "url": "https://api.github.com/users/liaeh", "html_url": "https://github.com/liaeh", "followers_url": "https://api.github.com/users/liaeh/followers", "following_url": "https://api.github.com/users/liaeh/following{/other_user}", "gists_url": "https://api.github.com/users/liaeh/gists{/gist_id}", "starred_url": "https://api.github.com/users/liaeh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/liaeh/subscriptions", "organizations_url": "https://api.github.com/users/liaeh/orgs", "repos_url": "https://api.github.com/users/liaeh/repos", "events_url": "https://api.github.com/users/liaeh/events{/privacy}", "received_events_url": "https://api.github.com/users/liaeh/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting, @liaeh.\r\n\r\nWe are having a look at it. ", "I have transferred the discussion to the Community tab of the dataset: https://huggingface.co/datasets/taskmaster2/discussions/1" ]
2022-12-01T15:31:06
2022-12-01T16:26:06
2022-12-01T16:26:06
NONE
null
null
null
### Describe the bug

Loading certain splits of the taskmaster-2 dataset fails because of a `DuplicatedKeysError`. This occurs for the following domains: `'hotels', 'movies', 'music', 'sports'`. The domains `'flights', 'food-ordering', 'restaurant-search'` load fine.

### Steps to reproduce the bug

```
from datasets import load_dataset
dataset = load_dataset("taskmaster2", "music")
```

Output:

```
---------------------------------------------------------------------------
DuplicatedKeysError                       Traceback (most recent call last)
File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py:1532, in GeneratorBasedBuilder._prepare_split_single(self, arg)
   1531 example = self.info.features.encode_example(record) if self.info.features is not None else record
-> 1532 writer.write(example, key)
   1533 num_examples_progress_update += 1

File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py:475, in ArrowWriter.write(self, example, key, writer_batch_size)
    474 if self._check_duplicates:
--> 475     self.check_duplicate_keys()
    476     # Re-intializing to empty list for next batch

File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py:492, in ArrowWriter.check_duplicate_keys(self)
    486 duplicate_key_indices = [
    487     str(self._num_examples + index)
    488     for index, (duplicate_hash, _) in enumerate(self.hkey_record)
    489     if duplicate_hash == hash
    490 ]
--> 492 raise DuplicatedKeysError(key, duplicate_key_indices)
    493 else:

DuplicatedKeysError: Found multiple examples generated with the same key
The examples at index 858, 859 have the key dlg-89174425-d57a-4db7-a92b-165c3bff6735

During handling of the above exception, another exception occurred:

DuplicatedKeysError                       Traceback (most recent call last)
File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py:1541, in GeneratorBasedBuilder._prepare_split_single(self, arg)
   1540 num_shards = shard_id + 1
-> 1541 num_examples, num_bytes = writer.finalize()
   1542 writer.close()

File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py:563, in ArrowWriter.finalize(self, close_stream)
    562 if self._check_duplicates:
--> 563     self.check_duplicate_keys()
    564     # Re-intializing to empty list for next batch

File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py:492, in ArrowWriter.check_duplicate_keys(self)
    486 duplicate_key_indices = [
    487     str(self._num_examples + index)
    488     for index, (duplicate_hash, _) in enumerate(self.hkey_record)
    489     if duplicate_hash == hash
    490 ]
--> 492 raise DuplicatedKeysError(key, duplicate_key_indices)
    493 else:

DuplicatedKeysError: Found multiple examples generated with the same key
The examples at index 858, 859 have the key dlg-89174425-d57a-4db7-a92b-165c3bff6735

The above exception was the direct cause of the following exception:

DatasetGenerationError                    Traceback (most recent call last)
Cell In[23], line 1
----> 1 dataset = load_dataset("taskmaster2", "music")

File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py:1741, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, **config_kwargs)
   1738 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES
   1740 # Download and prepare data
-> 1741 builder_instance.download_and_prepare(
   1742     download_config=download_config,
   1743     download_mode=download_mode,
   1744     ignore_verifications=ignore_verifications,
   1745     try_from_hf_gcs=try_from_hf_gcs,
   1746     use_auth_token=use_auth_token,
   1747     num_proc=num_proc,
   1748 )
   1750 # Build dataset for splits
   1751 keep_in_memory = (
   1752     keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
   1753 )

File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py:822, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
    820 if num_proc is not None:
    821     prepare_split_kwargs["num_proc"] = num_proc
--> 822 self._download_and_prepare(
    823     dl_manager=dl_manager,
    824     verify_infos=verify_infos,
    825     **prepare_split_kwargs,
    826     **download_and_prepare_kwargs,
    827 )
    828 # Sync info
    829 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())

File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py:1555, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_splits_kwargs)
   1554 def _download_and_prepare(self, dl_manager, verify_infos, **prepare_splits_kwargs):
-> 1555     super()._download_and_prepare(
   1556         dl_manager, verify_infos, check_duplicate_keys=verify_infos, **prepare_splits_kwargs
   1557     )

File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py:913, in DatasetBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
    909 split_dict.add(split_generator.split_info)
    911 try:
    912     # Prepare split will record examples associated to the split
--> 913     self._prepare_split(split_generator, **prepare_split_kwargs)
    914 except OSError as e:
    915     raise OSError(
    916         "Cannot find data file. "
    917         + (self.manual_download_instructions or "")
    918         + "\nOriginal error:\n"
    919         + str(e)
    920     ) from None

File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py:1396, in GeneratorBasedBuilder._prepare_split(self, split_generator, check_duplicate_keys, file_format, num_proc, max_shard_size)
   1394 gen_kwargs = split_generator.gen_kwargs
   1395 job_id = 0
-> 1396 for job_id, done, content in self._prepare_split_single(
   1397     {"gen_kwargs": gen_kwargs, "job_id": job_id, **_prepare_split_args}
   1398 ):
   1399     if done:
   1400         result = content

File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py:1550, in GeneratorBasedBuilder._prepare_split_single(self, arg)
   1548 if isinstance(e, SchemaInferenceError) and e.__context__ is not None:
   1549     e = e.__context__
-> 1550 raise DatasetGenerationError("An error occurred while generating the dataset") from e
   1552 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)

DatasetGenerationError: An error occurred while generating the dataset
```

### Expected behavior

Loads the dataset

### Environment info

- `datasets` version: 2.7.1
- Platform: Linux-5.13.0-40-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 10.0.1
- Pandas version: 1.5.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5323/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5323/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5322
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5322/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5322/comments
https://api.github.com/repos/huggingface/datasets/issues/5322/events
https://github.com/huggingface/datasets/pull/5322
1,471,502,162
PR_kwDODunzps5EEeQP
5,322
Raise error for `.tar` archives in the same way as for `.tar.gz` and `.tgz` in `_get_extraction_protocol`
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-12-01T15:19:28
2022-12-14T16:37:16
2022-12-14T16:33:30
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5322", "html_url": "https://github.com/huggingface/datasets/pull/5322", "diff_url": "https://github.com/huggingface/datasets/pull/5322.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5322.patch", "merged_at": "2022-12-14T16:33:30" }
Currently `download_and_extract` doesn't throw an error when it is used with files with the `.tar` extension in streaming mode, because `_get_extraction_protocol` doesn't do it (like it does for `.tar.gz` and `.tgz`). `_get_extraction_protocol` returns a formatted URL as if we supported the tar protocol, but we don't. That means that in dataset scripts, loading `.tar` files would be attempted and would fail during example generation (after `download_and_extract` execution). So this PR raises an error for `.tar` files too.
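A minimal sketch of the added branch (the function name is real, but the body and error message below are simplified assumptions, not the exact diff):

```python
def _get_extraction_protocol(urlpath: str):
    # sketch: treat bare ".tar" like ".tar.gz"/".tgz" and fail loudly in
    # streaming mode instead of returning a bogus chained "tar://" URL
    path = urlpath.split("::")[0]
    if path.endswith((".tar.gz", ".tgz", ".tar")):
        raise NotImplementedError(
            f"Extraction protocol for TAR archives like '{urlpath}' is not implemented "
            "in streaming mode. Please use `dl_manager.iter_archive` instead."
        )
    # ... resolve the compression/extraction protocol for other extensions ...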
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5322/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5322/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5321
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5321/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5321/comments
https://api.github.com/repos/huggingface/datasets/issues/5321/events
https://github.com/huggingface/datasets/pull/5321
1,471,430,667
PR_kwDODunzps5EEOhE
5,321
Fix loading from HF GCP cache
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "> Do you know why this stopped working?\r\n\r\nIt comes from the changes in https://github.com/huggingface/datasets/pull/5107/files#diff-355ae5c229f95f86895404b72378ecd6e966c41cbeebb674af6fe6e9611bc126" ]
2022-12-01T14:39:06
2022-12-01T16:10:09
2022-12-01T16:07:02
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5321", "html_url": "https://github.com/huggingface/datasets/pull/5321", "diff_url": "https://github.com/huggingface/datasets/pull/5321.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5321.patch", "merged_at": "2022-12-01T16:07:02" }
As reported in https://discuss.huggingface.co/t/error-loading-wikipedia-dataset/26599/4, it's not possible to download a cached version of Wikipedia from the HF GCP cache. I fixed it and added an integration test (runs in 10 sec).
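For reference, the failing code path from the forum thread boils down to loading one of the preprocessed Wikipedia configs; the config name below is an illustrative example, not taken from the PR:

```python
from datasets import load_dataset

# served from the HF GCP cache: downloads prebuilt Arrow files instead of
# reprocessing the whole Wikipedia dump locally
ds = load_dataset("wikipedia", "20220301.en")
```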
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5321/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5321/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5320
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5320/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5320/comments
https://api.github.com/repos/huggingface/datasets/issues/5320/events
https://github.com/huggingface/datasets/pull/5320
1,471,360,910
PR_kwDODunzps5ED_UQ
5,320
[Extract] Place the lock file next to the destination directory
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-12-01T13:55:49
2022-12-01T15:36:44
2022-12-01T15:33:58
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5320", "html_url": "https://github.com/huggingface/datasets/pull/5320", "diff_url": "https://github.com/huggingface/datasets/pull/5320.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5320.patch", "merged_at": "2022-12-01T15:33:58" }
Previously it was placed next to the archive to extract, but the archive can be in a read-only directory, as noticed in https://github.com/huggingface/datasets/issues/5295. Therefore I moved the lock location to be next to the destination directory, which is required to have write permissions.
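A rough sketch of the resulting pattern (function and variable names are illustrative; the real code lives in the extraction manager and uses the library's own FileLock wrapper):

```python
import os
from filelock import FileLock

def extract_archive(archive_path: str, output_dir: str) -> str:
    # the lock sits next to the (writable) destination rather than next to
    # the archive, which may be on a read-only filesystem
    lock_path = output_dir + ".lock"
    with FileLock(lock_path):
        if not os.path.isdir(output_dir):
            ...  # extract archive_path into output_dir
    return output_dir
```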
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5320/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5320/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5319
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5319/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5319/comments
https://api.github.com/repos/huggingface/datasets/issues/5319/events
https://github.com/huggingface/datasets/pull/5319
1,470,945,515
PR_kwDODunzps5ECkfc
5,319
Fix Text sample_by paragraph
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-12-01T09:08:09
2022-12-01T15:21:44
2022-12-01T15:19:00
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5319", "html_url": "https://github.com/huggingface/datasets/pull/5319", "diff_url": "https://github.com/huggingface/datasets/pull/5319.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5319.patch", "merged_at": "2022-12-01T15:19:00" }
Fix #5316.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5319/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5319/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5318
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5318/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5318/comments
https://api.github.com/repos/huggingface/datasets/issues/5318/events
https://github.com/huggingface/datasets/pull/5318
1,470,749,750
PR_kwDODunzps5EB6RM
5,318
Origin/fix missing features error
{ "login": "eunseojo", "id": 12104720, "node_id": "MDQ6VXNlcjEyMTA0NzIw", "avatar_url": "https://avatars.githubusercontent.com/u/12104720?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eunseojo", "html_url": "https://github.com/eunseojo", "followers_url": "https://api.github.com/users/eunseojo/followers", "following_url": "https://api.github.com/users/eunseojo/following{/other_user}", "gists_url": "https://api.github.com/users/eunseojo/gists{/gist_id}", "starred_url": "https://api.github.com/users/eunseojo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eunseojo/subscriptions", "organizations_url": "https://api.github.com/users/eunseojo/orgs", "repos_url": "https://api.github.com/users/eunseojo/repos", "events_url": "https://api.github.com/users/eunseojo/events{/privacy}", "received_events_url": "https://api.github.com/users/eunseojo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "please review :) @lhoestq @ola13 thankoo", "Thanks :) I just updated the test to make sure it works even when there's a column missing, and did a minor change to json.py to add the missing columns for the other kinds of JSON files as well (I moved the code to`self._cast_table`)", "Thanks Unso! If @lhoestq is happy then I'm also happy :D", "When I noticed the ping, this PR had already been merged...\r\n\r\nLuckily, PyArrow's `read_json` behaves the same when `explicit_schema` is given via `ParseOptions`, so I'm okay with this change (our JSON loader doesn't use `read_json` for decoding JSON in some scenarios, so this manual approach is the right one).\r\n" ]
2022-12-01T06:18:39
2022-12-12T19:06:42
2022-12-04T05:49:39
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5318", "html_url": "https://github.com/huggingface/datasets/pull/5318", "diff_url": "https://github.com/huggingface/datasets/pull/5318.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5318.patch", "merged_at": "2022-12-04T05:49:39" }
This fixes the problem where `load_dataset` reads a file with "features" provided, but some read batches are missing columns that show up later. For instance, the provided "features" require columns A, B, C, but only columns B and C appear in a batch. This PR fixes that by adding column A filled with nulls.
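A small pyarrow illustration of the idea (not the code from the PR itself): any column declared in the requested features but absent from a batch gets appended as an all-null array before casting:

```python
import pyarrow as pa

schema = pa.schema({"A": pa.int64(), "B": pa.string(), "C": pa.float64()})
batch = pa.table({"B": ["x", "y"], "C": [1.0, 2.0]})  # column "A" is missing

# append each declared-but-missing column filled with nulls...
for field in schema:
    if field.name not in batch.column_names:
        batch = batch.append_column(field.name, pa.array([None] * len(batch), type=field.type))

# ...then reorder and cast to the declared schema
batch = batch.select(schema.names).cast(schema)
print(batch)
```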
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5318/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5318/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5317
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5317/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5317/comments
https://api.github.com/repos/huggingface/datasets/issues/5317/events
https://github.com/huggingface/datasets/issues/5317
1,470,390,164
I_kwDODunzps5XpF-U
5,317
`ImageFolder` performs poorly with large datasets
{ "login": "salieri", "id": 1086393, "node_id": "MDQ6VXNlcjEwODYzOTM=", "avatar_url": "https://avatars.githubusercontent.com/u/1086393?v=4", "gravatar_id": "", "url": "https://api.github.com/users/salieri", "html_url": "https://github.com/salieri", "followers_url": "https://api.github.com/users/salieri/followers", "following_url": "https://api.github.com/users/salieri/following{/other_user}", "gists_url": "https://api.github.com/users/salieri/gists{/gist_id}", "starred_url": "https://api.github.com/users/salieri/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/salieri/subscriptions", "organizations_url": "https://api.github.com/users/salieri/orgs", "repos_url": "https://api.github.com/users/salieri/repos", "events_url": "https://api.github.com/users/salieri/events{/privacy}", "received_events_url": "https://api.github.com/users/salieri/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi ! ImageFolder is made for small scale datasets indeed. For large scale image datasets you better group your images in TAR archives or Arrow/Parquet files. This is true not just for ImageFolder loading performance, but also because having millions of files is not ideal for your filesystem or when moving the data around.\r\n\r\nOption 1. use TAR archives\r\n\r\nI'd suggest you to take a look at how we load [Imagenet](https://huggingface.co/datasets/imagenet-1k/tree/main) for example. The dataset is sharded in multiple TAR archives and there is a [script](https://huggingface.co/datasets/imagenet-1k/blob/main/imagenet-1k.py) that iterates over the archives to load the images.\r\n\r\nOption 2. use Arrow/Parquet\r\n\r\nYou can load your images as an Arrow Dataset with\r\n```python\r\nfrom datasets import Dataset, Image, load_from_disk, load_dataset\r\n\r\nds = Dataset.from_dict({\"image\": list(glob.glob(\"path/to/dir/**/*.jpg\"))})\r\n\r\ndef add_metadata(example):\r\n ...\r\n\r\nds = ds.map(add_metadata, num_proc=16) # num_proc for multiprocessing\r\nds = ds.cast_column(\"image\", Image())\r\n\r\n# save as Arrow locally\r\nds.save_to_disk(\"output_dir\")\r\nreloaded = load_from_disk(\"output_dir\")\r\n\r\n# OR save as Parquet on the HF Hub\r\nds.push_to_hub(\"username/dataset_name\")\r\nreloaded = load_dataset(\"username/dataset_name\")\r\n# reloaded = load_dataset(\"username/dataset_name\", num_proc=16) # to use multiprocessing\r\n```\r\n\r\nPS: maybe we can actually have something similar to ImageFolder but for image archives at one point ?", "@lhoestq Thanks!\r\n\r\nPerhaps it'd be worth adding a note on the documentation that `ImageFolder` is not intended for large datasets? This limitation is not intuitively obvious to someone who has not used it before, I think.", "Thanks for the feedback @salieri! I opened #5329 to make it clear `ImageFolder` is not intended for large datasets. Please feel free to comment if you have any other feedback! πŸ™‚ " ]
2022-12-01T00:04:21
2022-12-01T21:49:26
null
NONE
null
null
null
### Describe the bug

While testing image dataset creation, I'm seeing significant performance bottlenecks with imagefolder datasets when scanning a directory structure with a large number of images.

## Setup

* Nested directories (5 levels deep)
* 3M+ images
* 1 `metadata.jsonl` file

## Performance Degradation Point 1

Degradation occurs because [`get_data_files_patterns`](https://github.com/huggingface/datasets/blob/main/src/datasets/data_files.py#L231-L243) runs the exact same scan for many different types of patterns, and there doesn't seem to be a way to easily limit this. It's controlled by the definition of [`ALL_DEFAULT_PATTERNS`](https://github.com/huggingface/datasets/blob/main/src/datasets/data_files.py#L82-L85).

One scan with 3M+ files takes about 10-15 minutes to complete on my setup, so having those extra scans really slows things down – from 10 minutes to 60+. Most of the scans return no matches, but they still take a significant amount of time to complete – hence the poor performance.

As a side effect, when this scan is run on 3M+ image files, Python also consumes up to 12 GB of RAM, which is not ideal.

## Performance Degradation Point 2

The second performance bottleneck is in [`PackagedDatasetModuleFactory.get_module`](https://github.com/huggingface/datasets/blob/d7dfbc83d68e87ba002c5eb2555f7a932e59038a/src/datasets/load.py#L707-L711), which calls `DataFilesDict.from_local_or_remote`. It runs for a long time (60min+), consuming significant amounts of RAM – even more than point 1 above.

Based on `iostat -d 2`, it performs **zero** disk operations, which to me suggests that there is a code-based bottleneck there that could be sorted out.

### Steps to reproduce the bug

```python
from datasets import load_dataset
import os
import huggingface_hub

dataset = load_dataset(
    'imagefolder',
    data_dir='/some/path',
    # just to spell it out:
    split=None,
    drop_labels=True,
    keep_in_memory=False
)

dataset.push_to_hub('account/dataset', private=True)
```

### Expected behavior

While it's certainly possible to write a custom loader to replace `ImageFolder` with, it'd be great if the off-the-shelf `ImageFolder` would by default have a setup that can scale to large datasets. Or perhaps there could be a dedicated loader just for large datasets that trades off flexibility for performance?

As in, maybe you have to define explicitly how you want it to work rather than it trying to guess your data structure like `_get_data_files_patterns()` does?

### Environment info

- `datasets` version: 2.7.1
- Platform: Linux-4.14.296-222.539.amzn2.x86_64-x86_64-with-glibc2.2.5
- Python version: 3.7.10
- PyArrow version: 10.0.1
- Pandas version: 1.3.5
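One mitigation that follows from the analysis above (the `data_files` API is standard, but the size of the speedup on this exact tree is an assumption): pass an explicit glob so the loader resolves a single user-provided pattern instead of probing every entry in `ALL_DEFAULT_PATTERNS`:

```python
from datasets import load_dataset

# a single explicit glob avoids the repeated full-tree scans;
# note: metadata.jsonl would also need to be matched if you rely on it
dataset = load_dataset(
    "imagefolder",
    data_files={"train": "/some/path/**/*.jpg"},
    drop_labels=True,
)
```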
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5317/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5317/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5316
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5316/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5316/comments
https://api.github.com/repos/huggingface/datasets/issues/5316/events
https://github.com/huggingface/datasets/issues/5316
1,470,115,681
I_kwDODunzps5XoC9h
5,316
Bug in sample_by="paragraph"
{ "login": "adampauls", "id": 1243668, "node_id": "MDQ6VXNlcjEyNDM2Njg=", "avatar_url": "https://avatars.githubusercontent.com/u/1243668?v=4", "gravatar_id": "", "url": "https://api.github.com/users/adampauls", "html_url": "https://github.com/adampauls", "followers_url": "https://api.github.com/users/adampauls/followers", "following_url": "https://api.github.com/users/adampauls/following{/other_user}", "gists_url": "https://api.github.com/users/adampauls/gists{/gist_id}", "starred_url": "https://api.github.com/users/adampauls/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adampauls/subscriptions", "organizations_url": "https://api.github.com/users/adampauls/orgs", "repos_url": "https://api.github.com/users/adampauls/repos", "events_url": "https://api.github.com/users/adampauls/events{/privacy}", "received_events_url": "https://api.github.com/users/adampauls/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting, @adampauls.\r\n\r\nWe are having a look at it. " ]
2022-11-30T19:24:13
2022-12-01T15:19:02
2022-12-01T15:19:02
NONE
null
null
null
### Describe the bug

I think [this line](https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/text/text.py#L96) is wrong and should be `batch = f.read(self.config.chunksize)`. Otherwise it will never terminate, because even when `f` is finished reading, `batch` will still be truthy from the last iteration.

### Steps to reproduce the bug

```
> cat test.txt
a b c d e f
```

```python
>>> import datasets
>>> datasets.load_dataset("text", data_files={"train": "test.txt"}, sample_by="paragraph")
```

This will go on forever.

### Expected behavior

Terminates very quickly.

### Environment info

`version = "2.6.1"` but I think the bug is still there on main.
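To make the failure mode concrete, here is a stripped-down version of the termination logic (the real loop in `text.py` also splits each batch into paragraphs; this sketch only shows why the reassignment matters):

```python
# buggy shape: `batch` is only ever extended, so once f.read() returns ""
# at EOF, `batch` keeps its previous truthy value and the loop never exits:
#     while batch:
#         ...
#         batch += f.read(chunksize)
#
# fixed shape: reassigning lets the empty string at EOF end the loop
with open("test.txt", encoding="utf-8") as f:
    chunksize = 10 << 20
    batch = f.read(chunksize)
    while batch:
        ...  # yield the paragraphs contained in `batch`
        batch = f.read(chunksize)  # "" at EOF -> loop terminates
```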
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5316/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5316/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5315
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5315/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5315/comments
https://api.github.com/repos/huggingface/datasets/issues/5315/events
https://github.com/huggingface/datasets/issues/5315
1,470,026,797
I_kwDODunzps5XntQt
5,315
Adding new splits to a dataset script with existing old splits info in metadata's `dataset_info` fails
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false } ]
null
[ "EDIT:\r\nI think in this case, the metadata files (either README or JSON) should not be read (i.e. `self.info.splits` should be None).\r\n\r\nOne idea: \r\n- I think ideally we should set this behavior when we pass `--save_info` to the CLI `test`\r\n- However, currently, the builder is unaware of this: `save_info` arg is not passed to it", "> I think in this case\r\n\r\n@albertvillanova You mean in cases when the script was changed? \r\n\r\nI suggest that we:\r\n* add a check on the slice (like 'split_name[n%]) kind of format here: https://github.com/huggingface/datasets/blob/main/src/datasets/splits.py#L523 to catch things like this. \r\n* Error here happens before splits verification, but in `_prepare_split`, and `_prepare_split` doesn't perform any verification and don't know about it. so we can pass this parameter and take splits from `split_generator`, not from `split.info` in case when `verify_infos` is False\r\n* we can check if split **names** from split_generators and self.info.splits are the same **before** preparing splits (if `verify_info=True`) so that we don't spend time on generating unwanted data. \r\n* provide some user-friendly warnings about `ignore_verifications` parameter so that users know that if something is not matching they can ignore it\r\n\r\nI started it here: https://github.com/huggingface/datasets/pull/5327/files\r\n\r\nWhat do you think @albertvillanova ?", "I edited my previous comment:\r\n- First I proposed setting `self.info.splits` to None when `ignore_verifications=True`\r\n - I thought it was the easiest implementation because `ignore_verifications` is passed to `DatasetBuilder.download_and_prepare`\r\n - However, afterwards, I realized this might not be a good idea for this use case:\r\n - A user wants to optimize the loading of the dataset, and passes `ignore_verifications=False` to avoid all the verifications\r\n - In this case, we want `self.info.splits` to be read from metadata file\r\n- Then, I thought that it might be better to set `self.info.splits` to None when we pass `--save_info` to the CLI test: if we are going to save the info to the metadata file, it makes no sense to read the info from the metadata file\r\n - This implementation is not so easy because the Builder knows nothing about `--save_info`\r\n\r\nI agree with you there are 2 things to be addressed here:\r\n- One is what I have just commented: `self.info.splits` should be None in this case\r\n- The other, a validation should be implemented when calling `make_file_instructions` and/or `SplitDict.__getitem__`, so that when passing \"training\" to it, we get a more descriptive error other than `TypeError: expected str, bytes or os.PathLike object, not NoneType` " ]
2022-11-30T18:02:15
2022-12-02T07:02:53
null
CONTRIBUTOR
null
null
null
### Describe the bug

If you first create a custom dataset with a specific set of splits, generate metadata with `datasets-cli test ... --save_info`, then change your script to include more splits, it fails.

That's what happened in https://huggingface.co/datasets/mrdbourke/food_vision_199_classes/discussions/2#6385fd1269634850f8ddff48.

### Steps to reproduce the bug

1. Create a dataset with a custom split that returns, for example, only a `"train"` split in `_split_generators`. Specifically, if you really want to reproduce, copy https://huggingface.co/datasets/mrdbourke/food_vision_199_classes/blob/main/food_vision_199_classes.py
2. Run `datasets-cli test dataset_script.py --save_info --all_configs` - this would generate metadata yaml in `README.md` that would contain info about splits, for example, like this:

   ```
   splits:
   - name: train
     num_bytes: 2973286
     num_examples: 19747
   ```

3. Make changes to your script so that it returns another set of splits, for example, `"train"` and `"test"` (uncomment [these lines](https://huggingface.co/datasets/mrdbourke/food_vision_199_classes/blob/main/food_vision_199_classes.py#L271))
4. Run `load_dataset` and get the following error:

   ```python
   Traceback (most recent call last):
     File "/home/daniel/code/pytorch/env/bin/datasets-cli", line 8, in <module>
       sys.exit(main())
     File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/commands/datasets_cli.py", line 39, in main
       service.run()
     File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/commands/test.py", line 141, in run
       builder.download_and_prepare(
     File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/builder.py", line 822, in download_and_prepare
       self._download_and_prepare(
     File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/builder.py", line 1555, in _download_and_prepare
       super()._download_and_prepare(
     File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/builder.py", line 913, in _download_and_prepare
       self._prepare_split(split_generator, **prepare_split_kwargs)
     File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/builder.py", line 1356, in _prepare_split
       split_info = self.info.splits[split_generator.name]
     File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/splits.py", line 525, in __getitem__
       instructions = make_file_instructions(
     File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/arrow_reader.py", line 111, in make_file_instructions
       name2filenames = {
     File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/arrow_reader.py", line 112, in <dictcomp>
       info.name: filenames_for_dataset_split(
     File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/naming.py", line 78, in filenames_for_dataset_split
       prefix = filename_prefix_for_split(dataset_name, split)
     File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/naming.py", line 57, in filename_prefix_for_split
       if os.path.basename(name) != name:
     File "/home/daniel/code/pytorch/env/lib/python3.8/posixpath.py", line 143, in basename
       p = os.fspath(p)
   TypeError: expected str, bytes or os.PathLike object, not NoneType
   ```

5. Bonus: try to regenerate metadata in `README.md` with `datasets-cli` as in step 2 and get the same error. This is because `dataset.info.splits` contains only the `"train"` split, so when we are doing `self.info.splits[split_generator.name]` it tries to infer something like `info.splits['train[50%]']`, and that's not the case and it fails.

### Expected behavior

To be discussed? This can be solved by removing the splits information from the metadata file first. But I wonder if there is a better way.

### Environment info

- Datasets version: 2.7.1
- Python version: 3.8.13
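Until this is resolved, the workaround mentioned above (removing the stale splits info before regenerating it) can be scripted; a sketch, assuming the split metadata lives in the README's YAML front matter rather than in `dataset_infos.json`:

```python
import yaml

with open("README.md", encoding="utf-8") as f:
    text = f.read()

# the front matter sits between the first two "---" fences
_, front, body = text.split("---\n", 2)
meta = yaml.safe_load(front)
meta.pop("dataset_info", None)  # drop the outdated splits metadata

with open("README.md", "w", encoding="utf-8") as f:
    f.write("---\n" + yaml.safe_dump(meta) + "---\n" + body)
```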
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5315/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5315/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5314
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5314/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5314/comments
https://api.github.com/repos/huggingface/datasets/issues/5314/events
https://github.com/huggingface/datasets/issues/5314
1,469,685,118
I_kwDODunzps5XmZ1-
5,314
Datasets: classification_report() got an unexpected keyword argument 'suffix'
{ "login": "JonathanAlis", "id": 42126634, "node_id": "MDQ6VXNlcjQyMTI2NjM0", "avatar_url": "https://avatars.githubusercontent.com/u/42126634?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JonathanAlis", "html_url": "https://github.com/JonathanAlis", "followers_url": "https://api.github.com/users/JonathanAlis/followers", "following_url": "https://api.github.com/users/JonathanAlis/following{/other_user}", "gists_url": "https://api.github.com/users/JonathanAlis/gists{/gist_id}", "starred_url": "https://api.github.com/users/JonathanAlis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JonathanAlis/subscriptions", "organizations_url": "https://api.github.com/users/JonathanAlis/orgs", "repos_url": "https://api.github.com/users/JonathanAlis/repos", "events_url": "https://api.github.com/users/JonathanAlis/events{/privacy}", "received_events_url": "https://api.github.com/users/JonathanAlis/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "This seems similar to https://github.com/huggingface/datasets/issues/2512 Can you try to update seqeval ? ", "@JonathanAlis also note that the metrics are deprecated in our `datasets` library.\r\n\r\nPlease, use the new library πŸ€— Evaluate instead: https://huggingface.co/docs/evaluate" ]
2022-11-30T14:01:03
2023-07-21T14:40:31
2023-07-21T14:40:31
NONE
null
null
null
https://github.com/huggingface/datasets/blob/main/metrics/seqeval/seqeval.py

```python
import datasets

predictions = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
references = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]

seqeval = datasets.load_metric("seqeval")
results = seqeval.compute(predictions=predictions, references=references)

print(list(results.keys()))
print(results["overall_f1"])
print(results["PER"]["f1"])
```

It raises the error:

> TypeError: classification_report() got an unexpected keyword argument 'suffix'

For context, versions on my `pip list -v`:

> datasets 1.12.1
> seqeval 1.2.2
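Following the suggestion in the comments to move off the deprecated metrics, the equivalent call with πŸ€— Evaluate looks like this (a sketch; assumes `pip install evaluate seqeval` with a recent `seqeval`):

```python
import evaluate

predictions = [["O", "O", "B-MISC", "I-MISC", "I-MISC", "I-MISC", "O"], ["B-PER", "I-PER", "O"]]
references = [["O", "O", "O", "B-MISC", "I-MISC", "I-MISC", "O"], ["B-PER", "I-PER", "O"]]

seqeval = evaluate.load("seqeval")
results = seqeval.compute(predictions=predictions, references=references)
print(results["overall_f1"], results["PER"]["f1"])
```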
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5314/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5314/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5313
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5313/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5313/comments
https://api.github.com/repos/huggingface/datasets/issues/5313/events
https://github.com/huggingface/datasets/pull/5313
1,468,484,136
PR_kwDODunzps5D6Qfb
5,313
Fix description of streaming in the docs
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-11-29T18:00:28
2022-12-01T14:55:30
2022-12-01T14:00:34
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5313", "html_url": "https://github.com/huggingface/datasets/pull/5313", "diff_url": "https://github.com/huggingface/datasets/pull/5313.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5313.patch", "merged_at": "2022-12-01T14:00:34" }
We say that "the data is being downloaded progressively", which is not true: it's just streamed. So I fixed it. Probably I missed some other places where it is written?

Also changed the docstrings for `StreamingDownloadManager`'s `download` and `extract` to reflect the same, as these docstrings are displayed in the documentation.

cc @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5313/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5313/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5312
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5312/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5312/comments
https://api.github.com/repos/huggingface/datasets/issues/5312/events
https://github.com/huggingface/datasets/pull/5312
1,468,352,562
PR_kwDODunzps5D5zxI
5,312
Add DatasetDict.to_pandas
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The current implementation is what I had in mind, i.e. concatenate all splits by default.\r\n\r\nHowever, I think most tabular datasets would come as a single split. So for that usecase, it wouldn't change UX if we raise when there are more than one splits.\r\n\r\nAnd for multiple splits, the user either passes a list, or they can pass `splits=\"all\"` to have all splits concatenated.", "I think it's better to raise an error in cases when there are multiple splits but no split is specified so that users know for sure with which data they are working. I imagine a case when a user loads a dataset that they don't know much about (like what splits it has), and if they get a concatenation of everything, it might lead to incorrect processing or interpretations and it would be hard to notice it.\r\n(\"explicit is better than implicit\")", "I just changed to raise an error if there are multiple splits. The error shows an example of how to choose a split to convert.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5312). All of your documentation changes will be reflected on that endpoint.", "Thanks for the review, I've updated the type hint and added a line to raise an error on bad splits :)", "Merging https://github.com/huggingface/datasets/pull/5301 would eliminate the need for this PR, no?\r\n\r\nIn the meantime, I find the current API cleaner.", "This solution is simpler than https://github.com/huggingface/datasets/pull/5301 and covers most cases for tabular datasets, so I'm in favor of merging this one and put https://github.com/huggingface/datasets/pull/5301 on stand by", "Let me know if it sounds good to you @mariosasko @albertvillanova :)", "I'm still not convinced. If `DatasetDict` needs this method and there is no other way, then IMO it would make more sense to return a dictionary with the splits converted to `pd.DataFrame`. ", "@mariosasko the issue we're dealing with is that in tabular scenarios, we often don't have splits in the dataset, and imposing that concept to people dealing with the library hampers adoption.", "@adrinjalali This PR proposes a solution inconsistent with the existing API (in other words, a solution that clutters our API πŸ™‚). Moreover, our library primarily focuses on larger-than-RAM datasets, and tabular datasets don't (directly) fall into this group.\r\n\r\nInstead of the temporary \"fix\" proposed here, it makes much more sense to align `load_dataset` with both tabular and DL workflows \"in a consistent way\", so I suggest we continue our discussion from https://github.com/huggingface/datasets/issues/5189 to have this resolved by version 3.0.", "closing this one for now" ]
2022-11-29T16:30:02
2023-01-25T17:33:43
2023-01-25T17:33:42
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5312", "html_url": "https://github.com/huggingface/datasets/pull/5312", "diff_url": "https://github.com/huggingface/datasets/pull/5312.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5312.patch", "merged_at": null }
From discussions in https://github.com/huggingface/datasets/issues/5189, for tabular data it doesn't really make sense to have to do

```python
df = load_dataset(...)["train"].to_pandas()
```

because many datasets are not split.

In this PR I added `to_pandas` to `DatasetDict`, which returns the DataFrame.

If there's only one split, you don't need to specify the split name:

```python
df = load_dataset(...).to_pandas()
```

EDIT: and if a dataset has multiple splits:

```python
df = load_dataset(...).to_pandas(splits=["train", "test"])
# or
df = load_dataset(...).to_pandas(splits="all")

# raises an error because you need to select the split(s) to convert
load_dataset(...).to_pandas()
```

I do have one question though @merveenoyan @adrinjalali @mariosasko: Should we raise an error if there are multiple splits and ask the user to choose one explicitly?
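For comparison, the dict-of-DataFrames behavior suggested in the review comments is already expressible today; a sketch:

```python
from datasets import load_dataset

dset_dict = load_dataset("rotten_tomatoes")  # has train/validation/test splits
dfs = {split: ds.to_pandas() for split, ds in dset_dict.items()}
print(dfs["train"].head())
```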
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5312/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5312/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5311
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5311/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5311/comments
https://api.github.com/repos/huggingface/datasets/issues/5311/events
https://github.com/huggingface/datasets/pull/5311
1,467,875,153
PR_kwDODunzps5D4Mm3
5,311
Add `features` param to `IterableDataset.map`
{ "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-11-29T11:08:34
2022-12-06T15:45:02
2022-12-06T15:42:04
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5311", "html_url": "https://github.com/huggingface/datasets/pull/5311", "diff_url": "https://github.com/huggingface/datasets/pull/5311.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5311.patch", "merged_at": "2022-12-06T15:42:04" }
## Description

As suggested by @lhoestq in #3888, we should add the `features` param to `IterableDataset.map` so that the features can be preserved (not turned into `None`, which is the default behavior) whenever the user passes them, to be consistent with `Dataset.map`. `Dataset.map` provides the `features` param so that the features are not inferred by default but specified by the user, and later validated by `ArrowWriter`.

This is already handled internally by the functions relying on `IterableDataset.map`, such as `rename_column`, `rename_columns`, and `remove_columns`, as described in #5287.

## Usage Example

```python
from datasets import load_dataset, Features

ds = load_dataset("rotten_tomatoes", split="validation", streaming=True)
print(ds.info.features)
ds = ds.map(
    lambda x: {"target": x["label"]},
    features=Features(
        {"target": ds.info.features["label"], "label": ds.info.features["label"], "text": ds.info.features["text"]}
    ),
)
print(ds.info.features)
```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5311/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5311/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5310
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5310/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5310/comments
https://api.github.com/repos/huggingface/datasets/issues/5310/events
https://github.com/huggingface/datasets/pull/5310
1,467,719,635
PR_kwDODunzps5D3rGw
5,310
Support xPath for Windows pathnames
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-11-29T09:20:47
2022-11-30T12:00:09
2022-11-30T11:57:16
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5310", "html_url": "https://github.com/huggingface/datasets/pull/5310", "diff_url": "https://github.com/huggingface/datasets/pull/5310.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5310.patch", "merged_at": "2022-11-30T11:57:16" }
This PR implements a string representation of `xPath`, which is valid for local paths (also windows) and remote URLs. Additionally, some `os.path` methods are fixed for remote URLs on Windows machines. Now, on Windows machines: ```python In [2]: str(xPath("C:\\dir\\file.txt")) Out[2]: 'C:\\dir\\file.txt' In [3]: str(xPath("http://domain.com/file.txt")) Out[3]: 'http://domain.com/file.txt' ```
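A rough sketch of the dispatch idea behind such a fix, assuming nothing about the actual `xPath` internals: a Windows drive letter parses as a one-character URL scheme, which is one way local paths can be told apart from remote URLs. The function name below is illustrative.

```python
from urllib.parse import urlparse

def is_local_path(path: str) -> bool:
    # "C:\\dir\\file.txt" parses with scheme "c" (a drive letter),
    # while real protocols ("http", "s3", ...) are longer than one char
    return len(urlparse(path).scheme) <= 1

print(is_local_path("C:\\dir\\file.txt"))           # True
print(is_local_path("http://domain.com/file.txt"))  # False
```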
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5310/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5310/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5309
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5309/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5309/comments
https://api.github.com/repos/huggingface/datasets/issues/5309/events
https://github.com/huggingface/datasets/pull/5309
1,466,758,987
PR_kwDODunzps5D0g1y
5,309
Close stream in `ArrowWriter.finalize` before inference error
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-11-28T16:59:39
2022-12-07T12:55:20
2022-12-07T12:52:15
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5309", "html_url": "https://github.com/huggingface/datasets/pull/5309", "diff_url": "https://github.com/huggingface/datasets/pull/5309.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5309.patch", "merged_at": "2022-12-07T12:52:15" }
Ensure the file stream is closed in `ArrowWriter.finalize` before raising the `SchemaInferenceError` to avoid the `PermissionError` on Windows in `incomplete_dir`'s `shutil.rmtree`.
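A minimal sketch of the ordering fix described above; the names (`stream`, `schema`) and the error message are illustrative, not the actual `ArrowWriter` code.

```python
class SchemaInferenceError(ValueError):
    """Raised when no schema could be inferred for the data."""

def finalize(stream, schema):
    if schema is None:
        stream.close()  # close first, so Windows' shutil.rmtree can delete the file later
        raise SchemaInferenceError("Please pass `features` or at least one example")
    # ... the normal finalization path would write the footer and close here
    stream.close()
```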
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5309/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5309/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5308
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5308/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5308/comments
https://api.github.com/repos/huggingface/datasets/issues/5308/events
https://github.com/huggingface/datasets/pull/5308
1,466,552,281
PR_kwDODunzps5Dz0Tv
5,308
Support `topdown` parameter in `xwalk`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "I like the `kwargs` approach, thanks!" ]
2022-11-28T14:42:41
2022-12-09T12:58:55
2022-12-09T12:55:59
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5308", "html_url": "https://github.com/huggingface/datasets/pull/5308", "diff_url": "https://github.com/huggingface/datasets/pull/5308.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5308.patch", "merged_at": "2022-12-09T12:55:59" }
Add support for the `topdown` parameter in `xwalk` when `fsspec>=2022.11.0` is installed.
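`topdown` presumably follows the `os.walk` convention, where `topdown=False` yields directories bottom-up; a quick standard-library illustration:

```python
import os

# with topdown=False, a directory is yielded only after all of its
# subdirectories, which is useful e.g. when processing trees bottom-up
for dirpath, dirnames, filenames in os.walk(".", topdown=False):
    print(dirpath)
```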
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5308/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5308/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5307
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5307/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5307/comments
https://api.github.com/repos/huggingface/datasets/issues/5307/events
https://github.com/huggingface/datasets/pull/5307
1,466,477,427
PR_kwDODunzps5Dzj8r
5,307
Use correct dataset type in `from_generator` docs
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-11-28T13:59:10
2022-11-28T15:30:37
2022-11-28T15:27:26
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5307", "html_url": "https://github.com/huggingface/datasets/pull/5307", "diff_url": "https://github.com/huggingface/datasets/pull/5307.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5307.patch", "merged_at": "2022-11-28T15:27:26" }
Use the correct dataset type in the `from_generator` docs (example with sharding).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5307/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5307/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5306
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5306/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5306/comments
https://api.github.com/repos/huggingface/datasets/issues/5306/events
https://github.com/huggingface/datasets/issues/5306
1,465,968,639
I_kwDODunzps5XYOf_
5,306
Can't use custom feature description when loading a dataset
{ "login": "clefourrier", "id": 22726840, "node_id": "MDQ6VXNlcjIyNzI2ODQw", "avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4", "gravatar_id": "", "url": "https://api.github.com/users/clefourrier", "html_url": "https://github.com/clefourrier", "followers_url": "https://api.github.com/users/clefourrier/followers", "following_url": "https://api.github.com/users/clefourrier/following{/other_user}", "gists_url": "https://api.github.com/users/clefourrier/gists{/gist_id}", "starred_url": "https://api.github.com/users/clefourrier/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/clefourrier/subscriptions", "organizations_url": "https://api.github.com/users/clefourrier/orgs", "repos_url": "https://api.github.com/users/clefourrier/repos", "events_url": "https://api.github.com/users/clefourrier/events{/privacy}", "received_events_url": "https://api.github.com/users/clefourrier/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Forgot to actually convert the feature dict to a Feature object. Closing." ]
2022-11-28T07:55:44
2022-11-28T08:11:45
2022-11-28T08:11:44
CONTRIBUTOR
null
null
null
### Describe the bug I have created a feature dictionary to describe my datasets' column types, to use when loading the dataset, following [the doc](https://huggingface.co/docs/datasets/main/en/about_dataset_features). It crashes at dataset load. ### Steps to reproduce the bug ```python # Creating features task_list = [f"motif_G{i}" for i in range(19, 53)] features = {t: Sequence(feature=Value(dtype="float64")) for t in task_list} for col_name in ["class_label"]: features[col_name] = Sequence(feature=Value(dtype="int64")) for col_name in ["num_nodes"]: features[col_name] = Value(dtype="int64") for col_name in ["num_bridges", "num_cycles", "avg_shortest_path_len"]: features[col_name] = Sequence(feature=Value(dtype="float64")) for col_name in ["edge_attr", "node_feat", "edge_index"]: features[col_name] = Sequence(feature=Sequence(feature=Value(dtype="int64"))) print(features) dataset = load_dataset(path=f"graphs-datasets/unbalanced-motifs-500K", split="train", features=features) ``` Last line will crash and say 'TypeError: argument of type 'Sequence' is not iterable'. Full stack: ``` Traceback (most recent call last): File "pretrain_tokengt.py", line 131, in <module> main(output_folder = "../workspace/pretraining", File "pretrain_tokengt.py", line 52, in main dataset = load_dataset(path=f"graphs-datasets/{dataset_name}", split="train", features=features) File "huggingface_env/lib/python3.8/site-packages/datasets/load.py", line 1718, in load_dataset builder_instance = load_dataset_builder( File "huggingface_env/lib/python3.8/site-packages/datasets/load.py", line 1514, in load_dataset_builder builder_instance: DatasetBuilder = builder_cls( File "huggingface_env/lib/python3.8/site-packages/datasets/builder.py", line 321, in __init__ info.update(self._info()) File "huggingface_env/lib/python3.8/site-packages/datasets/packaged_modules/json/json.py", line 62, in _info return datasets.DatasetInfo(features=self.config.features) File "<string>", line 20, in __init__ File "huggingface_env/lib/python3.8/site-packages/datasets/info.py", line 155, in __post_init__ self.features = Features.from_dict(self.features) File "huggingface_env/lib/python3.8/site-packages/datasets/features/features.py", line 1599, in from_dict obj = generate_from_dict(dic) File "huggingface_env/lib/python3.8/site-packages/datasets/features/features.py", line 1282, in generate_from_dict return {key: generate_from_dict(value) for key, value in obj.items()} File "huggingface_env/lib/python3.8/site-packages/datasets/features/features.py", line 1282, in <dictcomp> return {key: generate_from_dict(value) for key, value in obj.items()} File "huggingface_env/lib/python3.8/site-packages/datasets/features/features.py", line 1281, in generate_from_dict if "_type" not in obj or isinstance(obj["_type"], dict): TypeError: argument of type 'Sequence' is not iterable ``` ### Expected behavior For it not to crash. ### Environment info - `datasets` version: 2.7.1 - Platform: Linux-5.14.0-1054-oem-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 8.0.0 - Pandas version: 1.4.3
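Per the author's resolution noted in the comments, a sketch of the fix is simply wrapping the dict in a `Features` object before passing it to `load_dataset` (column list abbreviated here):

```python
from datasets import Features, Sequence, Value, load_dataset

features = Features({
    "num_nodes": Value(dtype="int64"),
    "class_label": Sequence(feature=Value(dtype="int64")),
    # ... the remaining columns, exactly as built in the snippet above
})
dataset = load_dataset(
    "graphs-datasets/unbalanced-motifs-500K", split="train", features=features
)
```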
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5306/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5306/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5305
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5305/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5305/comments
https://api.github.com/repos/huggingface/datasets/issues/5305/events
https://github.com/huggingface/datasets/issues/5305
1,465,627,826
I_kwDODunzps5XW7Sy
5,305
Dataset joelito/mc4_legal does not work with multiple files
{ "login": "JoelNiklaus", "id": 3775944, "node_id": "MDQ6VXNlcjM3NzU5NDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/3775944?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JoelNiklaus", "html_url": "https://github.com/JoelNiklaus", "followers_url": "https://api.github.com/users/JoelNiklaus/followers", "following_url": "https://api.github.com/users/JoelNiklaus/following{/other_user}", "gists_url": "https://api.github.com/users/JoelNiklaus/gists{/gist_id}", "starred_url": "https://api.github.com/users/JoelNiklaus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JoelNiklaus/subscriptions", "organizations_url": "https://api.github.com/users/JoelNiklaus/orgs", "repos_url": "https://api.github.com/users/JoelNiklaus/repos", "events_url": "https://api.github.com/users/JoelNiklaus/events{/privacy}", "received_events_url": "https://api.github.com/users/JoelNiklaus/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting @JoelNiklaus.\r\n\r\nPlease note that since we moved all dataset loading scripts to the Hub, the issues and pull requests relative to specific datasets are directly handled on the Hub, in their Community tab. I'm transferring this issue there: https://huggingface.co/datasets/joelito/mc4_legal/discussions\r\n\r\nI am also having a look at the bug in your script.", "Issue transferred to: https://huggingface.co/datasets/joelito/mc4_legal/discussions/1" ]
2022-11-28T00:16:16
2022-11-28T07:22:42
2022-11-28T07:22:42
CONTRIBUTOR
null
null
null
### Describe the bug The dataset https://huggingface.co/datasets/joelito/mc4_legal works for languages like bg with a single data file, but not for languages with multiple files like de: it shows zero rows for the de dataset. ``` joelniklaus@Joels-MacBook-Pro ~/N/P/C/L/p/m/mc4_legal (main) [1]> python test_mc4_legal.py (debug) Found cached dataset mc4_legal (/Users/joelniklaus/.cache/huggingface/datasets/mc4_legal/de/0.0.0/fb6952a097180f8c936e2a7605525ff670354a344fc1a2c70107684d3f7cb02f) Dataset({ features: ['index', 'url', 'timestamp', 'matches', 'text'], num_rows: 0 }) joelniklaus@Joels-MacBook-Pro ~/N/P/C/L/p/m/mc4_legal (main)> python test_mc4_legal.py (debug) Downloading and preparing dataset mc4_legal/bg to /Users/joelniklaus/.cache/huggingface/datasets/mc4_legal/bg/0.0.0/fb6952a097180f8c936e2a7605525ff670354a344fc1a2c70107684d3f7cb02f... Downloading data files: 100%|██████████| 1/1 [00:00<00:00, 1240.55it/s] Dataset mc4_legal downloaded and prepared to /Users/joelniklaus/.cache/huggingface/datasets/mc4_legal/bg/0.0.0/fb6952a097180f8c936e2a7605525ff670354a344fc1a2c70107684d3f7cb02f. Subsequent calls will reuse this data. Dataset({ features: ['index', 'url', 'timestamp', 'matches', 'text'], num_rows: 204 }) ``` ### Steps to reproduce the bug ```python import datasets from datasets import load_dataset, get_dataset_config_names language = "de" test = load_dataset("joelito/mc4_legal", language, split='train') ``` ### Expected behavior It should display the correct number of rows for the de dataset, which should be a large number (thousands or more).
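As a quick sanity check (hedged; per the comments, the root cause was later diagnosed in the dataset's own loading script on the Hub), one can compare a single-file and a multi-file config directly:

```python
from datasets import get_dataset_config_names, load_dataset

print(get_dataset_config_names("joelito/mc4_legal"))
for language in ("bg", "de"):
    ds = load_dataset("joelito/mc4_legal", language, split="train")
    print(language, ds.num_rows)  # "de" shows 0 rows, reproducing the bug
```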
### Environment info Package Version ------------------------ -------------- absl-py 1.3.0 aiohttp 3.8.1 aiosignal 1.2.0 astunparse 1.6.3 async-timeout 4.0.2 attrs 22.1.0 beautifulsoup4 4.11.1 blinker 1.4 blis 0.7.8 Bottleneck 1.3.4 brotlipy 0.7.0 cachetools 5.2.0 catalogue 2.0.7 certifi 2022.5.18.1 cffi 1.15.1 chardet 4.0.0 charset-normalizer 2.1.0 click 8.0.4 conllu 4.5.2 cryptography 38.0.1 cymem 2.0.6 datasets 2.6.1 dill 0.3.5.1 docker-pycreds 0.4.0 fasttext 0.9.2 fasttext-langdetect 1.0.3 filelock 3.0.12 flatbuffers 20210226132247 frozenlist 1.3.0 fsspec 2022.5.0 gast 0.4.0 gcloud 0.18.3 gitdb 4.0.9 GitPython 3.1.27 google-auth 2.9.0 google-auth-oauthlib 0.4.6 google-pasta 0.2.0 googleapis-common-protos 1.57.0 grpcio 1.47.0 h5py 3.7.0 httplib2 0.21.0 huggingface-hub 0.8.1 idna 3.4 importlib-metadata 4.12.0 Jinja2 3.1.2 joblib 1.0.1 keras 2.9.0 Keras-Preprocessing 1.1.2 langcodes 3.3.0 lxml 4.9.1 Markdown 3.3.7 MarkupSafe 2.1.1 mkl-fft 1.3.1 mkl-random 1.2.2 mkl-service 2.4.0 multidict 6.0.2 multiprocess 0.70.13 murmurhash 1.0.7 numexpr 2.8.1 numpy 1.22.3 oauth2client 4.1.3 oauthlib 3.2.1 opt-einsum 3.3.0 packaging 21.3 pandas 1.4.2 pathtools 0.1.2 pathy 0.6.1 pip 21.1.2 preshed 3.0.6 promise 2.3 protobuf 4.21.9 psutil 5.9.1 pyarrow 8.0.0 pyasn1 0.4.8 pyasn1-modules 0.2.8 pybind11 2.9.2 pycountry 22.3.5 pycparser 2.21 pydantic 1.8.2 PyJWT 2.4.0 pylzma 0.5.0 pyOpenSSL 22.0.0 pyparsing 3.0.4 PySocks 1.7.1 python-dateutil 2.8.2 pytz 2021.3 PyYAML 6.0 regex 2021.4.4 requests 2.28.1 requests-oauthlib 1.3.1 responses 0.18.0 rsa 4.8 sacremoses 0.0.45 scikit-learn 1.1.1 scipy 1.8.1 sentencepiece 0.1.96 sentry-sdk 1.6.0 setproctitle 1.2.3 setuptools 65.5.0 shortuuid 1.0.9 six 1.16.0 smart-open 5.2.1 smmap 5.0.0 soupsieve 2.3.2.post1 spacy 3.3.1 spacy-legacy 3.0.9 spacy-loggers 1.0.2 srsly 2.4.3 tabulate 0.8.9 tensorboard 2.9.1 tensorboard-data-server 0.6.1 tensorboard-plugin-wit 1.8.1 tensorflow 2.9.1 tensorflow-estimator 2.9.0 termcolor 2.1.0 thinc 8.0.17 threadpoolctl 3.1.0 tokenizers 0.12.1 torch 1.13.0 tqdm 4.64.0 transformers 4.20.1 typer 0.4.1 typing-extensions 4.3.0 Unidecode 1.3.6 urllib3 1.26.12 wandb 0.12.20 wasabi 0.9.1 web-anno-tsv 0.0.1 Werkzeug 2.1.2 wget 3.2 wheel 0.35.1 wrapt 1.14.1 xxhash 3.0.0 yarl 1.8.1 zipp 3.8.0 Python 3.8.10
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5305/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5305/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5304
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5304/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5304/comments
https://api.github.com/repos/huggingface/datasets/issues/5304/events
https://github.com/huggingface/datasets/issues/5304
1,465,110,367
I_kwDODunzps5XU89f
5,304
timit_asr doesn't load the test split.
{ "login": "seyong92", "id": 17842800, "node_id": "MDQ6VXNlcjE3ODQyODAw", "avatar_url": "https://avatars.githubusercontent.com/u/17842800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/seyong92", "html_url": "https://github.com/seyong92", "followers_url": "https://api.github.com/users/seyong92/followers", "following_url": "https://api.github.com/users/seyong92/following{/other_user}", "gists_url": "https://api.github.com/users/seyong92/gists{/gist_id}", "starred_url": "https://api.github.com/users/seyong92/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/seyong92/subscriptions", "organizations_url": "https://api.github.com/users/seyong92/orgs", "repos_url": "https://api.github.com/users/seyong92/repos", "events_url": "https://api.github.com/users/seyong92/events{/privacy}", "received_events_url": "https://api.github.com/users/seyong92/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The [timit_asr.py](https://huggingface.co/datasets/timit_asr/blob/main/timit_asr.py) script iterates over the WAV files per split directory using this:\r\n```python\r\nwav_paths = sorted(Path(data_dir).glob(f\"**/{split}/**/*.wav\"))\r\nwav_paths = wav_paths if wav_paths else sorted(Path(data_dir).glob(f\"**/{split.upper()}/**/*.WAV\"))\r\n```\r\n\r\nCan you check that there is a directory named \"test\" somewhere in your timit data directory ?" ]
2022-11-26T10:18:22
2023-02-10T16:33:21
2023-02-10T16:33:21
NONE
null
null
null
### Describe the bug When I use the function ```timit = load_dataset('timit_asr', data_dir=data_dir)```, it only loads the train split, not the test split. I tried changing the directory and file names from lower case to upper case for the test split, but it does not work at all. ```python DatasetDict({ train: Dataset({ features: ['file', 'audio', 'text', 'phonetic_detail', 'word_detail', 'dialect_region', 'sentence_type', 'speaker_id', 'id'], num_rows: 4620 }) test: Dataset({ features: ['file', 'audio', 'text', 'phonetic_detail', 'word_detail', 'dialect_region', 'sentence_type', 'speaker_id', 'id'], num_rows: 0 }) }) ``` The directory structure of both splits is the same (DIALECT_REGION / SPEAKER_CODE / DATA_FILES). ### Steps to reproduce the bug 1. just use ```timit = load_dataset('timit_asr', data_dir=data_dir)``` ### Expected behavior ```python DatasetDict({ train: Dataset({ features: ['file', 'audio', 'text', 'phonetic_detail', 'word_detail', 'dialect_region', 'sentence_type', 'speaker_id', 'id'], num_rows: 4620 }) test: Dataset({ features: ['file', 'audio', 'text', 'phonetic_detail', 'word_detail', 'dialect_region', 'sentence_type', 'speaker_id', 'id'], num_rows: 1680 }) }) ``` ### Environment info - ubuntu 20.04 - python 3.9.13 - datasets 2.7.1
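Building on the globbing logic quoted from `timit_asr.py` in the comments above, a quick way to check whether the loader can see the test split on disk (the `data_dir` value below is a placeholder):

```python
from pathlib import Path

data_dir = "path/to/timit"  # same directory passed to load_dataset
wav_paths = sorted(Path(data_dir).glob("**/test/**/*.wav"))
wav_paths = wav_paths or sorted(Path(data_dir).glob("**/TEST/**/*.WAV"))
print(len(wav_paths))  # 0 means the directory/file casing matches neither pattern
```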
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5304/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5304/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5303
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5303/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5303/comments
https://api.github.com/repos/huggingface/datasets/issues/5303/events
https://github.com/huggingface/datasets/pull/5303
1,464,837,251
PR_kwDODunzps5DuVTa
5,303
Skip dataset verifications by default
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "100% agree that the checksum verification is overkill and not super useful. But I think this PR would also disable the check on num_examples no ?\r\n \r\nAs a user I would like to know if the dataset I'm loading changed significantly.\r\nAnd I also think it can be useful to make sure the metadata are up to date.\r\n\r\nWhat do you think ?\r\n\r\nWe could have a default `ignore_verifications=\"ignore_checksums\"`", "> We could have a default `ignore_verifications=\"ignore_checksums\"`\r\n\r\nAccepting multiple types (booleans and strings) at the same time is not the best design. Maybe we could define an enum for this parameter?", "Yes an enum sounds good !", "so we can have three verification levels, - smth like \"ignore_all\" (to skip both checksums and all other info like num_examples verification), \"ignore_checksums\" (to skip only checksums verification), and \"verify_all\" (to perform all verification)?\r\nand deprecate `ignore_verifications` param.\r\n\r\n@mariosasko if you're not going to work on this PR in the coming days, I can take over it if you want (this PR will help me with [this issue](https://github.com/huggingface/datasets/issues/5315), not super urgent though).", "Okay, I propose deprecating `ignore_verifications` in favor of `verification_mode` (`load_dataset` already has `download_mode`; some other projects use this name for verification control). `verification_mode` would accept the following enum (or strings in the same manner as `download_mode` does):\r\n\r\n```python\r\nclass VerificationMode(enum.Enum):\r\n FULL = \"full\" # runs all verification checks \r\n BASIC = \"basic\" # default, runs only the cheap ones (skips the checksum check)\r\n NONE = \"none\" # skips all the checks\r\n```\r\n\r\nWDTY?", "(copy paste from my message on slack)\r\n\r\nWhat do you think of a config variable in config.py to switch from one verification mode to another ? This way we don’t deprecate anything\r\n\r\nMany users are familiar with ignore_verifications=True, it might be overkill to deprecate it", "@lhoestq So we have \"basic\" verification mode in `config.py` and continue to have `False` as a default \r\nvalue for `ignore_verifications`? That way running all verifications including checksums would not be possible without switching the config var, right? \r\n\r\nI like having a `VerificationMode` enum because it's aligned with `DownloadMode` and sounds more natural to me (`ignore_verifications` feels a bit semantically reverted but this is probably just my feeling) and it's flexible (no need to worry about `config.py`, I'm not sure that users even know it exists, wdyt?).\r\n\r\nThe usage point seems also valid to me, but cases when users are stuck with NonMatchingX errors also happen from time to time and to figure out what's wrong is non-trivial here. \r\n\r\nAs a note aside - I suggest to add instructions to the NonMatchingX error message (how to use `ignore_verifications` / `verification_mode`), this would save users who don't know about this param a lot of time.", "Ok I see. I'm fine with the new parameter then (even though I had a small pref for the config variable) :)", "I like the idea of an enum and the `verification_mode` parameter. \r\n\r\nIn relation with the config parameter, we could additionally add a `DEFAULT_VERIFICATION_MODE`, maybe only if users require it. 
Note that until now there wasn't any config parameter for a default `ignore_verifications` value: I guess people are explicitly passing `ignore_verifications=True`...\r\n\r\nAs a note aside, I like the suggestion by @polinaeterna: we could give actionable messages when verifying checksums. This could be done in other PR.", "<details>\n<summary>Show benchmarks</summary>\n\n[CML benchmark tables omitted]\n\n</details>\n\n![](https://cml.dev/watermark.png#8c4a9cb95f8742a2850f11d59abbef71d6c1f60c \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\n[CML benchmark tables omitted]\n\n</details>\n\n![](https://cml.dev/watermark.png#4b0713ddf2e2e7129d9ccda791d265684c96675c \"CML watermark\")\n", "This is ready for review. \r\n\r\nIf `verification_mode` is None, it defaults to `VerificationMode.BASIC` instead of `VerificationMode.NONE`, so maybe we should find a better name for the latter to avoid confusion.\r\n\r\nPS: `ignore_verifications` is still present in the `test`/`run_beam` commands for simplicity. Let me know if you think these commands should support all three modes.", "> I would also prefer to change the name for the NONE verification mode, but don't have really good ideas in mind. maybe smth like SKIP_ALL ?\r\n\r\nI decided to go with the following names:\r\n* `no_checks` (previously `none`)\r\n* `basic_checks` (previously `basic`)\r\n* `all_checks` (previously `full`)\r\n", "<details>\n<summary>Show benchmarks</summary>\n\n[CML benchmark tables omitted]\n\n</details>\n\n![](https://cml.dev/watermark.png#aeb637daab938d51b8b15ad4d175d06817e99512 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\n[CML benchmark tables omitted; comment truncated in source]
0.172121 / 0.680424 (-0.508303) | 0.028791 / 0.534201 (-0.505410) | 0.440290 / 0.579283 (-0.138993) | 0.437359 / 0.434364 (0.002995) | 0.543603 / 0.540337 (0.003265) | 0.643241 / 1.386936 (-0.743695) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007572 / 0.011353 (-0.003781) | 0.005207 / 0.011008 (-0.005801) | 0.074427 / 0.038508 (0.035919) | 0.033384 / 0.023109 (0.010275) | 0.334538 / 0.275898 (0.058640) | 0.371556 / 0.323480 (0.048076) | 0.006453 / 0.007986 (-0.001532) | 0.004010 / 0.004328 (-0.000319) | 0.073488 / 0.004250 (0.069238) | 0.048082 / 0.037052 (0.011030) | 0.337325 / 0.258489 (0.078836) | 0.395143 / 0.293841 (0.101302) | 0.036714 / 0.128546 (-0.091832) | 0.012089 / 0.075646 (-0.063557) | 0.086008 / 0.419271 (-0.333263) | 0.049277 / 0.043533 (0.005744) | 0.333848 / 0.255139 (0.078709) | 0.354003 / 0.283200 (0.070803) | 0.105012 / 0.141683 (-0.036671) | 1.450769 / 1.452155 (-0.001386) | 1.554538 / 1.492716 (0.061821) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208407 / 0.018006 (0.190400) | 0.438778 / 0.000490 (0.438288) | 0.000399 / 0.000200 (0.000199) | 0.000059 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030180 / 0.037411 (-0.007232) | 0.115432 / 0.014526 (0.100906) | 0.126106 / 0.176557 (-0.050451) | 0.167508 / 0.737135 (-0.569627) | 0.130566 / 0.296338 (-0.165772) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / 
old (diff) | 0.421408 / 0.215209 (0.206198) | 4.208492 / 2.077655 (2.130838) | 2.024177 / 1.504120 (0.520057) | 1.834356 / 1.541195 (0.293161) | 1.923234 / 1.468490 (0.454744) | 0.699548 / 4.584777 (-3.885229) | 3.933775 / 3.745712 (0.188063) | 2.124526 / 5.269862 (-3.145336) | 1.360934 / 4.565676 (-3.204742) | 0.086568 / 0.424275 (-0.337707) | 0.012351 / 0.007607 (0.004744) | 0.517431 / 0.226044 (0.291387) | 5.175428 / 2.268929 (2.906499) | 2.471031 / 55.444624 (-52.973593) | 2.131529 / 6.876477 (-4.744948) | 2.202512 / 2.142072 (0.060440) | 0.849364 / 4.805227 (-3.955863) | 0.171505 / 6.500664 (-6.329159) | 0.065864 / 0.075469 (-0.009605) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.270054 / 1.841788 (-0.571734) | 15.254502 / 8.074308 (7.180194) | 13.874969 / 10.191392 (3.683577) | 0.144131 / 0.680424 (-0.536293) | 0.017743 / 0.534201 (-0.516458) | 0.421990 / 0.579283 (-0.157293) | 0.423924 / 0.434364 (-0.010439) | 0.522560 / 0.540337 (-0.017778) | 0.626159 / 1.386936 (-0.760777) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#05bd726a575a3c1c337022424fa7d226f1a2ebee \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008643 / 0.011353 (-0.002710) | 0.004479 / 0.011008 (-0.006529) | 0.102372 / 0.038508 (0.063864) | 0.029703 / 0.023109 (0.006594) | 0.301479 / 0.275898 (0.025581) | 0.370970 / 0.323480 (0.047490) | 0.007044 / 0.007986 (-0.000942) | 0.004868 / 0.004328 (0.000540) | 0.079568 / 0.004250 (0.075318) | 0.035344 / 0.037052 (-0.001708) | 0.308091 / 0.258489 (0.049602) | 0.353812 / 0.293841 (0.059971) | 0.033406 / 0.128546 (-0.095140) | 0.011476 / 0.075646 (-0.064170) | 0.324343 / 0.419271 (-0.094929) | 0.040293 / 0.043533 (-0.003240) | 0.300007 / 0.255139 (0.044868) | 0.334410 / 0.283200 (0.051210) | 0.086553 / 0.141683 (-0.055130) | 1.463814 / 1.452155 (0.011659) | 1.501580 / 1.492716 (0.008864) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | 
get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.198032 / 0.018006 (0.180025) | 0.409970 / 0.000490 (0.409480) | 0.001075 / 0.000200 (0.000875) | 0.000076 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022941 / 0.037411 (-0.014471) | 0.097320 / 0.014526 (0.082794) | 0.106445 / 0.176557 (-0.070111) | 0.139073 / 0.737135 (-0.598063) | 0.108408 / 0.296338 (-0.187930) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.419315 / 0.215209 (0.204106) | 4.199273 / 2.077655 (2.121618) | 1.877689 / 1.504120 (0.373569) | 1.670442 / 1.541195 (0.129247) | 1.735034 / 1.468490 (0.266544) | 0.694691 / 4.584777 (-3.890086) | 3.323644 / 3.745712 (-0.422069) | 2.884349 / 5.269862 (-2.385513) | 1.518882 / 4.565676 (-3.046794) | 0.082390 / 0.424275 (-0.341886) | 0.012884 / 0.007607 (0.005277) | 0.525103 / 0.226044 (0.299058) | 5.277297 / 2.268929 (3.008369) | 2.328639 / 55.444624 (-53.115985) | 1.983210 / 6.876477 (-4.893267) | 2.037985 / 2.142072 (-0.104088) | 0.809520 / 4.805227 (-3.995707) | 0.150150 / 6.500664 (-6.350514) | 0.065578 / 0.075469 (-0.009891) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.221971 / 1.841788 (-0.619817) | 13.692361 / 8.074308 (5.618052) | 13.874582 / 10.191392 (3.683190) | 0.138182 / 0.680424 (-0.542242) | 0.028618 / 0.534201 (-0.505583) | 0.395104 / 0.579283 (-0.184179) | 0.397169 / 0.434364 (-0.037195) | 0.457509 / 0.540337 (-0.082829) | 0.537275 / 1.386936 (-0.849661) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | 
read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006835 / 0.011353 (-0.004518) | 0.004585 / 0.011008 (-0.006423) | 0.076877 / 0.038508 (0.038369) | 0.027305 / 0.023109 (0.004196) | 0.349085 / 0.275898 (0.073187) | 0.401416 / 0.323480 (0.077936) | 0.004912 / 0.007986 (-0.003074) | 0.003315 / 0.004328 (-0.001014) | 0.075676 / 0.004250 (0.071425) | 0.038960 / 0.037052 (0.001907) | 0.346196 / 0.258489 (0.087707) | 0.403185 / 0.293841 (0.109344) | 0.032054 / 0.128546 (-0.096493) | 0.011742 / 0.075646 (-0.063905) | 0.086631 / 0.419271 (-0.332640) | 0.041633 / 0.043533 (-0.001900) | 0.343519 / 0.255139 (0.088380) | 0.385413 / 0.283200 (0.102213) | 0.091430 / 0.141683 (-0.050253) | 1.478886 / 1.452155 (0.026731) | 1.546873 / 1.492716 (0.054156) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.167882 / 0.018006 (0.149876) | 0.396464 / 0.000490 (0.395974) | 0.003629 / 0.000200 (0.003429) | 0.000085 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024829 / 0.037411 (-0.012583) | 0.099607 / 0.014526 (0.085081) | 0.106187 / 0.176557 (-0.070370) | 0.142379 / 0.737135 (-0.594756) | 0.109307 / 0.296338 (-0.187032) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.442276 / 0.215209 (0.227067) | 4.427099 / 2.077655 (2.349444) | 2.093407 / 1.504120 (0.589287) | 1.880973 / 1.541195 (0.339778) | 1.915592 / 1.468490 (0.447102) | 0.708196 / 4.584777 (-3.876581) | 3.417649 / 3.745712 (-0.328063) | 2.859953 / 5.269862 (-2.409909) | 1.528380 / 4.565676 (-3.037297) | 0.084054 / 0.424275 (-0.340221) | 0.012585 / 0.007607 (0.004978) | 0.537614 / 0.226044 (0.311569) | 5.409915 / 2.268929 (3.140987) | 2.555853 / 55.444624 (-52.888771) | 2.195075 / 6.876477 (-4.681402) | 2.232775 / 2.142072 (0.090703) | 0.814994 / 4.805227 (-3.990233) | 0.152882 / 6.500664 (-6.347782) | 0.067467 / 0.075469 (-0.008002) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.306007 / 1.841788 (-0.535780) | 13.923981 / 8.074308 (5.849673) | 13.385881 / 10.191392 (3.194489) | 0.150712 / 0.680424 (-0.529712) | 0.016731 / 0.534201 (-0.517470) | 0.376557 / 0.579283 
(-0.202726) | 0.379396 / 0.434364 (-0.054968) | 0.456251 / 0.540337 (-0.084087) | 0.545731 / 1.386936 (-0.841205) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#cc637d107ef3e3b9948691379312a8099b6476aa \"CML watermark\")\n" ]
2022-11-25T18:39:09
2023-02-13T16:50:42
2023-02-13T16:43:47
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5303", "html_url": "https://github.com/huggingface/datasets/pull/5303", "diff_url": "https://github.com/huggingface/datasets/pull/5303.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5303.patch", "merged_at": "2023-02-13T16:43:47" }
Skip the dataset verifications (split and checksum verifications, duplicate keys check) by default unless a dataset is being tested (`datasets-cli test/run_beam`). The main goal is to avoid running the checksum check in the default case due to how expensive it can be for large datasets. PS: Maybe we should deprecate `ignore_verifications`, which is `True` now by default, and give it a different name?
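For illustration, a minimal sketch of how this change might look from the user side. `ignore_verifications` is the parameter named in the description above; its exact default and future name are assumptions here, not a statement of the merged behavior.

```python
from datasets import load_dataset

# After this change, verification (checksums, split sizes, duplicate
# keys) is skipped by default, avoiding the expensive checksum pass.
ds = load_dataset("squad")

# Hypothetical opt-in: explicitly re-enable the full checks.
ds_checked = load_dataset("squad", ignore_verifications=False)
```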
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5303/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5303/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5302
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5302/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5302/comments
https://api.github.com/repos/huggingface/datasets/issues/5302/events
https://github.com/huggingface/datasets/pull/5302
1,464,778,901
PR_kwDODunzps5DuJJp
5,302
Improve `use_auth_token` docstring and deprecate `use_auth_token` in `download_and_prepare`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-11-25T17:09:21
2022-12-09T14:20:15
2022-12-09T14:17:20
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5302", "html_url": "https://github.com/huggingface/datasets/pull/5302", "diff_url": "https://github.com/huggingface/datasets/pull/5302.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5302.patch", "merged_at": "2022-12-09T14:17:20" }
Clarify in the docstrings what happens when `use_auth_token` is `None` and deprecate the `use_auth_token` param in `download_and_prepare`.
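As context, a small usage sketch of the parameter whose docstring is being clarified. The repo name and token value are placeholders; the precise semantics of `use_auth_token=None` are exactly what this PR documents, so they are not restated here.

```python
from datasets import load_dataset

# Use the token saved by `huggingface-cli login`.
ds = load_dataset("username/private-dataset", use_auth_token=True)

# Or pass a token string directly (placeholder value).
ds = load_dataset("username/private-dataset", use_auth_token="hf_xxx")
```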
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5302/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5302/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5301
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5301/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5301/comments
https://api.github.com/repos/huggingface/datasets/issues/5301/events
https://github.com/huggingface/datasets/pull/5301
1,464,749,156
PR_kwDODunzps5DuCzR
5,301
Return a split Dataset in load_dataset
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5301). All of your documentation changes will be reflected on that endpoint.", "Just noticed that now we have to deal with indexed & split datasets. The remaining tests are failing because one should be able to get an indexed dataset when accessing the split of a dataset made of indexed splits (right now the index is just trashed)" ]
2022-11-25T16:35:54
2023-02-21T13:13:13
2023-02-21T13:13:13
MEMBER
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5301", "html_url": "https://github.com/huggingface/datasets/pull/5301", "diff_url": "https://github.com/huggingface/datasets/pull/5301.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5301.patch", "merged_at": null }
...instead of a DatasetDict.

```python
# now supported
ds = load_dataset("squad")
ds[0]
for example in ds:
    pass

# still works
ds["train"]
ds["validation"]

# new
ds.splits  # Dict[str, Dataset] | None

# soon to be supported (not in this PR)
ds = load_dataset("dataset_with_no_splits")
ds[0]
for example in ds:
    pass
```

I implemented `Dataset.__getitem__` and `IterableDataset.__getitem__` to be able to get a split from a dataset. The splits are defined by the `ds.info.splits` dictionary. Therefore, a dataset is a table that optionally has some splits defined in the dataset info, and a split dataset is the concatenation of all its splits.

I made as few breaking changes as possible. Notable breaking changes:
- `load_dataset("potato").keys() / .items() / .values()` don't work anymore, since we don't return a dict
- same for `for split_name in load_dataset("potato")`, since we now iterate over the examples
- ..

TODO:
- [x] Update push_to_hub
- [x] Update save_to_disk/load_from_disk
- [ ] check for other breaking changes
- [ ] fix existing tests
- [ ] add new tests
- [ ] docs

This is related to https://github.com/huggingface/datasets/issues/5189, to extend `load_dataset` to return datasets without splits.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5301/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5301/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5300
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5300/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5300/comments
https://api.github.com/repos/huggingface/datasets/issues/5300/events
https://github.com/huggingface/datasets/pull/5300
1,464,697,136
PR_kwDODunzps5Dt3uK
5,300
Use same `num_proc` for dataset download and generation
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "I noticed this bug the other day and was going to look into it! \"Where are these processes coming from?\" ;-)" ]
2022-11-25T15:37:42
2022-12-07T12:55:39
2022-12-07T12:52:51
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5300", "html_url": "https://github.com/huggingface/datasets/pull/5300", "diff_url": "https://github.com/huggingface/datasets/pull/5300.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5300.patch", "merged_at": "2022-12-07T12:52:50" }
Use the same `num_proc` value for data download and generation. Additionally, do not set `num_proc` to 16 in `DownloadManager` by default (`num_proc` now has to be specified explicitly).
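As an illustration (dataset name and worker count are arbitrary), a single `num_proc` value now drives both phases:

```python
from datasets import load_dataset

# Up to 8 worker processes are used both to download the data files
# and to generate the examples from them.
ds = load_dataset("oscar", "unshuffled_deduplicated_en", num_proc=8)
```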
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5300/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5300/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5299
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5299/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5299/comments
https://api.github.com/repos/huggingface/datasets/issues/5299/events
https://github.com/huggingface/datasets/pull/5299
1,464,695,091
PR_kwDODunzps5Dt3Sk
5,299
Fix xopen for Windows pathnames
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-11-25T15:35:28
2022-11-29T08:23:58
2022-11-29T08:21:24
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5299", "html_url": "https://github.com/huggingface/datasets/pull/5299", "diff_url": "https://github.com/huggingface/datasets/pull/5299.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5299.patch", "merged_at": "2022-11-29T08:21:24" }
This PR fixes a bug in the `xopen` function for Windows pathnames. Fix #5298.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5299/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5299/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5298
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5298/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5298/comments
https://api.github.com/repos/huggingface/datasets/issues/5298/events
https://github.com/huggingface/datasets/issues/5298
1,464,681,871
I_kwDODunzps5XTUWP
5,298
Bug in xopen with Windows pathnames
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
2022-11-25T15:21:32
2022-11-29T08:21:25
2022-11-29T08:21:25
MEMBER
null
null
null
Currently, the `xopen` function has a bug with local Windows pathnames. From its implementation:

```python
def xopen(file: str, mode="r", *args, **kwargs):
    file = _as_posix(PurePath(file))
    main_hop, *rest_hops = file.split("::")
    if is_local_path(main_hop):
        return open(file, mode, *args, **kwargs)
```

On a Windows machine, if we pass the argument:

```python
xopen("C:\\Users\\USERNAME\\filename.txt")
```

it returns:

```python
open("C:/Users/USERNAME/filename.txt")
```
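One possible shape of the fix, as a self-contained sketch (the simplified `is_local_path` below is an assumption, and the merged PR may differ): keep the POSIX conversion only for splitting chained `::` hops, and hand the caller's original path to `open` when the target is local.

```python
from pathlib import PurePath


def is_local_path(path: str) -> bool:
    # Simplified check for this sketch: no URL scheme means local.
    return "://" not in path


def xopen(file: str, mode="r", *args, **kwargs):
    # The POSIX form is only used to split chained "::" hops.
    main_hop, *rest_hops = PurePath(file).as_posix().split("::")
    if is_local_path(main_hop):
        # Open the original, OS-native path so that
        # "C:\\Users\\USERNAME\\filename.txt" keeps working on Windows.
        return open(file, mode, *args, **kwargs)
    raise NotImplementedError("remote/chained handling omitted from this sketch")
```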
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5298/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5298/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5297
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5297/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5297/comments
https://api.github.com/repos/huggingface/datasets/issues/5297/events
https://github.com/huggingface/datasets/pull/5297
1,464,554,491
PR_kwDODunzps5DtZjg
5,297
Fix xjoin for Windows pathnames
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-11-25T13:30:17
2022-11-29T08:07:39
2022-11-29T08:05:12
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5297", "html_url": "https://github.com/huggingface/datasets/pull/5297", "diff_url": "https://github.com/huggingface/datasets/pull/5297.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5297.patch", "merged_at": "2022-11-29T08:05:12" }
This PR fixes a bug in the `xjoin` function with Windows pathnames. Fix #5296.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5297/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5297/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5296
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5296/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5296/comments
https://api.github.com/repos/huggingface/datasets/issues/5296/events
https://github.com/huggingface/datasets/issues/5296
1,464,553,580
I_kwDODunzps5XS1Bs
5,296
Bug in xjoin with Windows pathnames
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
2022-11-25T13:29:33
2022-11-29T08:05:13
2022-11-29T08:05:13
MEMBER
null
null
null
Currently, the `xjoin` function has a bug with local Windows pathnames: instead of returning the OS-dependent joined pathname, it always returns it in POSIX format.

```python
from datasets.download.streaming_download_manager import xjoin

path = xjoin("C:\\Users\\USERNAME", "filename.txt")
```

The joined path should be:

```python
"C:\\Users\\USERNAME\\filename.txt"
```

However, it is:

```python
"C:/Users/USERNAME/filename.txt"
```
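A minimal sketch of a possible fix (again with a simplified `is_local_path`; the merged PR may take another route): join local paths with `os.path.join` so the separator matches the OS, and keep POSIX joining for URLs.

```python
import os
import posixpath


def is_local_path(path: str) -> bool:
    # Simplified check for this sketch: no URL scheme means local.
    return "://" not in path


def xjoin(path: str, *paths: str) -> str:
    if is_local_path(path):
        # OS-dependent join: backslashes on Windows, slashes elsewhere.
        return os.path.join(path, *paths)
    # URLs keep POSIX separators regardless of the OS.
    return posixpath.join(path, *paths)
```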
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5296/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5296/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5295
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5295/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5295/comments
https://api.github.com/repos/huggingface/datasets/issues/5295/events
https://github.com/huggingface/datasets/issues/5295
1,464,006,743
I_kwDODunzps5XQvhX
5,295
Extractions failed when .zip file located on read-only path (e.g., SageMaker FastFile mode)
{ "login": "verdimrc", "id": 2340781, "node_id": "MDQ6VXNlcjIzNDA3ODE=", "avatar_url": "https://avatars.githubusercontent.com/u/2340781?v=4", "gravatar_id": "", "url": "https://api.github.com/users/verdimrc", "html_url": "https://github.com/verdimrc", "followers_url": "https://api.github.com/users/verdimrc/followers", "following_url": "https://api.github.com/users/verdimrc/following{/other_user}", "gists_url": "https://api.github.com/users/verdimrc/gists{/gist_id}", "starred_url": "https://api.github.com/users/verdimrc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/verdimrc/subscriptions", "organizations_url": "https://api.github.com/users/verdimrc/orgs", "repos_url": "https://api.github.com/users/verdimrc/repos", "events_url": "https://api.github.com/users/verdimrc/events{/privacy}", "received_events_url": "https://api.github.com/users/verdimrc/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi ! Thanks for reporting. Indeed the lock file should be placed in a directory with write permission (e.g. in the directory where the archive is extracted).", "I opened https://github.com/huggingface/datasets/pull/5320 to fix this - it places the lock file in the cache directory instead of trying to put in next to the ZIP where it's read-only" ]
2022-11-25T03:59:43
2023-07-21T14:39:09
2023-07-21T14:39:09
NONE
null
null
null
### Describe the bug

Hi, `load_dataset()` does not work with .zip files located in a read-only directory. It looks like this is because `datasets` creates a lock file in the [same directory](https://github.com/huggingface/datasets/blob/df4bdd365f2abb695f113cbf8856a925bc70901b/src/datasets/utils/extract.py) as the .zip file. I encountered this when attempting `load_dataset()` on a data dir mounted with SageMaker FastFile mode.

### Steps to reproduce the bug

```python
# Showing relevant lines only.
hyperparameters = {
    "dataset_name": "ydshieh/coco_dataset_script",
    "dataset_config_name": 2017,
    "data_dir": "/opt/ml/input/data/coco",
    "cache_dir": "/tmp/huggingface-cache",  # Fix: dataset complains about running out of space.
    ...
}

estimator = PyTorch(
    base_job_name="clip",
    source_dir="../src/sm-entrypoint",
    entry_point="run_clip.py",  # Transformers/src/examples/pytorch/contrastive-image-text/run_clip.py
    framework_version="1.12",
    py_version="py38",
    hyperparameters=hyperparameters,
    instance_count=1,
    instance_type="ml.p3.16xlarge",
    volume_size=100,
    distribution={"smdistributed": {"dataparallel": {"enabled": True}}},
)

fast_file = lambda x: TrainingInput(x, input_mode='FastFile')
estimator.fit(
    {
        "pre-trained": fast_file("s3://vm-sagemakerr-us-east-1/clip/pre-trained-checkpoint/"),
        "coco": fast_file("s3://vm-sagemakerr-us-east-1/clip/coco-zip-files/"),
    }
)
```

Error message:

```text
ErrorMessage "OSError: [Errno 30] Read-only file system: '/opt/ml/input/data/coco/image_info_test2017.zip.lock'
"""
The above exception was the direct cause of the following exception

Traceback (most recent call last)
  File "/opt/conda/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/opt/conda/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/opt/conda/lib/python3.8/site-packages/mpi4py/__main__.py", line 7, in <module>
    main()
  File "/opt/conda/lib/python3.8/site-packages/mpi4py/run.py", line 198, in main
    run_command_line(args)
  File "/opt/conda/lib/python3.8/site-packages/mpi4py/run.py", line 47, in run_command_line
    run_path(sys.argv[0], run_name='__main__')
  File "/opt/conda/lib/python3.8/runpy.py", line 265, in run_path
    return _run_module_code(code, init_globals, run_name,
  File "/opt/conda/lib/python3.8/runpy.py", line 97, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "run_clip_smddp.py", line 594, in <module>
  File "run_clip_smddp.py", line 327, in main
    dataset = load_dataset(
  File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1741, in load_dataset
    builder_instance.download_and_prepare(
  File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 822, in download_and_prepare
    self._download_and_prepare(
  File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 1555, in _download_and_prepare
    super()._download_and_prepare(
  File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 891, in _download_and_prepare
    split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
  File "/root/.cache/huggingface/modules/datasets_modules/datasets/ydshieh--coco_dataset_script/e033205c0266a54c10be132f9264f2a39dcf893e798f6756d224b1ff5078998f/coco_dataset_script.py", line 123, in _split_generators
    archive_path = dl_manager.download_and_extract(_DL_URLS)
  File "/opt/conda/lib/python3.8/site-packages/datasets/download/download_manager.py", line 447, in download_and_extract
    return self.extract(self.download(url_or_urls))
  File "/opt/conda/lib/python3.8/site-packages/datasets/download/download_manager.py", line 419, in extract
    extracted_paths = map_nested(
  File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 472, in map_nested
    mapped = pool.map(_single_map_nested, split_kwds)
  File "/opt/conda/lib/python3.8/multiprocessing/pool.py", line 364, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()
  File "/opt/conda/lib/python3.8/multiprocessing/pool.py", line 771, in get
    raise self._value
OSError: [Errno 30] Read-only file system: '/opt/ml/input/data/coco/image_info_test2017.zip.lock'"
```

### Expected behavior

`load_dataset()` should succeed, just like when the .zip file is passed in SageMaker File mode.

### Environment info

* datasets-2.7.1
* transformers-4.24.0
* python-3.8
* torch-1.12
* SageMaker PyTorch DLC
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5295/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5295/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5294
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5294/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5294/comments
https://api.github.com/repos/huggingface/datasets/issues/5294/events
https://github.com/huggingface/datasets/pull/5294
1,463,679,582
PR_kwDODunzps5DqgLW
5,294
Support streaming datasets with pathlib.Path.with_suffix
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-11-24T18:04:38
2022-11-29T07:09:08
2022-11-29T07:06:32
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5294", "html_url": "https://github.com/huggingface/datasets/pull/5294", "diff_url": "https://github.com/huggingface/datasets/pull/5294.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5294.patch", "merged_at": "2022-11-29T07:06:32" }
This PR extends support in streaming mode for datasets that use `pathlib.Path.with_suffix`. Fix #5293.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5294/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5294/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5293
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5293/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5293/comments
https://api.github.com/repos/huggingface/datasets/issues/5293/events
https://github.com/huggingface/datasets/issues/5293
1,463,669,201
I_kwDODunzps5XPdHR
5,293
Support streaming datasets with pathlib.Path.with_suffix
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
2022-11-24T17:52:08
2022-11-29T07:06:33
2022-11-29T07:06:33
MEMBER
null
null
null
Extend support for streaming datasets that use `pathlib.Path.with_suffix`. This feature will be useful, e.g., for datasets containing text files and annotation files that share the same name but have different extensions.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5293/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5293/timeline
null
completed
false
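Editor's note: a minimal sketch of the file-pairing pattern this feature enables in streaming loading scripts. The `_generate_examples` helper, field names, and file extensions below are illustrative assumptions, not the library's actual patching code; in streaming mode `datasets` transparently extends `pathlib` operations such as `with_suffix` so the same logic can work on remote files.

```python
import pathlib

def _generate_examples(annotation_files):
    # Pair each annotation file with the text file sharing its stem,
    # e.g. "sample01.ann" -> "sample01.txt" (hypothetical extensions).
    for idx, ann_file in enumerate(annotation_files):
        ann_path = pathlib.Path(ann_file)
        txt_path = ann_path.with_suffix(".txt")  # the call this issue adds streaming support for
        yield idx, {
            "text": txt_path.read_text(encoding="utf-8"),
            "annotation": ann_path.read_text(encoding="utf-8"),
        }
```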
https://api.github.com/repos/huggingface/datasets/issues/5292
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5292/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5292/comments
https://api.github.com/repos/huggingface/datasets/issues/5292/events
https://github.com/huggingface/datasets/issues/5292
1,463,053,832
I_kwDODunzps5XNG4I
5,292
Missing documentation build for versions 2.7.1 and 2.6.2
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 4296013012, "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance", "name": "maintenance", "color": "d4c5f9", "default": false, "description": "Maintenance tasks" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "- Build docs for 2.6.2:\r\n - Commit: a6a5a1cf4cdf1e0be65168aed5a327f543001fe8\r\n - Build docs GH Action: https://github.com/huggingface/datasets/actions/runs/3539470622/jobs/5941404044\r\n- Build docs for 2.7.1:\r\n - Commit: 5ef1ab1cc06c2b7a574bf2df454cd9fcb071ccb2\r\n - Build docs GH Action: https://github.com/huggingface/datasets/actions/runs/3539574442/jobs/5941636792" ]
2022-11-24T09:42:10
2022-11-24T10:10:02
2022-11-24T10:10:02
MEMBER
null
null
null
After the patch releases [2.7.1](https://github.com/huggingface/datasets/releases/tag/2.7.1) and [2.6.2](https://github.com/huggingface/datasets/releases/tag/2.6.2), the online docs were not properly built (the build_documentation workflow was not triggered). There was a fix in: - #5291 However, both documentation builds used the main branch instead of their corresponding version branches. We are rebuilding them.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5292/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5292/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5291
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5291/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5291/comments
https://api.github.com/repos/huggingface/datasets/issues/5291/events
https://github.com/huggingface/datasets/pull/5291
1,462,983,472
PR_kwDODunzps5DoKNC
5,291
[build doc] for v2.7.1 & v2.6.2
{ "login": "mishig25", "id": 11827707, "node_id": "MDQ6VXNlcjExODI3NzA3", "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mishig25", "html_url": "https://github.com/mishig25", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "organizations_url": "https://api.github.com/users/mishig25/orgs", "repos_url": "https://api.github.com/users/mishig25/repos", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "received_events_url": "https://api.github.com/users/mishig25/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "doc versions are built https://huggingface.co/docs/datasets/index" ]
2022-11-24T08:54:47
2022-11-24T09:14:10
2022-11-24T09:11:15
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5291", "html_url": "https://github.com/huggingface/datasets/pull/5291", "diff_url": "https://github.com/huggingface/datasets/pull/5291.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5291.patch", "merged_at": null }
Do NOT merge. Using this PR to build docs for [v2.7.1](https://github.com/huggingface/datasets/pull/5291/commits/f4914af20700f611b9331a9e3ba34743bbeff934) & [v2.6.2](https://github.com/huggingface/datasets/pull/5291/commits/025f85300a0874eeb90a20393c62f25ac0accaa0)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5291/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5291/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5290
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5290/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5290/comments
https://api.github.com/repos/huggingface/datasets/issues/5290/events
https://github.com/huggingface/datasets/pull/5290
1,462,716,766
PR_kwDODunzps5DnQsS
5,290
Fix error where reading breaks when a batch is missing an assigned column feature
{ "login": "eunseojo", "id": 12104720, "node_id": "MDQ6VXNlcjEyMTA0NzIw", "avatar_url": "https://avatars.githubusercontent.com/u/12104720?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eunseojo", "html_url": "https://github.com/eunseojo", "followers_url": "https://api.github.com/users/eunseojo/followers", "following_url": "https://api.github.com/users/eunseojo/following{/other_user}", "gists_url": "https://api.github.com/users/eunseojo/gists{/gist_id}", "starred_url": "https://api.github.com/users/eunseojo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eunseojo/subscriptions", "organizations_url": "https://api.github.com/users/eunseojo/orgs", "repos_url": "https://api.github.com/users/eunseojo/repos", "events_url": "https://api.github.com/users/eunseojo/events{/privacy}", "received_events_url": "https://api.github.com/users/eunseojo/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5290). All of your documentation changes will be reflected on that endpoint." ]
2022-11-24T03:53:46
2022-11-25T03:21:54
null
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5290", "html_url": "https://github.com/huggingface/datasets/pull/5290", "diff_url": "https://github.com/huggingface/datasets/pull/5290.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5290.patch", "merged_at": null }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5290/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5290/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5289
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5289/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5289/comments
https://api.github.com/repos/huggingface/datasets/issues/5289/events
https://github.com/huggingface/datasets/pull/5289
1,462,543,139
PR_kwDODunzps5Dmrk9
5,289
Added support for JXL images.
{ "login": "alexjc", "id": 445208, "node_id": "MDQ6VXNlcjQ0NTIwOA==", "avatar_url": "https://avatars.githubusercontent.com/u/445208?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alexjc", "html_url": "https://github.com/alexjc", "followers_url": "https://api.github.com/users/alexjc/followers", "following_url": "https://api.github.com/users/alexjc/following{/other_user}", "gists_url": "https://api.github.com/users/alexjc/gists{/gist_id}", "starred_url": "https://api.github.com/users/alexjc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexjc/subscriptions", "organizations_url": "https://api.github.com/users/alexjc/orgs", "repos_url": "https://api.github.com/users/alexjc/repos", "events_url": "https://api.github.com/users/alexjc/events{/privacy}", "received_events_url": "https://api.github.com/users/alexjc/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "I'm fine with the addition of jxl in the list of known image extensions, this way users that have the plugin can work with their JXL datasets. WDYT @mariosasko ?", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5289). All of your documentation changes will be reflected on that endpoint.", "I think we should wait for official support from Pillow. Plus, the linked plugin doesn't support `Image.save`, which is one of the requirements for a format to be included in `IMAGE_EXTENSIONS`.\r\n\r\n@alexjc In the meantime, one option is to add these lines to the card:\r\n```python\r\nimport importlib\r\nimport datasets\r\n\r\nif \".jxl\" not in datasets.packaged_modules.imagefolder.IMAGE_EXTENSIONS:\r\n datasets.packaged_modules.imagefolder.IMAGE_EXTENSIONS.append(\".jxl\")\r\n\r\nif \"jxl\" not in datasets.packaged_modules._EXTENSION_TO_MODULE:\r\n datasets.packaged_modules._EXTENSION_TO_MODULE[\"jxl\"] = (\"imagefolder\", {})\r\n\r\nimportlib.reload(datasets.load)\r\nds = datasets.load_dataset(\"texturedesign/td01_natural-ground-textures\")\r\n```\r\nAnd you can add a note to the card that this dataset requires the \"jxlpy\" package to work. \r\n\r\nIn this case, you can also disable the viewer to avoid the discrepancy between the data displayed in the preview and the loaded data.\r\n\r\nAnother option is to define the loading script and add `jxlpy` to the list of dependencies [here](https://github.com/huggingface/datasets-server/blob/3012da62054a025467616abc14b0b46e1f11ea13/workers/first_rows/pyproject.toml#L8) to enable the viewer. This option requires more work, so let us know if you need help.", "Thank you both for your thoughtful replies!\r\n\r\nOne questions and and update:\r\n* The jxlpy plugin does support saving, in the `_save` function of the JXLImagePlugin file. Did it not work? I'm working on the upgrade to the latest JXL, so it'd be good to know if it failed so I can fix it.\r\n* I wrote to the Pillow maintainer and the preferred solution would be to keep JXL as a separate plugin because they're a small team don't have the resources to maintain more code.\r\n\r\nWith that in mind, let me share the minimal set of features I'd need for this to work within the `datasets` library:\r\n1. Using `load_dataset()` with the HuggingFace dataset name correctly downloads the JXL files so they are available locally. Even if the `file_name` field is left intact and not loaded as a PIL image, this is the first step.\r\n2. With minimal monkey-patching, having the `load_dataset` correctly expand `file_name` into PIL `image` fields if JXL support is available.\r\n\r\nIf both of these work, then I can use HuggingFace's hub and the `datasets` library for an MVP even if not all features are there. I don't need automatic thumbnails or previews of the dataset on the server.\r\n\r\n\r\nGiven the reply from the Pillow maintainer, what solution can we come up with that works in a more permanent way than waiting for Pillow integration (which may not happen) β€” assuming users install the `jxlpy` plugin separately?", "Link to my upgrade for the latest `libjxl`, pending review and merge. I tested load/save via Pillow extensively for this: https://github.com/olokelo/jxlpy/pull/13", "After more research, here's my latest suggestion:\r\n* Depending on the build of pillow, the source (pip or conda), the platform even, certain formats may or may not be available β€” despite them being in the list. 
For example, webp support is not consistently available.\r\n* I'd suggest adding JXL to the list and simply catching the `PIL.UnidentifiedImageError` β€” printing a useful error message that sends them to a Wiki page to find out what to do.\r\n* On that page would be included instructions how to install support for the format and what to do for the dataset to load correctly on any platform, both with or without conda, etc.\r\n\r\nWhat do you think?", "> The jxlpy plugin does support saving, in the _save function of the JXLImagePlugin file. Did it not work? I'm working on the upgrade to the latest JXL, so it'd be good to know if it failed so I can fix it.\r\n\r\nMy bad, I was referring to [this](https://github.com/google/brunsli/blob/2dd949e53ed05796eb44a31cc759fbf9e6c53e2f/contrib/py/jxl_library_patches/jxl_pillow.py) version of the plugin.\r\n\r\nI still think this involves too much work:\r\n* would require a new doc page\r\n* unofficial plugins have to be imported explicitly, leading to messier code on our side\r\n* etc.\r\n\r\nFor now, it seems more reasonable to create a loading script (faster than ImageFolder, as ImageFolder has to resolve the image files first) for this particular case and add `jxlpy` to the list of the `datasets-server`'s dependencies. Also, one additional advantage of this approach is that it reports if any of the modules imported in a script is missing, which is handy in your case for the plugin lib. WDYT?", "OK, let me try it it and I'll report back.\r\n\r\nWill the JXL files (even if unknown format) be automatically downloaded if they are linked from the `.jsonl` file?\r\n\r\n(I had trouble getting that working before this patch.)", "> Will the JXL files (even if unknown format) be automatically downloaded if they are linked from the .jsonl file?\r\n\r\nNo, they need to be downloaded explicitly.\r\n\r\nFeel free to use πŸ€— Hub discussions in your dataset repo to ping us for help (our usernames are the same there)", "Is it possible to add support for JXL files being downloaded without needing to add server-side rendering support?", "In the loading script, data files are downloaded with `DownloadManager` (`dl_manager` in `_split_generators`), which doesn't have any requirements regarding the actual type of the downloaded files.\r\n\r\nPS: Let's use the forum or Hub discussions for further questions to avoid pinging other participants" ]
2022-11-23T23:16:33
2022-11-29T18:49:46
null
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5289", "html_url": "https://github.com/huggingface/datasets/pull/5289", "diff_url": "https://github.com/huggingface/datasets/pull/5289.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5289.patch", "merged_at": null }
JPEG-XL is the most advanced of the next generation of image codecs, supporting both lossless and lossy files, with better compression and quality than PNG and JPG, respectively. It has reduced the disk sizes and bandwidth required for many of the datasets I use. Pillow does not yet support JXL, but there's a plugin as a separate Python library that does (`pip install jxlpy`), and I've tested that this change works as expected when the plugin is imported. Dataset used for testing (you must `git pull`, as loading it from Python won't work until `datasets-server` is also changed to support JXL files): https://huggingface.co/datasets/texturedesign/td01_natural-ground-textures The case where the plugin is not imported first raises an error: ``` PIL.UnidentifiedImageError: cannot identify image file 'td01/train/set01/01_145523.jxl' ``` In order to enable support for JXL even before pillow supports this, should this exception be handled with a better error message? I'd expect/hope JXL support to follow in one of the pillow quarterly releases in the next 6-9 months.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5289/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5289/timeline
null
null
true
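Editor's note: the maintainer-suggested workaround quoted in the thread above, reproduced here as a runnable snippet. It assumes the third-party `jxlpy` Pillow plugin is installed so `.jxl` files can actually be decoded; `_EXTENSION_TO_MODULE` is a private attribute and may change between releases.

```python
import importlib

import datasets

# Register ".jxl" with the imagefolder loader (workaround from the thread).
if ".jxl" not in datasets.packaged_modules.imagefolder.IMAGE_EXTENSIONS:
    datasets.packaged_modules.imagefolder.IMAGE_EXTENSIONS.append(".jxl")

if "jxl" not in datasets.packaged_modules._EXTENSION_TO_MODULE:
    datasets.packaged_modules._EXTENSION_TO_MODULE["jxl"] = ("imagefolder", {})

# Reload so load_dataset picks up the patched extension tables.
importlib.reload(datasets.load)

ds = datasets.load_dataset("texturedesign/td01_natural-ground-textures")
```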
https://api.github.com/repos/huggingface/datasets/issues/5288
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5288/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5288/comments
https://api.github.com/repos/huggingface/datasets/issues/5288/events
https://github.com/huggingface/datasets/issues/5288
1,462,134,067
I_kwDODunzps5XJmUz
5,288
Lossy JSON serialization/deserialization of dataset info
{ "login": "anuragprat1k", "id": 57542204, "node_id": "MDQ6VXNlcjU3NTQyMjA0", "avatar_url": "https://avatars.githubusercontent.com/u/57542204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anuragprat1k", "html_url": "https://github.com/anuragprat1k", "followers_url": "https://api.github.com/users/anuragprat1k/followers", "following_url": "https://api.github.com/users/anuragprat1k/following{/other_user}", "gists_url": "https://api.github.com/users/anuragprat1k/gists{/gist_id}", "starred_url": "https://api.github.com/users/anuragprat1k/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anuragprat1k/subscriptions", "organizations_url": "https://api.github.com/users/anuragprat1k/orgs", "repos_url": "https://api.github.com/users/anuragprat1k/repos", "events_url": "https://api.github.com/users/anuragprat1k/events{/privacy}", "received_events_url": "https://api.github.com/users/anuragprat1k/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi ! JSON is a lossy format indeed. If you want to keep the feature types or other metadata I'd encourage you to store them as well. For example you can use `dataset.info.write_to_directory` and `DatasetInfo.from_directory` to store the feature types, split info, description, license etc." ]
2022-11-23T17:20:15
2022-11-25T12:53:51
null
NONE
null
null
null
### Describe the bug Saving a dataset to disk as json (using `to_json`) and then loading it again (using `load_dataset`) results in features whose labels are not type-cast correctly. In the code snippet below, `features.label` should have a label of type `ClassLabel` but has type `Value` instead. ### Steps to reproduce the bug ``` from datasets import load_dataset def test_serdes_from_json(d): dataset = load_dataset(d, split="train") dataset.to_json('_test') dataset_loaded = load_dataset("json", data_files='_test', split='train') try: assert dataset_loaded.info.features == dataset.info.features, "features unequal!" except Exception as ex: print(f'{ex}') print(f'expected {dataset.info.features}, \nactual { dataset_loaded.info.features }') test_serdes_from_json('rotten_tomatoes') ``` Output ``` features unequal! expected {'text': Value(dtype='string', id=None), 'label': ClassLabel(names=['neg', 'pos'], id=None)}, actual {'text': Value(dtype='string', id=None), 'label': Value(dtype='int64', id=None)} ``` ### Expected behavior The deserialized `features.label` should have type `ClassLabel`. ### Environment info - `datasets` version: 2.6.1 - Platform: Linux-5.10.144-127.601.amzn2.x86_64-x86_64-with-glibc2.17 - Python version: 3.7.13 - PyArrow version: 7.0.0 - Pandas version: 1.2.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5288/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5288/timeline
null
null
false
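Editor's note: a hedged sketch of the workaround the maintainer suggests in the comment above, persisting the `DatasetInfo` next to the JSON export so `ClassLabel` survives the round trip. The `_test_info` directory name is an arbitrary assumption.

```python
from datasets import DatasetInfo, load_dataset

ds = load_dataset("rotten_tomatoes", split="train")
ds.to_json("_test.json")
ds.info.write_to_directory("_test_info")  # persist features (incl. ClassLabel) to disk

# Reload the JSON with the stored feature types applied.
info = DatasetInfo.from_directory("_test_info")
reloaded = load_dataset("json", data_files="_test.json", split="train", features=info.features)
assert reloaded.features == ds.features
```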
https://api.github.com/repos/huggingface/datasets/issues/5287
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5287/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5287/comments
https://api.github.com/repos/huggingface/datasets/issues/5287/events
https://github.com/huggingface/datasets/pull/5287
1,461,971,889
PR_kwDODunzps5Dkttf
5,287
Fix methods using `IterableDataset.map` that lead to `features=None`
{ "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "_The documentation is not available anymore as the PR was closed or merged._", "Maybe other options are:\r\n* Keep the `info.features` to `None` if those were initially `None`\r\n* Infer the features with pre-fetching just if the `info.features` is `None`\r\n* If the `info.features` are there, make sure that after `map` features is not `None`", "Hi @lhoestq something that's still not clear to me is: should we infer the features always when applying a `map` if those are initially `None`, or just assume that if the features are initially `None` those should be left that way unless the user specifically sets those (or during iter)?\r\n\r\nIn this PR I'm using `from datasets.iterable_dataset import _infer_features_from_batch` to infer the features when those are `None` using pre-fetch of `self._head()`, but I'm not sure if that's the expected behavior.\r\n\r\nThanks in advance for your help!", "Also, the PR still has some more work to do, but probably the most relevant thing to fix right now is that the `features` are being set to `None` in the functions `IterableDataset.rename_column`, `IterableDataset.rename_columns`, and `IterableDataset.remove_columns` when the `features` originally had a value. So once that's fixed maybe we can focus on improving the current `map`'s behavior, so as to avoid this from happening also when the user uses `map` directly and not through the functions mentioned above.", "> Cool thank you ! Resolving the features can be expensive sometimes, so maybe we don't resolve the features and we can just rename/remove columns if the features are known (i.e. if they're not None). What do you think ?\r\n\r\nThanks for the feedback! Makes sense to me πŸ‘πŸ» I'll commit the comments now!", "Already done @lhoestq, feel free to merge whenever you want! Also before merging, can you please link the following issues https://github.com/huggingface/datasets/issues/3888, https://github.com/huggingface/datasets/issues/5245, and https://github.com/huggingface/datasets/issues/5284, so that those are closed upon merge? Thanks!" ]
2022-11-23T15:33:25
2022-11-28T15:43:14
2022-11-28T12:53:22
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5287", "html_url": "https://github.com/huggingface/datasets/pull/5287", "diff_url": "https://github.com/huggingface/datasets/pull/5287.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5287.patch", "merged_at": "2022-11-28T12:53:22" }
Since `IterableDataset.map` currently sets `info.features` to `None` every time (we don't know the output of the dataset in advance), `IterableDataset` methods such as `rename_column`, `rename_columns`, and `remove_columns` that internally use `map` lead to the features being `None`. This PR is related to #3888, #5245, and #5284. ## ✅ Current solution The code in this PR is basically making sure that if the features were there since the beginning and a `rename_column`/`rename_columns` happens, those are kept and the rename is applied to the `Features` too. Also, if the features were not there before applying `rename_column`, `rename_columns` or `remove_columns`, a batch is prefetched and the features are being inferred (that could potentially be part of `IterableDataset.__init__` in case the `info.features` value is `None`). ## 💡 Ideas Some ideas were proposed in https://github.com/huggingface/datasets/issues/3888, but probably the most consistent solution, even though it may take some time, is to actually do the type inference during `IterableDataset.__init__` in case the provided `info.features` is `None`; otherwise, we can just use the provided features. Additionally, as mentioned at https://github.com/huggingface/datasets/issues/3888, we could also include a `features` parameter to the `map` function, but that's probably more tedious. Also thanks to @lhoestq for sharing some ideas in both https://github.com/huggingface/datasets/issues/3888 and https://github.com/huggingface/datasets/issues/5245 :hugs:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5287/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5287/timeline
null
null
true
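Editor's note: a minimal sketch of the `features=` escape hatch discussed in this PR's thread, assuming a `datasets` release that includes the change. The added `length` column and its `int64` type are illustrative assumptions.

```python
from datasets import Features, Value, load_dataset

ds = load_dataset("rotten_tomatoes", split="train", streaming=True)

def add_length(example):
    example["length"] = len(example["text"])
    return example

# Without `features`, map would reset ds.features to None; passing them keeps the schema.
ds = ds.map(add_length, features=Features({**ds.features, "length": Value("int64")}))
print(ds.features)
```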
https://api.github.com/repos/huggingface/datasets/issues/5286
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5286/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5286/comments
https://api.github.com/repos/huggingface/datasets/issues/5286/events
https://github.com/huggingface/datasets/issues/5286
1,461,908,087
I_kwDODunzps5XIvJ3
5,286
FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/enwiki/20220301/dumpstatus.json
{ "login": "roritol", "id": 32490135, "node_id": "MDQ6VXNlcjMyNDkwMTM1", "avatar_url": "https://avatars.githubusercontent.com/u/32490135?v=4", "gravatar_id": "", "url": "https://api.github.com/users/roritol", "html_url": "https://github.com/roritol", "followers_url": "https://api.github.com/users/roritol/followers", "following_url": "https://api.github.com/users/roritol/following{/other_user}", "gists_url": "https://api.github.com/users/roritol/gists{/gist_id}", "starred_url": "https://api.github.com/users/roritol/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/roritol/subscriptions", "organizations_url": "https://api.github.com/users/roritol/orgs", "repos_url": "https://api.github.com/users/roritol/repos", "events_url": "https://api.github.com/users/roritol/events{/privacy}", "received_events_url": "https://api.github.com/users/roritol/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I found a solution \r\n\r\nIf you specifically install datasets==1.18 and then run\r\n\r\nimport datasets\r\nwiki = datasets.load_dataset('wikipedia', '20200501.en')\r\nthen this should work (it worked for me.)" ]
2022-11-23T14:54:15
2022-11-25T11:33:14
2022-11-25T11:33:14
NONE
null
null
null
### Describe the bug I follow the steps provided on the website [https://huggingface.co/datasets/wikipedia](https://huggingface.co/datasets/wikipedia) $ pip install apache_beam mwparserfromhell >>> from datasets import load_dataset >>> load_dataset("wikipedia", "20220301.en") however this results in the following error: raise MissingBeamOptions( datasets.builder.MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/ If you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory). Example of usage: `load_dataset('wikipedia', '20220301.en', beam_runner='DirectRunner')` If I then prompt the system with: >>> load_dataset('wikipedia', '20220301.en', beam_runner='DirectRunner') the following error occurs: raise FileNotFoundError(f"Couldn't find file at {url}") FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/enwiki/20220301/dumpstatus.json Here is the exact code: Python 3.10.6 (main, Nov 2 2022, 18:53:38) [GCC 11.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> from datasets import load_dataset >>> load_dataset('wikipedia', '20220301.en') Downloading and preparing dataset wikipedia/20220301.en to /home/[EDITED]/.cache/huggingface/datasets/wikipedia/20220301.en/2.0.0/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559... Downloading: 100%|██████████████████████████████████████████████████████████████████████████| 15.3k/15.3k [00:00<00:00, 22.2MB/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 1741, in load_dataset builder_instance.download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 822, in download_and_prepare self._download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1879, in _download_and_prepare raise MissingBeamOptions( datasets.builder.MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/ If you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory). Example of usage: `load_dataset('wikipedia', '20220301.en', beam_runner='DirectRunner')` >>> load_dataset('wikipedia', '20220301.en', beam_runner='DirectRunner') Downloading and preparing dataset wikipedia/20220301.en to /home/[EDITED]/.cache/huggingface/datasets/wikipedia/20220301.en/2.0.0/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559... Downloading: 100%|██████████████████████████████████████████████████████████████████████████| 15.3k/15.3k [00:00<00:00, 18.8MB/s] Downloading data files: 0%| | 0/1 [00:00<?, ?it/s]Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 1741, in load_dataset builder_instance.download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 822, in download_and_prepare self._download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1909, in _download_and_prepare super()._download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 891, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/rorytol/.cache/huggingface/modules/datasets_modules/datasets/wikipedia/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559/wikipedia.py", line 945, in _split_generators downloaded_files = dl_manager.download_and_extract({"info": info_url}) File "/usr/local/lib/python3.10/dist-packages/datasets/download/download_manager.py", line 447, in download_and_extract return self.extract(self.download(url_or_urls)) File "/usr/local/lib/python3.10/dist-packages/datasets/download/download_manager.py", line 311, in download downloaded_path_or_paths = map_nested( File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 444, in map_nested mapped = [ File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 445, in <listcomp> _single_map_nested((function, obj, types, None, True, None)) File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 346, in _single_map_nested return function(data_struct) File "/usr/local/lib/python3.10/dist-packages/datasets/download/download_manager.py", line 338, in _download return cached_path(url_or_filename, download_config=download_config) File "/usr/local/lib/python3.10/dist-packages/datasets/utils/file_utils.py", line 183, in cached_path output_path = get_from_cache( File "/usr/local/lib/python3.10/dist-packages/datasets/utils/file_utils.py", line 530, in get_from_cache raise FileNotFoundError(f"Couldn't find file at {url}") FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/enwiki/20220301/dumpstatus.json ### Steps to reproduce the bug $ pip install apache_beam mwparserfromhell >>> from datasets import load_dataset >>> load_dataset("wikipedia", "20220301.en") >>> load_dataset('wikipedia', '20220301.en', beam_runner='DirectRunner') ### Expected behavior Download the dataset ### Environment info Running linux on a remote workstation operated through a macbook terminal Python 3.10.6
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5286/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5286/timeline
null
completed
false
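Editor's note: the version-pinning workaround reported in the comment above, as a runnable snippet. It assumes `pip install "datasets==1.18.0" apache_beam mwparserfromhell` and that the pre-processed 2020-05-01 dump referenced by that release is still hosted.

```python
import datasets

# Pinning datasets==1.18 lets load_dataset fetch the pre-processed 20200501 dump
# instead of trying to rebuild the dataset with Apache Beam from a removed dump.
wiki = datasets.load_dataset("wikipedia", "20200501.en")
print(wiki)
```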
https://api.github.com/repos/huggingface/datasets/issues/5285
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5285/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5285/comments
https://api.github.com/repos/huggingface/datasets/issues/5285/events
https://github.com/huggingface/datasets/pull/5285
1,461,521,215
PR_kwDODunzps5DjLgG
5,285
Save file name in embed_storage
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "I updated the tests, met le know if it sounds good to you now :)" ]
2022-11-23T10:55:54
2022-11-24T14:11:41
2022-11-24T14:08:37
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5285", "html_url": "https://github.com/huggingface/datasets/pull/5285", "diff_url": "https://github.com/huggingface/datasets/pull/5285.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5285.patch", "merged_at": "2022-11-24T14:08:37" }
Having the file name is useful in case we need to check the extension of the file (e.g. mp3), or in general in case it includes some metadata information (track id, image id, etc.). Related to https://github.com/huggingface/datasets/issues/5276
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5285/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5285/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5284
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5284/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5284/comments
https://api.github.com/repos/huggingface/datasets/issues/5284/events
https://github.com/huggingface/datasets/issues/5284
1,461,519,733
I_kwDODunzps5XHQV1
5,284
Features of IterableDataset set to None by remove column
{ "login": "sanchit-gandhi", "id": 93869735, "node_id": "U_kgDOBZhWpw", "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanchit-gandhi", "html_url": "https://github.com/sanchit-gandhi", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 3287858981, "node_id": "MDU6TGFiZWwzMjg3ODU4OTgx", "url": "https://api.github.com/repos/huggingface/datasets/labels/streaming", "name": "streaming", "color": "fef2c0", "default": false, "description": "" } ]
closed
false
{ "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false }
[ { "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false } ]
null
[ "Related to https://github.com/huggingface/datasets/issues/5245", "#self-assign", "Thanks @lhoestq and @alvarobartt!\r\n\r\nThis would be extremely helpful to have working for the Whisper fine-tuning event - we're **only** training using streaming mode, so it'll be quite important to have this feature working to make training as easy as possible!\r\n\r\n_c.f._ https://twitter.com/sanchitgandhi99/status/1592188332171493377", "> Thanks @lhoestq and @alvarobartt!\n> \n> \n> \n> This would be extremely helpful to have working for the Whisper fine-tuning event - we're **only** training using streaming mode, so it'll be quite important to have this feature working to make training as easy as possible!\n> \n> \n> \n> _c.f._ https://twitter.com/sanchitgandhi99/status/1592188332171493377\n\nI'm almost done with at least a temporary fix to `rename_column`, `rename_columns`, and `remove_columns`, just trying to figure out how to extend it to the `map` function itself!\n\nI'll probably open the PR for review either tomorrow or Sunday hopefully! Glad I can help you and HuggingFace πŸ€— ", "Awesome - thank you so much for this PR @alvarobartt! Is much appreciated!", "@sanchit-gandhi PR is ready and open for review at #5287, but there's still one issue I may need @lhoestq's input :hugs:", "Let us know @sanchit-gandhi if you need a new release of `datasets` soon with this fix included :)", "Thanks for the fix guys! We can direct people to install `datasets` from main if that's easier!", "Hey guys, any update around this? I'm facing the same issue with a streamable dataset. ", "Hi @asennoussi so this was already fixed and released as part of https://github.com/huggingface/datasets/releases/tag/2.8.0, so you should be able to install it as `pip install datasets==2.8.0` or just to use `pip install datasets --upgrade` to get the latest version, as of now, the https://github.com/huggingface/datasets/releases/tag/2.9.0 released last week! 
πŸ€—", "Still facing the same issue though: \r\n```\r\nfrom datasets import IterableDatasetDict, load_dataset\r\n\r\nraw_datasets = vectorized_datasets = IterableDatasetDict()\r\n\r\n\r\nraw_datasets[\"train\"] = load_dataset(\"asennoussi/private\", split=\"train\", use_auth_token=True, streaming=True)\r\nraw_datasets[\"test\"] = load_dataset(\"asennoussi/private\", split=\"test\", use_auth_token=True, streaming=True)\r\n\r\nprint(\"Original features: \", raw_datasets['train'].features.keys())\r\n\r\n...\r\n\r\ndef prepare_dataset(batch):\r\n\r\n # load and (possibly) resample audio datato 16kHz\r\n audio = batch[\"audio\"]\r\n\r\n # compute log-Mel input features from input audio array \r\n batch[\"input_features\"] = processor.feature_extractor(audio[\"array\"], sampling_rate=audio[\"sampling_rate\"]).input_features[0]\r\n # compute input length of audio sample in seconds\r\n batch[\"input_length\"] = len(audio[\"array\"]) / audio[\"sampling_rate\"]\r\n \r\n # optional pre-processing steps\r\n transcription = batch[\"sentence\"]\r\n \r\n # encode target text to label ids\r\n batch[\"labels\"] = processor.tokenizer(transcription).input_ids\r\n batch[\"labels_length\"] = len(batch[\"labels\"])\r\n return batch\r\n...\r\nvectorized_datasets = vectorized_datasets.remove_columns(['input_length', 'labels_length']+list(next(iter(raw_datasets.values())).features))\r\nprint(\"Processed features: \", vectorized_datasets['train'].features)\r\nprint(\"First sample:\", next(iter(vectorized_datasets['train'])))\r\n\r\n```\r\n\r\nOutput: \r\n```\r\nOriginal features: dict_keys(['path', 'audio', 'sentence'])\r\nProcessed features: None\r\n```", "Hmm weird, could you try to print\r\n\r\n```python\r\nprint(\"Processed features: \", vectorized_datasets['train'].features)\r\n```\r\n\r\nagain after iterating over the `vectorized_datasets`? In the code above, should be last line :)", "Didn't seem to fix it: \r\n```\r\nOriginal features: dict_keys(['path', 'audio', 'sentence'])\r\nProcessed features: None\r\nProcessed features: None\r\n```", "Actually the culprit looks to be this one: \r\n`vectorized_datasets = raw_datasets.map(prepare_dataset).with_format(\"torch\")`\r\nWhen I remove this line: `vectorized_datasets = vectorized_datasets.remove_columns(['input_length', 'labels_length']+list(next(iter(raw_datasets.values())).features))`\r\n\r\nI still get \r\n```\r\nProcessed features: None\r\n```", "The culprit is definitely `.map` \r\nJust validated it. \r\nAny idea please? ", "> The culprit is definitely `.map` Just validated it. 
Any idea please?\r\n\r\nYes, indeed `.map` losses the features, because AFAIK pre-fetching the data to infer the features is expensive and not ideal, that's part of this issue https://github.com/huggingface/datasets/issues/3888\r\n\r\nAnyway, now you can pass the `features` as a param to `.map` as follows:\r\n\r\n```python\r\nfrom datasets import Features\r\nvectorized_datasets = raw_datasets.map(\r\n prepare_dataset,\r\n features=Features(\r\n {\"path\": raw_datasets[\"train\"].info.features[\"path\"], \"audio\": raw_datasets[\"train\"].info.features[\"audio\"], \"sentence\": raw_datasets[\"train\"].info.features[\"sentence\"]}\r\n ),\r\n).with_format(\"torch\")\r\n```\r\n\r\nAlso, to let you know, when calling `.remove_columns` over an `IterableDataset`, the `features` are not lost, as well as `.rename_column` and `rename_columns` :)\r\n\r\nMore information about the latter at https://github.com/huggingface/datasets/pull/5287", "@asennoussi alternatively you can just call `._resolve_features()` from your `IterableDataset` and it will pre-fetch the data to resolve the features, but note that feature-inference is not as accurate as if you manually specify which features and feature-types the `IterableDataset` has, as mentioned in the comment above, the alternative is to provide `features` param to `.map` :hugs:", "Got it thanks a lot! " ]
2022-11-23T10:54:59
2023-02-02T09:05:51
2022-11-28T12:53:24
CONTRIBUTOR
null
null
null
### Describe the bug The `remove_columns` method of the IterableDataset sets the dataset features to None. ### Steps to reproduce the bug ```python from datasets import Audio, load_dataset # load LS in streaming mode dataset = load_dataset("librispeech_asr", "clean", split="validation", streaming=True) # check original features print("Original features: ", dataset.features.keys()) # define features to remove: we KEEP audio and text COLUMNS_TO_REMOVE = ['chapter_id', 'speaker_id', 'file', 'id'] dataset = dataset.remove_columns(COLUMNS_TO_REMOVE) # check processed features, uh-oh! print("Processed features: ", dataset.features) # streaming the first audio sample still works print("First sample:", next(iter(dataset))) ``` **Print Output:** ``` Original features: dict_keys(['file', 'audio', 'text', 'speaker_id', 'chapter_id', 'id']) Processed features: None First sample: {'audio': {'path': '2277-149896-0000.flac', 'array': array([ 0.00186157, 0.0005188 , 0.00024414, ..., -0.00097656, -0.00109863, -0.00146484]), 'sampling_rate': 16000}, 'text': "HE WAS IN A FEVERED STATE OF MIND OWING TO THE BLIGHT HIS WIFE'S ACTION THREATENED TO CAST UPON HIS ENTIRE FUTURE"} ``` ### Expected behavior The features should be those **not** removed by the `remove_columns` method, i.e. audio and text. ### Environment info - `datasets` version: 2.7.1 - Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.15 - PyArrow version: 9.0.0 - Pandas version: 1.3.5 (Running on Google Colab for a blog post: https://colab.research.google.com/drive/1ySCQREPZEl4msLfxb79pYYOWjUZhkr9y#scrollTo=8pRDGiVmH2ml) cc @polinaeterna @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5284/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5284/timeline
null
completed
false
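Editor's note: a hedged sketch of the behavior after the fix (datasets >= 2.8.0), based on the snippets in the thread above. `_resolve_features()` is an internal, underscore-prefixed method mentioned by the maintainers, so its availability in future releases is an assumption.

```python
from datasets import load_dataset

ds = load_dataset("rotten_tomatoes", split="train", streaming=True)

ds = ds.remove_columns(["label"])
print(ds.features)  # kept after remove_columns since the fix

ds = ds.map(lambda ex: {"n_chars": len(ex["text"])})
print(ds.features)  # None: map cannot know the output schema without pre-fetching

ds = ds._resolve_features()  # pre-fetch a sample to re-infer the schema
print(ds.features)
```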
https://api.github.com/repos/huggingface/datasets/issues/5283
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5283/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5283/comments
https://api.github.com/repos/huggingface/datasets/issues/5283/events
https://github.com/huggingface/datasets/pull/5283
1,460,291,003
PR_kwDODunzps5De5M1
5,283
Release: 2.6.2
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-11-22T17:36:24
2022-11-22T17:50:12
2022-11-22T17:47:02
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5283", "html_url": "https://github.com/huggingface/datasets/pull/5283", "diff_url": "https://github.com/huggingface/datasets/pull/5283.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5283.patch", "merged_at": "2022-11-22T17:47:02" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5283/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5283/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5282
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5282/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5282/comments
https://api.github.com/repos/huggingface/datasets/issues/5282/events
https://github.com/huggingface/datasets/pull/5282
1,460,238,928
PR_kwDODunzps5Det2_
5,282
Release: 2.7.1
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
2022-11-22T16:58:54
2022-11-22T17:21:28
2022-11-22T17:21:27
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5282", "html_url": "https://github.com/huggingface/datasets/pull/5282", "diff_url": "https://github.com/huggingface/datasets/pull/5282.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5282.patch", "merged_at": "2022-11-22T17:21:27" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5282/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5282/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5281
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5281/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5281/comments
https://api.github.com/repos/huggingface/datasets/issues/5281/events
https://github.com/huggingface/datasets/issues/5281
1,459,930,271
I_kwDODunzps5XBMSf
5,281
Support cloud storage in load_dataset
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 3761482852, "node_id": "LA_kwDODunzps7gM6xk", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue", "name": "good second issue", "color": "BDE59C", "default": false, "description": "Issues a bit more difficult than \"Good First\" issues" } ]
open
false
null
[]
null
[ "Or for example an archive on GitHub releases! Before I added support for JXL (locally only, PR still pending) I was considering hosting my files on GitHub instead...", "+1 to this. I would like to use 'audiofolder' with a data_dir that's on S3, for example. I don't want to upload my dataset to the Hub, but I would find all the fingerprinting/caching features useful.", "Adding to the conversation, Dask also uses `fsspec` for this feature.\r\n\r\n[Dask: How to connect to remote data](https://docs.dask.org/en/stable/how-to/connect-to-remote-data.html)\r\n\r\nHappy to help on this feature :D ", "+1 to this feature request since I think it also tackles my use-case. I am collaborating with a team, working with a loading script which takes some time to generate the dataset artifacts. It would be very handy to use this as a cloud cache to avoid duplicating the effort. \r\n\r\nCurrently we could use `builder.download_and_prepare(path_to_cloud_storage, storage_options, ...)` to cache the artifacts to cloud storage, but then `builder.as_dataset()` yields `NotImplementedError: Loading a dataset cached in SomeCloudFileSystem is not supported`", "Makes sense ! If you want to load locally a dataset that you download_and_prepared on a cloud storage, you would use `load_dataset(path_to_cloud_storage)` indeed. It would download the data from the cloud storage, cache them locally, and return a `Dataset`.", "It seems currently the `cached_path` function handles all URLs by `get_from_cache` that only supports `ftp` and `http(s)` here:\r\nhttps://github.com/huggingface/datasets/blob/b5672a956d5de864e6f5550e493527d962d6ae55/src/datasets/utils/file_utils.py#L181\r\n\r\nI guess one can add another condition that handles `s3://` or `gs://` URLs via `fsspec` here.", "I could use this functionality, so I put together a PR using @kyamagu's suggestion to use `fsspec` in `datasets.utils.file_utils`\r\n\r\nhttps://github.com/huggingface/datasets/pull/5580", "Thanks @dwyatte for adding support for fsspec urls\r\n\r\nLet me just reopen this since the original issue is not resolved", "I'm not yet understanding how to use https://github.com/huggingface/datasets/pull/5580 in order to use `load_dataset(data_files=\"s3://...\")`. Any help/example would be much appreciated :) thanks! ", "It's still not officially supported x) But you can try to update `request_etag` in `file_utils.py` to use `fsspec_head` instead of `http_head`. It is responsible of getting the ETags of the remote files for caching. This change may do the trick for S3 urls", "Thank you for your guys help on this and merging in #5580. I manually pulled the changes to my local datasets package (datasets.utils.file_utils.py) since it only seemed to be this file that was changed in the PR and I'm getting the error: \r\nInvalidSchema: No connection adapters were found for 's3://bucket/folder/'. I'm calling load_dataset using the S3 URI. When I use the S3 URL I get HTTPError: 403 Client Error. \r\nAm I not supposed to use the S3 URI? How do I pull in the changes from this merge? I'm running datasets 2.10.1. ", "The current implementation depends on gcsfs/s3fs being able to authenticate through some other means e.g., environmental variables. 
For AWS, it looks like you can set `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_SESSION_TOKEN`\r\n\r\nNote that while testing this just now, I did note a discrepancy between gcsfs and s3fs that we might want to address where gcsfs passes the timeout from `storage_options` [here](https://github.com/huggingface/datasets/blob/3e6269979fc80ae8939294d26298897f0db5b84d/src/datasets/utils/file_utils.py#L333) down into the `aiohttp.ClientSession.request`, but s3fs does not handle this (tries to pass to the `aiobotocore.session.AioSession` constructor raising `TypeError: __init__() got an unexpected keyword argument 'requests_timeout'`).\r\n\r\nIt seems like some work trying to unify kwargs across different fsspec implementations, so if the plan is to pass down `storage_options`, I wonder if we should just let users control the timeout (and other kwargs) using that and if not specified, use the default?", "> Note that while testing this just now, I did note a discrepancy between gcsfs and s3fs that we might want to address where gcsfs passes the timeout from storage_options [here](https://github.com/huggingface/datasets/blob/3e6269979fc80ae8939294d26298897f0db5b84d/src/datasets/utils/file_utils.py#L333) down into the aiohttp.ClientSession.request, but s3fs does not handle this (tries to pass to the aiobotocore.session.AioSession constructor raising TypeError: __init__() got an unexpected keyword argument 'requests_timeout').\r\n\r\n> It seems like some work trying to unify kwargs across different fsspec implementations, so if the plan is to pass down storage_options, I wonder if we should just let users control the timeout (and other kwargs) and if not specified, use the default?\r\n\r\n@lhoestq here's a small PR for this: https://github.com/huggingface/datasets/pull/5673\r\n\r\n" ]
2022-11-22T14:00:10
2023-05-10T12:20:44
null
MEMBER
null
null
null
Would be nice to be able to do ```python load_dataset("s3://...") ``` or even ```python data_files=["gs://..."] storage_options = {...} load_dataset(..., data_files=data_files, storage_options=storage_options) ``` The idea would be to use `fsspec` as in `download_and_prepare` and `save_to_disk`. This has been requested several times already. Some users want to use their data from private cloud storage to train models related: https://github.com/huggingface/datasets/issues/3490 https://github.com/huggingface/datasets/issues/5244 [forum](https://discuss.huggingface.co/t/how-to-use-s3-path-with-load-dataset-with-streaming-true/25739/2)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5281/reactions", "total_count": 27, "+1": 17, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 10, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5281/timeline
null
reopened
false
https://api.github.com/repos/huggingface/datasets/issues/5280
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5280/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5280/comments
https://api.github.com/repos/huggingface/datasets/issues/5280/events
https://github.com/huggingface/datasets/issues/5280
1,459,823,179
I_kwDODunzps5XAyJL
5,280
Import error
{ "login": "feketedavid1012", "id": 40760055, "node_id": "MDQ6VXNlcjQwNzYwMDU1", "avatar_url": "https://avatars.githubusercontent.com/u/40760055?v=4", "gravatar_id": "", "url": "https://api.github.com/users/feketedavid1012", "html_url": "https://github.com/feketedavid1012", "followers_url": "https://api.github.com/users/feketedavid1012/followers", "following_url": "https://api.github.com/users/feketedavid1012/following{/other_user}", "gists_url": "https://api.github.com/users/feketedavid1012/gists{/gist_id}", "starred_url": "https://api.github.com/users/feketedavid1012/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/feketedavid1012/subscriptions", "organizations_url": "https://api.github.com/users/feketedavid1012/orgs", "repos_url": "https://api.github.com/users/feketedavid1012/repos", "events_url": "https://api.github.com/users/feketedavid1012/events{/privacy}", "received_events_url": "https://api.github.com/users/feketedavid1012/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Can you \r\n```python\r\nimport platform\r\nprint(platform.python_version())\r\n```\r\nto see that it returns ?", "Hi,\n\n3.8.13\n\nGet Outlook for Android<https://aka.ms/AAb9ysg>\n________________________________\nFrom: Quentin Lhoest ***@***.***>\nSent: Tuesday, November 22, 2022 2:37:02 PM\nTo: huggingface/datasets ***@***.***>\nCc: feketedavid1012 ***@***.***>; Author ***@***.***>\nSubject: Re: [huggingface/datasets] Import error (Issue #5280)\n\n\nHi ! Can you\n\nimport platform\nprint(platform.python_version())\n\nto see that it returns ?\n\nβ€”\nReply to this email directly, view it on GitHub<https://github.com/huggingface/datasets/issues/5280#issuecomment-1323691385>, or unsubscribe<https://github.com/notifications/unsubscribe-auth/AJW7F5YGG32W6WABYC25NJTWJTD75ANCNFSM6AAAAAASHZJ2AU>.\nYou are receiving this because you authored the thread.Message ID: ***@***.***>\n", "Then it should work as expected if you use the same python when using `datasets`\r\n\r\nPlease make sure you're running your code in the right environment", "It's the right environment. But in if statement I have\n\"3.8.13\" < 3.7\nAnd in the error message is Python>=3.7 which is true in my case (3.8.13 is greater then 3.7), so I don't understand my python should be below the 3.7 which case the if statement is right, but the message is wrong, or above 3.7 which case if statement is wrong, error message is right.\n\nGet Outlook for Android<https://aka.ms/AAb9ysg>\n________________________________\nFrom: Quentin Lhoest ***@***.***>\nSent: Tuesday, November 22, 2022 2:41:43 PM\nTo: huggingface/datasets ***@***.***>\nCc: feketedavid1012 ***@***.***>; Author ***@***.***>\nSubject: Re: [huggingface/datasets] Import error (Issue #5280)\n\n\nThen it should work as expected if you use the same python when using datasets\n\nPlease make sure you're running your code in the right environment\n\nβ€”\nReply to this email directly, view it on GitHub<https://github.com/huggingface/datasets/issues/5280#issuecomment-1323697094>, or unsubscribe<https://github.com/notifications/unsubscribe-auth/AJW7F54JURTAJJWWDO2QGI3WJTERPANCNFSM6AAAAAASHZJ2AU>.\nYou are receiving this because you authored the thread.Message ID: ***@***.***>\n", "If you're having an error then you're not running your code in the right environment." ]
2022-11-22T12:56:43
2022-12-15T19:57:40
2022-12-15T19:57:40
NONE
null
null
null
https://github.com/huggingface/datasets/blob/cd3d8e637cfab62d352a3f4e5e60e96597b5f0e9/src/datasets/__init__.py#L28 Hy, I have error at the above line. I have python version 3.8.13, the message says I need python>=3.7, which is True, but I think the if statement not working properly (or the message wrong)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5280/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5280/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5279
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5279/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5279/comments
https://api.github.com/repos/huggingface/datasets/issues/5279/events
https://github.com/huggingface/datasets/pull/5279
1,459,635,002
PR_kwDODunzps5Dcoue
5,279
Warn about checksums
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "I'm also in favor of disabling this by default - it's kinda impractical", "Great, thanks for the quick turnaround on this!" ]
2022-11-22T10:58:48
2022-11-23T11:43:50
2022-11-23T09:47:02
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5279", "html_url": "https://github.com/huggingface/datasets/pull/5279", "diff_url": "https://github.com/huggingface/datasets/pull/5279.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5279.patch", "merged_at": "2022-11-23T09:47:01" }
It takes a lot of time on big datasets to compute the checksums, we should at least add a warning to notify the user about this step. I also mentioned how to disable it, and added a tqdm bar (delay=5 seconds) cc @ola13
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5279/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5279/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5278
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5278/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5278/comments
https://api.github.com/repos/huggingface/datasets/issues/5278/events
https://github.com/huggingface/datasets/issues/5278
1,459,574,490
I_kwDODunzps5W_1ba
5,278
load_dataset does not read jsonl metadata file properly
{ "login": "065294847", "id": 81414263, "node_id": "MDQ6VXNlcjgxNDE0MjYz", "avatar_url": "https://avatars.githubusercontent.com/u/81414263?v=4", "gravatar_id": "", "url": "https://api.github.com/users/065294847", "html_url": "https://github.com/065294847", "followers_url": "https://api.github.com/users/065294847/followers", "following_url": "https://api.github.com/users/065294847/following{/other_user}", "gists_url": "https://api.github.com/users/065294847/gists{/gist_id}", "starred_url": "https://api.github.com/users/065294847/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/065294847/subscriptions", "organizations_url": "https://api.github.com/users/065294847/orgs", "repos_url": "https://api.github.com/users/065294847/repos", "events_url": "https://api.github.com/users/065294847/events{/privacy}", "received_events_url": "https://api.github.com/users/065294847/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Can you try to remove \"drop_labels=false\" ? It may force the loader to infer the labels instead of reading the metadata", "Hi, thanks for responding. I tried that, but it does not change anything.", "Can you try updating `datasets` ? Metadata support was added in `datasets` 2.4", "Probably the issue, will report back asap!", "Okay, now it seems to actually load the metadata and create the train_split, but it still says only returns \"image\" and \"label\", which is always 0 since all images are from same folder", "> Can you try updating `datasets` ? Metadata support was added in `datasets` 2.4\r\n\r\nUpdate: This was the issue." ]
2022-11-22T10:24:46
2023-02-14T14:48:16
2022-11-23T11:38:35
NONE
null
null
null
### Describe the bug Hi, I'm following [this page](https://huggingface.co/docs/datasets/image_dataset) to create a dataset of images and captions via an image folder and a metadata.json file, but I can't seem to get the dataloader to recognize the "text" column. It just spits out "image" and "label" as features. Below is code to reproduce my exact example/problem. ### Steps to reproduce the bug ```ruby dataset_link="19Unu89Ih_kP6zsE7f9Mkw8dy3NwHopRF" id = dataset_link output = 'Godardv01.zip' gdown.download(id=id, output=output, quiet=False) ds = load_dataset("imagefolder", data_dir="/kaggle/working/Volumes/TOSHIBA/Godard_imgs/Volumes/TOSHIBA/Godard_imgs/Full/train", split="train", drop_labels=False) print(ds) ``` ### Expected behavior I would expect that it returned "image" and "text" columns from the code above. ### Environment info - `datasets` version: 2.1.0 - Platform: Linux-5.15.65+-x86_64-with-debian-bullseye-sid - Python version: 3.7.12 - PyArrow version: 5.0.0 - Pandas version: 1.3.5
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5278/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5278/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5277
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5277/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5277/comments
https://api.github.com/repos/huggingface/datasets/issues/5277/events
https://github.com/huggingface/datasets/pull/5277
1,459,388,551
PR_kwDODunzps5Dbybu
5,277
Remove YAML integer keys from class_label metadata
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Also note that this approach is valid when metadata keys are str, but also if they are int.\r\n- This will be helpful for any community dataset using old integer keys in their metadata", "perfect !" ]
2022-11-22T08:34:07
2022-11-22T13:58:26
2022-11-22T13:55:49
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5277", "html_url": "https://github.com/huggingface/datasets/pull/5277", "diff_url": "https://github.com/huggingface/datasets/pull/5277.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5277.patch", "merged_at": "2022-11-22T13:55:49" }
Fix partially #5275.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5277/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5277/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5276
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5276/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5276/comments
https://api.github.com/repos/huggingface/datasets/issues/5276/events
https://github.com/huggingface/datasets/issues/5276
1,459,363,442
I_kwDODunzps5W_B5y
5,276
Bug in downloading common_voice data and snall chunk of it to one's own hub
{ "login": "capsabogdan", "id": 48530104, "node_id": "MDQ6VXNlcjQ4NTMwMTA0", "avatar_url": "https://avatars.githubusercontent.com/u/48530104?v=4", "gravatar_id": "", "url": "https://api.github.com/users/capsabogdan", "html_url": "https://github.com/capsabogdan", "followers_url": "https://api.github.com/users/capsabogdan/followers", "following_url": "https://api.github.com/users/capsabogdan/following{/other_user}", "gists_url": "https://api.github.com/users/capsabogdan/gists{/gist_id}", "starred_url": "https://api.github.com/users/capsabogdan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/capsabogdan/subscriptions", "organizations_url": "https://api.github.com/users/capsabogdan/orgs", "repos_url": "https://api.github.com/users/capsabogdan/repos", "events_url": "https://api.github.com/users/capsabogdan/events{/privacy}", "received_events_url": "https://api.github.com/users/capsabogdan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Sounds like one of the file is not a valid one, can you make sure you uploaded valid mp3 files ?", "Well I just sharded the original commonVoice dataset and pushed a small chunk of it in a private rep\n\nWhat did go wrong?\n\nHolen Sie sich Outlook fΓΌr iOS<https://aka.ms/o0ukef>\n________________________________\nVon: Quentin Lhoest ***@***.***>\nGesendet: Tuesday, November 22, 2022 3:03:40 PM\nAn: huggingface/datasets ***@***.***>\nCc: capsabogdan ***@***.***>; Author ***@***.***>\nBetreff: Re: [huggingface/datasets] Bug in downloading common_voice data and snall chunk of it to one's own hub (Issue #5276)\n\n\nSounds like one of the file is not a valid one, can you make sure you uploaded valid mp3 files ?\n\nβ€”\nReply to this email directly, view it on GitHub<https://github.com/huggingface/datasets/issues/5276#issuecomment-1323727434>, or unsubscribe<https://github.com/notifications/unsubscribe-auth/ALSIFOAPAL2V4TBJTSPMAULWJTHDZANCNFSM6AAAAAASHQJ63U>.\nYou are receiving this because you authored the thread.Message ID: ***@***.***>\n", "It should be all good then !\r\nCould you share a link to your repository for me to investigate what went wrong ?", "https://huggingface.co/datasets/DTU54DL/common-voice-test16k\n\nAm Di., 22. Nov. 2022 um 16:43 Uhr schrieb Quentin Lhoest <\n***@***.***>:\n\n> It should be all good then !\n> Could you share a link to your repository for me to investigate what went\n> wrong ?\n>\n> β€”\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/5276#issuecomment-1323876682>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ALSIFOEUJRZWXAM7DYA5VJDWJTS3NANCNFSM6AAAAAASHQJ63U>\n> .\n> You are receiving this because you authored the thread.Message ID:\n> ***@***.***>\n>\n", "I see ! This is a bug with MP3 files.\r\n\r\nWhen we store audio data in parquet, we store the bytes and the file name. From the file name extension we know if it's a WAV, an MP3 or else. But here it looks like the paths are all None.\r\n\r\nIt looks like it comes from here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/7feeb5648a63b6135a8259dedc3b1e19185ee4c7/src/datasets/features/audio.py#L212\r\n\r\nCc @polinaeterna maybe we should simply put the file name instead of None values ?", "@lhoestq I remember we wanted to avoid storing redundant data but maybe it's not that crucial indeed to store one more string value. \r\nOr we can store paths only for mp3s, considering that for other formats we don't have such a problem with reading from bytes without format specified. ", "It doesn't cost much to always store the file name IMO", "thanks for the help!\n\ncan I do anything on my side? we are doing a DL project and we need the\ndata really quick.\n\nthanks\nbogdan\n\n> Message ID: ***@***.***>\n>\n", "I opened a pull requests here: https://github.com/huggingface/datasets/pull/5285, we'll do a new release soon with this fix.\r\n\r\nOtherwise if you're really in a hurry you can install `datasets` from this PR", "[image: image.png]\n\n> Message ID: ***@***.***>\n>\n", "any idea on what's going wrong here?\n\nthanks\n\nAm So., 27. Nov. 2022 um 13:53 Uhr schrieb Bogdan Capsa <\n***@***.***>:\n\n> [image: image.png]\n>\n>> Message ID: ***@***.***>\n>>\n>\n", "hi @capsabogdan! \r\ncould you please share more specifically what problem do you have now?", "I have attached this screenshot above . can u pls help? 
So can not pip from pull request\r\n\r\n![image](https://user-images.githubusercontent.com/48530104/204354027-6173e6d1-e3d4-4085-a363-e924cfe1a7f4.png)\r\n", "The pull request has been merged on `main`.\r\nYou can install `datasets` from `main` using\r\n```\r\npip install git+https://github.com/huggingface/datasets.git\r\n```", "I've tried to load this dataset DTU54DL/common-voice-test16k, but am\ngetting the same error.\n\nSo the bug fix will fix only if I upload a new dataset, or also loading\npreviously uploaded datasets?\n\nthanks\n\nAm Mo., 28. Nov. 2022 um 19:51 Uhr schrieb Quentin Lhoest <\n***@***.***>:\n\n> The pull request has been merged on main.\n> You can install datasets from main using\n>\n> pip install git+https://github.com/huggingface/datasets.git\n>\n> β€”\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/5276#issuecomment-1329587334>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ALSIFOCNYYIGHM2EX3ZIO6DWKT5MXANCNFSM6AAAAAASHQJ63U>\n> .\n> You are receiving this because you were mentioned.Message ID:\n> ***@***.***>\n>\n", "> So the bug fix will fix only if I upload a new dataset, or also loading\r\npreviously uploaded datasets?\r\n\r\nYou have to reupload the dataset, sorry for the inconvenience", "thank you so much for the help! works like a charm!\n\nAm Di., 29. Nov. 2022 um 12:15 Uhr schrieb Quentin Lhoest <\n***@***.***>:\n\n> So the bug fix will fix only if I upload a new dataset, or also loading\n> previously uploaded datasets?\n>\n> You have to reupload the dataset, sorry for the inconvenience\n>\n> β€”\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/5276#issuecomment-1330468393>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ALSIFOBKEFZO57BAKY4IGW3WKXQUZANCNFSM6AAAAAASHQJ63U>\n> .\n> You are receiving this because you were mentioned.Message ID:\n> ***@***.***>\n>\n" ]
2022-11-22T08:17:53
2023-07-21T14:33:10
2023-07-21T14:33:10
NONE
null
null
null
### Describe the bug I'm trying to load the common voice dataset. Currently there is no implementation to download just par tof the data, and I need just one part of it, without downloading the entire dataset Help please? ![image](https://user-images.githubusercontent.com/48530104/203260511-26df766f-6013-4eaf-be26-8aa13794def2.png) ### Steps to reproduce the bug So here is what I have done: 1. Download common_voice data 2. Trim part of it and publish it to my own repo. 3. Download data from my own repo, but am getting this error. ### Expected behavior There shouldn't be an error in downloading part of the data and publishing it to one's own repo ### Environment info common_voice 11
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5276/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5276/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5275
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5275/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5275/comments
https://api.github.com/repos/huggingface/datasets/issues/5275/events
https://github.com/huggingface/datasets/issues/5275
1,459,358,919
I_kwDODunzps5W_AzH
5,275
YAML integer keys are not preserved Hub server-side
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "@huggingface/datasets if you agree, I can make the bulk edit on the Hub to fix integer keys into strings.", "Ok for me, and we can merge (internal) https://github.com/huggingface/moon-landing/pull/4609", "FYI there are still 2k+ weekly users on `datasets` 2.6.1 which doesn't support the string label format for class labels. And among those, some are using datasets with class labels like imdb (60 users), conllpp (40), msra_ner (40), peoples_daily_enr (40), weibo_ner (30), conll2003 (20), etc. And renaming to string would break these users code.", "but isn't `datasets 2.6.1` downloading files from the Hub with the corresponding tag? I thought we had something like this before", "We're using `main` as models do. Some datasets need to be updated from time to time, e.g. when a link to download the data is dead.\r\n\r\nBut yea a year ago we had those tags, we just ended up not using them", "I opened https://github.com/huggingface/datasets/issues/5406 to communicate on this. Let me know what you think, and if it sounds good to you I can pin this issue", "So, is it OK to make the bulk edit on the Hub now or should we wait longer? If the latter, how long?", "I think we can do it. If you want to be extra cautious you can do it for all datasets except imdb and conllpp for now which are actively used by 2.6.1 users. For those two we can keep the YAML like this for some more time, or alternatively use the old dataset_infos.json file", "The bulk edit of canonical datasets (except imdb and conllpp) is running. \r\n\r\nSee e.g.: https://huggingface.co/datasets/acronym_identification/discussions/3\r\n\r\nEDITED: \r\nDone, except for \"universal_morphologies\", where I get\r\n```\r\nHTTPError: 413 Client Error: Payload Too Large for url: https://huggingface.co/api/validate-yaml\r\n```\r\n\r\nAlso not done for the datasets missing matadata \"dataset_info\":\r\n- mc4: https://huggingface.co/datasets/mc4/discussions/3\r\n- the_pile: https://huggingface.co/datasets/the_pile/discussions/6\r\n- timit_asr: https://huggingface.co/datasets/timit_asr/discussions/1", "Thank you !", "@lhoestq, there are 6 community datasets with YAML integer keys in their `dataset_info` `class_label`:\r\n- indonlp/indonlu\r\n- rcds/swiss_judgment_prediction\r\n- Jean-Baptiste/wikiner_fr\r\n- Bingsu/Cat_and_Dog\r\n- taskydata/tasky_or_not\r\n- RCC-MSU/collection3\r\n\r\nMaybe we could open a PR on them as well?", "Let's do this then:\r\n\r\n- [x] [indonlp/indonlu](https://huggingface.co/datasets/indonlp/indonlu/discussions/3)\r\n- [x] rcds/swiss_judgment_prediction\r\n- [x] Jean-Baptiste/wikiner_fr\r\n- [x] Bingsu/Cat_and_Dog -> merged\r\n- [x] taskydata/tasky_or_not (was already using quotes)\r\n- [x] RCC-MSU/collection3\r\n\r\nEDIT: all done :)", "@lhoestq I was not asking you to do it, but asking if you agree me to do it... :man_facepalming: \r\nAs I self-assigned this issue... :sweat_smile: " ]
2022-11-22T08:14:47
2023-01-26T10:52:35
2023-01-26T10:40:21
MEMBER
null
null
null
After an internal discussion (https://github.com/huggingface/moon-landing/issues/4563): - YAML integer keys are not preserved server-side: they are transformed to strings - See for example this Hub PR: https://huggingface.co/datasets/acronym_identification/discussions/1/files - Original: ```yaml class_label: names: 0: B-long 1: B-short ``` - Returned by the server: ```yaml class_label: names: '0': B-long '1': B-short ``` - They are planning to enforce only string keys - Other projects already use interger-transformed-to string keys: e.g. `transformers` models `id2label`: https://huggingface.co/roberta-large-mnli/blob/main/config.json ```yaml "id2label": { "0": "CONTRADICTION", "1": "NEUTRAL", "2": "ENTAILMENT" } ``` On the other hand, at `datasets` we are currently using YAML integer keys for `dataset_info` `class_label`. Please note (thanks @lhoestq for pointing out) that previous versions (2.6 and 2.7) of `datasets` need being patched: ```python In [18]: Features._from_yaml_list([{'dtype': {'class_label': {'names': {'0': 'neg', '1': 'pos'}}}, 'name': 'label'}]) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-18-974f07eea526> in <module> ----> 1 Features._from_yaml_list(ry) ~/Desktop/hf/nlp/src/datasets/features/features.py in _from_yaml_list(cls, yaml_data) 1743 raise TypeError(f"Expected a dict or a list but got {type(obj)}: {obj}") 1744 -> 1745 return cls.from_dict(from_yaml_inner(yaml_data)) 1746 1747 def encode_example(self, example): ~/Desktop/hf/nlp/src/datasets/features/features.py in from_yaml_inner(obj) 1739 elif isinstance(obj, list): 1740 names = [_feature.pop("name") for _feature in obj] -> 1741 return {name: from_yaml_inner(_feature) for name, _feature in zip(names, obj)} 1742 else: 1743 raise TypeError(f"Expected a dict or a list but got {type(obj)}: {obj}") ~/Desktop/hf/nlp/src/datasets/features/features.py in <dictcomp>(.0) 1739 elif isinstance(obj, list): 1740 names = [_feature.pop("name") for _feature in obj] -> 1741 return {name: from_yaml_inner(_feature) for name, _feature in zip(names, obj)} 1742 else: 1743 raise TypeError(f"Expected a dict or a list but got {type(obj)}: {obj}") ~/Desktop/hf/nlp/src/datasets/features/features.py in from_yaml_inner(obj) 1734 return {"_type": snakecase_to_camelcase(obj["dtype"])} 1735 else: -> 1736 return from_yaml_inner(obj["dtype"]) 1737 else: 1738 return {"_type": snakecase_to_camelcase(_type), **unsimplify(obj)[_type]} ~/Desktop/hf/nlp/src/datasets/features/features.py in from_yaml_inner(obj) 1736 return from_yaml_inner(obj["dtype"]) 1737 else: -> 1738 return {"_type": snakecase_to_camelcase(_type), **unsimplify(obj)[_type]} 1739 elif isinstance(obj, list): 1740 names = [_feature.pop("name") for _feature in obj] ~/Desktop/hf/nlp/src/datasets/features/features.py in unsimplify(feature) 1704 if isinstance(feature.get("class_label"), dict) and isinstance(feature["class_label"].get("names"), dict): 1705 label_ids = sorted(feature["class_label"]["names"]) -> 1706 if label_ids and label_ids != list(range(label_ids[-1] + 1)): 1707 raise ValueError( 1708 f"ClassLabel expected a value for all label ids [0:{label_ids[-1] + 1}] but some ids are missing." TypeError: can only concatenate str (not "int") to str ``` TODO: - [x] Remove YAML integer keys from `dataset_info` metadata - [x] Make a patch release for affected `datasets` versions: 2.6 and 2.7 - [x] Communicate on the fix - [x] Wait for adoption - [x] Bulk edit the Hub to fix this in all canonical datasets
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5275/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5275/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5274
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5274/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5274/comments
https://api.github.com/repos/huggingface/datasets/issues/5274/events
https://github.com/huggingface/datasets/issues/5274
1,458,646,455
I_kwDODunzps5W8S23
5,274
load_dataset possibly broken for gated datasets?
{ "login": "TristanThrush", "id": 20826878, "node_id": "MDQ6VXNlcjIwODI2ODc4", "avatar_url": "https://avatars.githubusercontent.com/u/20826878?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TristanThrush", "html_url": "https://github.com/TristanThrush", "followers_url": "https://api.github.com/users/TristanThrush/followers", "following_url": "https://api.github.com/users/TristanThrush/following{/other_user}", "gists_url": "https://api.github.com/users/TristanThrush/gists{/gist_id}", "starred_url": "https://api.github.com/users/TristanThrush/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TristanThrush/subscriptions", "organizations_url": "https://api.github.com/users/TristanThrush/orgs", "repos_url": "https://api.github.com/users/TristanThrush/repos", "events_url": "https://api.github.com/users/TristanThrush/events{/privacy}", "received_events_url": "https://api.github.com/users/TristanThrush/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@BradleyHsu", "Btw, thanks very much for finding the hub rollback temporary fix and bringing the issue to our attention @KhoomeiK!", "I see the same issue when calling `load_dataset('poloclub/diffusiondb', 'large_random_1k')` with `datasets==2.7.1` and `huggingface-hub=0.11.0`. No issue with `datasets=2.6.1` and `huggingface_hub==0.10.1`.\r\n\r\nhttps://github.com/poloclub/diffusiondb/issues/7", "I fixed my issue by specifying `repo_type` in `hf_hub_url()`. https://github.com/poloclub/diffusiondb/commit/9eb91c79aaca98b0515a0ce45778b8af65b84652\r\n\r\nI opened a PR on the Winoground's repo: https://huggingface.co/datasets/facebook/winoground/discussions/2", "This is a bug in the script, indeed. The most robust fix is to use a relative path instead of `hf_hub_url`, which does not depend on `huggingface_hub`'s version πŸ™‚. I've opened a PR here: https://huggingface.co/datasets/facebook/winoground/discussions/3.", "Awesome, big thanks to both @xiaohk and @mariosasko!", "so, if i reproduce the bug, what should i do ? with huggingface_hub0.13.3 dataset2.6.1", "huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name':\r\n\r\n tokenizer = AutoTokenizer.from_pretrained(ARGS.model_path, trust_remote_code=True)\r\n\r\nPlease handle automatically for local path and repo name inside, otherwise users always get confused about this", "I think I'm also hitting this error, trying to load a model from a local path." ]
2022-11-21T21:59:53
2023-05-27T00:06:14
2022-11-28T02:50:42
CONTRIBUTOR
null
null
null
### Describe the bug When trying to download the [winoground dataset](https://huggingface.co/datasets/facebook/winoground), I get this error unless I roll back the version of huggingface-hub: ``` [/usr/local/lib/python3.7/dist-packages/huggingface_hub/utils/_validators.py](https://localhost:8080/#) in validate_repo_id(repo_id) 165 if repo_id.count("/") > 1: 166 raise HFValidationError( --> 167 "Repo id must be in the form 'repo_name' or 'namespace/repo_name':" 168 f" '{repo_id}'. Use `repo_type` argument if needed." 169 ) HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': 'datasets/facebook/winoground'. Use `repo_type` argument if needed ``` ### Steps to reproduce the bug Install requirements: ``` pip install transformers pip install datasets # It works if you uncomment the following line, rolling back huggingface hub: # pip install huggingface-hub==0.10.1 ``` Then: ``` from datasets import load_dataset auth_token = "" # Replace with an auth token, which you can get from your huggingface account: Profile -> Settings -> Access Tokens -> New Token winoground = load_dataset("facebook/winoground", use_auth_token=auth_token)["test"] ``` ### Expected behavior Downloading of the datset ### Environment info Just a google colab; see here: https://colab.research.google.com/drive/15wwOSte2CjTazdnCWYUm2VPlFbk2NGc0?usp=sharing
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5274/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/5274/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5273
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5273/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5273/comments
https://api.github.com/repos/huggingface/datasets/issues/5273/events
https://github.com/huggingface/datasets/issues/5273
1,458,018,050
I_kwDODunzps5W55cC
5,273
download_mode="force_redownload" does not refresh cached dataset
{ "login": "nomisto", "id": 28439912, "node_id": "MDQ6VXNlcjI4NDM5OTEy", "avatar_url": "https://avatars.githubusercontent.com/u/28439912?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nomisto", "html_url": "https://github.com/nomisto", "followers_url": "https://api.github.com/users/nomisto/followers", "following_url": "https://api.github.com/users/nomisto/following{/other_user}", "gists_url": "https://api.github.com/users/nomisto/gists{/gist_id}", "starred_url": "https://api.github.com/users/nomisto/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nomisto/subscriptions", "organizations_url": "https://api.github.com/users/nomisto/orgs", "repos_url": "https://api.github.com/users/nomisto/repos", "events_url": "https://api.github.com/users/nomisto/events{/privacy}", "received_events_url": "https://api.github.com/users/nomisto/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2022-11-21T14:12:43
2022-11-21T14:13:03
null
NONE
null
null
null
### Describe the bug `load_datasets` does not refresh dataset when features are imported from external file, even with `download_mode="force_redownload"`. The bug is not limited to nested fields, however it is more likely to occur with nested fields. ### Steps to reproduce the bug To reproduce the bug 3 files are needed: `dataset.py` (contains dataset loading script), `schema.py` (contains features of dataset) and `main.py` (to run `load_datasets`) `dataset.py` ```python import datasets from schema import features class NewDataset(datasets.GeneratorBasedBuilder): def _info(self): return datasets.DatasetInfo( features=features ) def _split_generators(self, dl_manager): return [ datasets.SplitGenerator( name=datasets.Split.TRAIN ) ] def _generate_examples(self): data = [ {"id": 0, "nested": []}, {"id": 1, "nested": []} ] for key, example in enumerate(data): yield key, example ``` `schema.py` ```python import datasets features = datasets.Features( { "id": datasets.Value("int32"), "nested": [ {"text": datasets.Value("string")} ] } ) ``` `main.py` ```python import datasets a = datasets.load_dataset("dataset.py") print(a["train"].info.features) ``` Now if `main.py` is run it prints the following correct output: `{'id': Value(dtype='int32', id=None), 'nested': [{'text': Value(dtype='string', id=None)}]}`. However, if f.e. the label of the feature "text" is changed to something else, f.e. to `schema.py` ```python import datasets features = datasets.Features( { "id": datasets.Value("int32"), "nested": [ {"textfoo": datasets.Value("string")} ] } ) ``` `main.py` still prints `{'id': Value(dtype='int32', id=None), 'nested': [{'text': Value(dtype='string', id=None)}]}`, even if run with `download_mode="force_redownload"`. The only fix is to delete the folder in the cache. ### Expected behavior The cached dataset is deleted and refreshed when using `load_datasets` with `download_mode="force_redownload"`. ### Environment info - `datasets` version: 2.7.0 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.7.9 - PyArrow version: 10.0.0 - Pandas version: 1.3.5
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5273/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5273/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5272
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5272/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5272/comments
https://api.github.com/repos/huggingface/datasets/issues/5272/events
https://github.com/huggingface/datasets/issues/5272
1,456,940,021
I_kwDODunzps5W1yP1
5,272
Use pyarrow Tensor dtype
{ "login": "franz101", "id": 18228395, "node_id": "MDQ6VXNlcjE4MjI4Mzk1", "avatar_url": "https://avatars.githubusercontent.com/u/18228395?v=4", "gravatar_id": "", "url": "https://api.github.com/users/franz101", "html_url": "https://github.com/franz101", "followers_url": "https://api.github.com/users/franz101/followers", "following_url": "https://api.github.com/users/franz101/following{/other_user}", "gists_url": "https://api.github.com/users/franz101/gists{/gist_id}", "starred_url": "https://api.github.com/users/franz101/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/franz101/subscriptions", "organizations_url": "https://api.github.com/users/franz101/orgs", "repos_url": "https://api.github.com/users/franz101/repos", "events_url": "https://api.github.com/users/franz101/events{/privacy}", "received_events_url": "https://api.github.com/users/franz101/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Hi ! We're using the Arrow format for the datasets, and PyArrow tensors are not part of the Arrow format AFAIK:\r\n\r\n> There is no direct support in the arrow columnar format to store Tensors as column values.\r\n\r\nsource: https://github.com/apache/arrow/issues/4802#issuecomment-508494694", "@wesm @rok its been around three years. any updates, regarding dataset arrow tensor support? πŸ™ I know you must be very busy, would appreciate to learn what is the state of art. I saw the PR is still open [#8510](https://github.com/apache/arrow/pull/8510)", "Hey @franz101 & @lhoestq!\r\nThere is a plan and a PR to create an [ExtensionArray of Tensors](https://github.com/apache/arrow/pull/8510) of equal sizes as well as a plan to do the same for Tensors of different sizes [ARROW-8714](https://issues.apache.org/jira/browse/ARROW-8714).", "The work stalled a little because it was not clear where TensorArray would live. However Arrow community recently agreed to make a [well-known-extension-type document](https://lists.apache.org/thread/sxd5fhc42hb6svs79t3fd79gkqj83pfh) and I would like https://github.com/apache/arrow/pull/8510 to land there and add an implementation to C++/Python + another language. Is that something you would find beneficial to you?", "that is a great update, thank you.\r\nit looks like this feature would benefit datasets implementation of [ArrayExtensionArray](https://github.com/huggingface/datasets/blob/9f2ff14673cac1f1ad56d80221a793f5938b68c7/src/datasets/features/features.py#L585-L641). Is that correct @eladsegal @lhoestq?\r\n\r\n", "TensorArray sounds great ! Looking forward to it :)\r\n\r\nWe've had our own ExtensionArray for fixed shape tensors for a while now, hoping to see something more standardized by the arrow community.\r\n\r\nAlso super interested in the extension array for tensors of different sizes cc @mariosasko ", "[FixedShapeTensor ExtensionType](https://github.com/apache/arrow/pull/8510) was merged and will be in Arrow 12.0.0 (release is planned mid April).\r\n", "@rok Thanks for keeping us updated! I think it's best to introduce a new feature type that would use this extension type under the hood. I'll create an issue to discuss the design with the community in the coming days.\r\n\r\nAlso, is there a tentative time frame for the variable-shape Tensor extension type?", "@mariosasko please tag me in the discussion, perhaps I can contribute.\r\n\r\nAs for the [variable shape tensor array](https://github.com/apache/arrow/issues/24868) - I'd be interested in working on it but didn't see much interest in community yet. Are you saying `huggingface/datasets` could use it?", "pyarrow 12 is out πŸŽ‰, will have a look if I can work on it for the ExtensionArray", "I think these two issues need to be fixed first on the Arrow side before adding the tensor feature type here: https://github.com/apache/arrow/issues/35573 and https://github.com/apache/arrow/issues/35599.\r\n\r\n@rok We've had a couple of requests for supporting variable-shape tensors on the forum/GH, but I did not manage to find the concrete issues using the search. 
TF/TFDS (and PyTorch with the `nested_tensor` API) support them, so it makes sense for us to do the same eventually (the Ray project has an [extension](https://github.com/ray-project/ray/blob/42a8d1489b37243f203120899a23d919dc85bf2a/python/ray/air/util/tensor_extensions/arrow.py#L634) type to support this case)", "> @rok We've had a couple of requests for supporting variable-shape tensors on the forum/GH, but I did not manage to find the concrete issues using the search. TF/TFDS (and PyTorch with the `nested_tensor` API) support them, so it makes sense for us to do the same eventually (the Ray project has an [extension](https://github.com/ray-project/ray/blob/42a8d1489b37243f203120899a23d919dc85bf2a/python/ray/air/util/tensor_extensions/arrow.py#L634) type to support this case)\r\n\r\nThat does make sense indeed. We should probably also be careful about memory layout to enable zero-copy interface to TF/PyTorch.", "So there is no way we can use [pyarrow.Tensor](https://arrow.apache.org/docs/python/generated/pyarrow.Tensor.html#pyarrow.Tensor) ?", "Not with with the Arrow format, and therefore not in `datasets`. But they released a new [FixedShapeTensorArray](https://arrow.apache.org/docs/python/extending_types.html#fixed-size-tensor) to store tensors in Arrow format. We plan to support this in `datasets` at one point !", "There is also an open issue to enable the conversion of `pyarrow.Tensor` to `pyarrow.FixedShapeTensorType`: https://github.com/apache/arrow/issues/35068. This way one could indirectly use `pyarrow.Tensor` in Arrow format." ]
2022-11-20T15:18:41
2023-07-04T04:57:50
null
NONE
null
null
null
### Feature request I was going the discussion of converting tensors to lists. Is there a way to leverage pyarrow's Tensors for nested arrays / embeddings? For example: ```python import pyarrow as pa import numpy as np x = np.array([[2, 2, 4], [4, 5, 100]], np.int32) pa.Tensor.from_numpy(x, dim_names=["dim1","dim2"]) ``` [Apache docs](https://arrow.apache.org/docs/python/generated/pyarrow.Tensor.html) Maybe this belongs into the pyarrow features / repo. ### Motivation Working with big data, we need to make sure to use the best data structures and IO out there ### Your contribution Can try to a PR if code changes necessary
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5272/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5272/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5271
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5271/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5271/comments
https://api.github.com/repos/huggingface/datasets/issues/5271/events
https://github.com/huggingface/datasets/pull/5271
1,456,807,738
PR_kwDODunzps5DTDX1
5,271
Fix #5269
{ "login": "Freed-Wu", "id": 32936898, "node_id": "MDQ6VXNlcjMyOTM2ODk4", "avatar_url": "https://avatars.githubusercontent.com/u/32936898?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Freed-Wu", "html_url": "https://github.com/Freed-Wu", "followers_url": "https://api.github.com/users/Freed-Wu/followers", "following_url": "https://api.github.com/users/Freed-Wu/following{/other_user}", "gists_url": "https://api.github.com/users/Freed-Wu/gists{/gist_id}", "starred_url": "https://api.github.com/users/Freed-Wu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Freed-Wu/subscriptions", "organizations_url": "https://api.github.com/users/Freed-Wu/orgs", "repos_url": "https://api.github.com/users/Freed-Wu/repos", "events_url": "https://api.github.com/users/Freed-Wu/events{/privacy}", "received_events_url": "https://api.github.com/users/Freed-Wu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "See <https://github.com/huggingface/datasets/issues/5269>" ]
2022-11-20T07:50:49
2022-11-21T15:07:19
2022-11-21T15:06:38
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5271", "html_url": "https://github.com/huggingface/datasets/pull/5271", "diff_url": "https://github.com/huggingface/datasets/pull/5271.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5271.patch", "merged_at": null }
``` $ datasets-cli convert --datasets_directory <TAB> datasets_directory benchmarks/ docs/ metrics/ notebooks/ src/ templates/ tests/ utils/ ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5271/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5271/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5270
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5270/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5270/comments
https://api.github.com/repos/huggingface/datasets/issues/5270/events
https://github.com/huggingface/datasets/issues/5270
1,456,508,990
I_kwDODunzps5W0JA-
5,270
When len(_URLS) > 16, download will hang
{ "login": "Freed-Wu", "id": 32936898, "node_id": "MDQ6VXNlcjMyOTM2ODk4", "avatar_url": "https://avatars.githubusercontent.com/u/32936898?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Freed-Wu", "html_url": "https://github.com/Freed-Wu", "followers_url": "https://api.github.com/users/Freed-Wu/followers", "following_url": "https://api.github.com/users/Freed-Wu/following{/other_user}", "gists_url": "https://api.github.com/users/Freed-Wu/gists{/gist_id}", "starred_url": "https://api.github.com/users/Freed-Wu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Freed-Wu/subscriptions", "organizations_url": "https://api.github.com/users/Freed-Wu/orgs", "repos_url": "https://api.github.com/users/Freed-Wu/repos", "events_url": "https://api.github.com/users/Freed-Wu/events{/privacy}", "received_events_url": "https://api.github.com/users/Freed-Wu/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "It can fix the bug temporarily.\r\n```python\r\nfrom datasets import DownloadConfig\r\nconfig = DownloadConfig(num_proc=8)\r\nIn [5]: dataset = load_dataset('Freed-Wu/kodak', split='test', download_config=config)\r\nDownloading and preparing dataset kodak/default to /home/wzy/.cache/huggingface/datasets/Freed-Wu___kodak/default/0.0.1/6cf51f2b3d686d24a33fe86945f9e16802def212325f9345cf3cbb1b9f5f4a57...\r\nDownloading data files #4: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:02<00:00, 1.39obj/s]\r\nDownloading data files #2: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:02<00:00, 1.38obj/s]\r\nDownloading data files #3: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:02<00:00, 1.13obj/s]\r\nDownloading data files #7: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:02<00:00, 1.09obj/s]\r\nDownloading data files #5: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:02<00:00, 1.08obj/s]\r\nDownloading data files #0: 
100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:02<00:00, 1.08obj/s]\r\nDownloading data files #1: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:10<00:00, 3.36s/obj]\r\nDownloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 492k/492k [00:01<00:00, 253kB/s]\r\nDownloading data files #6: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:13<00:00, 4.63s/obj]\r\nExtracting data files #0: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:00<00:00, 1407.17obj/s]\r\nExtracting data files #1: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:00<00:00, 1325.91obj/s]\r\nExtracting data files #3: 
100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:00<00:00, 1524.46obj/s]\r\nExtracting data files #2: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:00<00:00, 1404.66obj/s]\r\nExtracting data files #4: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:00<00:00, 1538.63obj/s]\r\nExtracting data files #6: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:00<00:00, 1711.73obj/s]\r\nExtracting data files #7: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:00<00:00, 2144.33obj/s]\r\nExtracting data files #5: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:00<00:00, 1964.85obj/s]\r\nDataset kodak downloaded and prepared to /home/wzy/.cache/huggingface/datasets/Freed-Wu___kodak/default/0.0.1/6cf51f2b3d686d24a33fe86945f9e16802def212325f9345cf3cbb1b9f5f4a57. Subsequent calls will reuse this data.\r\n```", "Thanks for reporting ! This sounds like an issue with python multiprocessing. 
If we switch to multithreading for the downloads it should be much more robust - let me know if this is something you'd like to contribute, I'd be happy to help and give you some pointers", "> an issue with python multiprocessing\r\n\r\nIf it is an issue with multiprocessing, should we report it upstream?", "Debugging this would require quite some work in my opinion, and I've often failed to make reproducible examples, since it's pretty correlated to one's environment + hardware. So I wouldn't spend too much time on this unless we manage to reproduce this on another machine consistently.\r\n\r\nInstead I'd encourage a more pragmatic fix, which is: don't create tons of processes (on regular machines it may slow things down anyway), and instead use multithreading by default.", "I am not an expert in Python. I hear Python has the GIL, which makes multiprocessing worse than multithreading. So I am not sure if this change makes sense?\r\n\r\nAnd if this is a bug in multiprocessing, why not report it upstream and let them fix it? And even if we change it to multithreading, how can we make sure it truly fixes this problem?", "Just my 2c. No offense.", "> Just my 2c. No offense.\r\n\r\nsure np ^^\r\n\r\n> I hear Python has the GIL, which makes multiprocessing worse than multithreading. So I am not sure if this change makes sense?\r\n\r\nHere the bottleneck is the bandwidth used to download the files. When downloading, the GIL is released, so multithreading gives the same speed as multiprocessing.\r\n\r\n> And if this is a bug in multiprocessing, why not report it upstream and let them fix it?\r\n\r\nUsually to fix a bug it's important to be able to reproduce it. This way you can share it, experiment with it, and then make sure it's fixed. Here I'm afraid it's not easy to reproduce. Though I think that spawning too many processes for your machine can lead to this kind of issue.\r\n\r\n> And even if we change it to multithreading, how can we make sure it truly fixes this problem?\r\n\r\nMultithreading is more robust in Python because IIRC there are fewer locks involved, which are often the cause of code hanging for no reason." ]
2022-11-19T14:27:41
2022-11-21T15:27:16
null
NONE
null
null
null
### Describe the bug ```python In [9]: dataset = load_dataset('Freed-Wu/kodak', split='test') Downloading: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2.53k/2.53k [00:00<00:00, 1.88MB/s] [11/19/22 22:16:21] WARNING Using custom data configuration default builder.py:379 Downloading and preparing dataset kodak/default to /home/wzy/.cache/huggingface/datasets/Freed-Wu___kodak/default/0.0.1/bd1cc3434212e3e654f7e16ad618f8a1470b5982b086c91b1d6bc7187183c6e9... Downloading: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 531k/531k [00:02<00:00, 239kB/s] #10: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:04<00:00, 4.06s/obj] Downloading: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 534k/534k [00:02<00:00, 193kB/s] #14: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:04<00:00, 4.37s/obj] Downloading: 
100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 692k/692k [00:02<00:00, 269kB/s] #12: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:04<00:00, 4.44s/obj] Downloading: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 566k/566k [00:02<00:00, 210kB/s] #5: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:04<00:00, 4.53s/obj] Downloading: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 613k/613k [00:02<00:00, 235kB/s] #13: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:04<00:00, 4.53s/obj] Downloading: 
100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 786k/786k [00:02<00:00, 342kB/s] #3: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:04<00:00, 4.60s/obj] Downloading: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 619k/619k [00:02<00:00, 254kB/s] #4: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:04<00:00, 4.68s/obj] Downloading: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 737k/737k [00:02<00:00, 271kB/s] Downloading: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 788k/788k [00:02<00:00, 285kB/s] #6: 
100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:05<00:00, 5.04s/obj] Downloading: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 618k/618k [00:04<00:00, 153kB/s] #0: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2/2 [00:11<00:00, 5.69s/obj] ^CProcess ForkPoolWorker-47: Process ForkPoolWorker-46: Process ForkPoolWorker-36: Process ForkPoolWorker-38:β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:05<00:00, 5.04s/obj] Process ForkPoolWorker-37: Process ForkPoolWorker-45: Process ForkPoolWorker-39: Process ForkPoolWorker-43: Process ForkPoolWorker-33: Process ForkPoolWorker-18: Traceback (most recent call last): Traceback (most recent call last): Traceback (most recent call last): Traceback (most recent call last): Traceback (most recent call last): File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap self.run() File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap self.run() File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap self.run() File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap self.run() File "/usr/lib/python3.10/multiprocessing/pool.py", line 114, in worker task = get() File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap self.run() File "/usr/lib/python3.10/multiprocessing/queues.py", line 364, in get with self._rlock: File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File 
"/usr/lib/python3.10/multiprocessing/synchronize.py", line 95, in __enter__ return self._semlock.__enter__() File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.10/multiprocessing/pool.py", line 114, in worker task = get() File "/usr/lib/python3.10/multiprocessing/pool.py", line 114, in worker task = get() File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.10/multiprocessing/queues.py", line 364, in get with self._rlock: File "/usr/lib/python3.10/multiprocessing/pool.py", line 114, in worker task = get() File "/usr/lib/python3.10/multiprocessing/queues.py", line 364, in get with self._rlock: KeyboardInterrupt File "/usr/lib/python3.10/multiprocessing/synchronize.py", line 95, in __enter__ return self._semlock.__enter__() Traceback (most recent call last): Traceback (most recent call last): Traceback (most recent call last): KeyboardInterrupt File "/usr/lib/python3.10/multiprocessing/pool.py", line 114, in worker task = get() File "/usr/lib/python3.10/multiprocessing/queues.py", line 364, in get with self._rlock: File "/usr/lib/python3.10/multiprocessing/queues.py", line 364, in get with self._rlock: File "/usr/lib/python3.10/multiprocessing/synchronize.py", line 95, in __enter__ return self._semlock.__enter__() File "/usr/lib/python3.10/multiprocessing/synchronize.py", line 95, in __enter__ return self._semlock.__enter__() KeyboardInterrupt File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap self.run() File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap self.run() KeyboardInterrupt File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap self.run() File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.10/multiprocessing/pool.py", line 114, in worker task = get() File "/usr/lib/python3.10/multiprocessing/synchronize.py", line 95, in __enter__ return self._semlock.__enter__() File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.10/multiprocessing/pool.py", line 114, in worker task = get() File "/usr/lib/python3.10/multiprocessing/pool.py", line 114, in worker task = get() File "/usr/lib/python3.10/multiprocessing/queues.py", line 364, in get with self._rlock: File "/usr/lib/python3.10/multiprocessing/queues.py", line 365, in get res = self._reader.recv_bytes() File "/usr/lib/python3.10/multiprocessing/queues.py", line 364, in get with self._rlock: File "/usr/lib/python3.10/multiprocessing/synchronize.py", line 95, in __enter__ return self._semlock.__enter__() KeyboardInterrupt File "/usr/lib/python3.10/multiprocessing/synchronize.py", line 95, in __enter__ return self._semlock.__enter__() File "/usr/lib/python3.10/multiprocessing/connection.py", line 221, in recv_bytes buf = self._recv_bytes(maxlength) KeyboardInterrupt KeyboardInterrupt File "/usr/lib/python3.10/multiprocessing/connection.py", line 419, in _recv_bytes buf = self._recv(4) File "/usr/lib/python3.10/multiprocessing/connection.py", line 384, in _recv chunk = read(handle, remaining) 
KeyboardInterrupt Traceback (most recent call last): File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap self.run() File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.10/multiprocessing/pool.py", line 114, in worker task = get() File "/usr/lib/python3.10/multiprocessing/queues.py", line 364, in get with self._rlock: File "/usr/lib/python3.10/multiprocessing/synchronize.py", line 95, in __enter__ return self._semlock.__enter__() KeyboardInterrupt Process ForkPoolWorker-20: Process ForkPoolWorker-44: Process ForkPoolWorker-22: Traceback (most recent call last): File "/usr/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create_connection sock.connect(sa) ConnectionRefusedError: [Errno 111] Connection refused During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap self.run() File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.10/multiprocessing/pool.py", line 125, in worker result = (True, func(*args, **kwds)) File "/usr/lib/python3.10/multiprocessing/pool.py", line 48, in mapstar return list(map(*args)) File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 215, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 215, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 197, in _single_map_nested return function(data_struct) File "/usr/lib/python3.10/site-packages/datasets/utils/download_manager.py", line 217, in _download return cached_path(url_or_filename, download_config=download_config) File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 298, in cached_path output_path = get_from_cache( File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 561, in get_from_cache response = http_head( File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 476, in http_head response = _request_with_retry( File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 405, in _request_with_retry response = requests.request(method=method.upper(), url=url, timeout=timeout, **params) File "/usr/lib/python3.10/site-packages/requests/api.py", line 59, in request return session.request(method=method, url=url, **kwargs) File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 587, in request resp = self.send(prep, **send_kwargs) File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 701, in send r = adapter.send(request, **kwargs) File "/usr/lib/python3.10/site-packages/requests/adapters.py", line 489, in send resp = conn.urlopen( File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen httplib_response = self._make_request( File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 386, in _make_request self._validate_conn(conn) File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1042, in _validate_conn conn.connect() File "/usr/lib/python3.10/site-packages/urllib3/connection.py", line 358, in connect self.sock = conn = self._new_conn() File 
"/usr/lib/python3.10/site-packages/urllib3/connection.py", line 174, in _new_conn conn = connection.create_connection( File "/usr/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create_connection sock.connect(sa) KeyboardInterrupt #1: 0%| | 0/2 [03:00<?, ?obj/s] Traceback (most recent call last): Traceback (most recent call last): File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap self.run() File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.10/multiprocessing/pool.py", line 125, in worker result = (True, func(*args, **kwds)) File "/usr/lib/python3.10/multiprocessing/pool.py", line 48, in mapstar return list(map(*args)) File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 215, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 215, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 197, in _single_map_nested return function(data_struct) File "/usr/lib/python3.10/site-packages/datasets/utils/download_manager.py", line 217, in _download return cached_path(url_or_filename, download_config=download_config) File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 298, in cached_path output_path = get_from_cache( File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 659, in get_from_cache http_get( File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 442, in http_get response = _request_with_retry( File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 405, in _request_with_retry response = requests.request(method=method.upper(), url=url, timeout=timeout, **params) File "/usr/lib/python3.10/site-packages/requests/api.py", line 59, in request return session.request(method=method, url=url, **kwargs) File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 587, in request resp = self.send(prep, **send_kwargs) File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 701, in send r = adapter.send(request, **kwargs) File "/usr/lib/python3.10/site-packages/requests/adapters.py", line 489, in send resp = conn.urlopen( File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen httplib_response = self._make_request( File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 386, in _make_request self._validate_conn(conn) File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap self.run() File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1042, in _validate_conn conn.connect() File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.10/site-packages/urllib3/connection.py", line 358, in connect self.sock = conn = self._new_conn() File "/usr/lib/python3.10/multiprocessing/pool.py", line 125, in worker result = (True, func(*args, **kwds)) File "/usr/lib/python3.10/site-packages/urllib3/connection.py", line 174, in _new_conn conn = connection.create_connection( File "/usr/lib/python3.10/multiprocessing/pool.py", line 48, in mapstar return list(map(*args)) File "/usr/lib/python3.10/site-packages/urllib3/util/connection.py", line 72, in create_connection 
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 215, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/usr/lib/python3.10/socket.py", line 955, in getaddrinfo for res in _socket.getaddrinfo(host, port, family, type, proto, flags): File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 215, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 197, in _single_map_nested return function(data_struct) File "/usr/lib/python3.10/site-packages/datasets/utils/download_manager.py", line 217, in _download return cached_path(url_or_filename, download_config=download_config) KeyboardInterrupt File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 298, in cached_path output_path = get_from_cache( File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 561, in get_from_cache response = http_head( File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 476, in http_head response = _request_with_retry( File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 405, in _request_with_retry response = requests.request(method=method.upper(), url=url, timeout=timeout, **params) File "/usr/lib/python3.10/site-packages/requests/api.py", line 59, in request return session.request(method=method, url=url, **kwargs) File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 587, in request resp = self.send(prep, **send_kwargs) File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 701, in send r = adapter.send(request, **kwargs) File "/usr/lib/python3.10/site-packages/requests/adapters.py", line 489, in send resp = conn.urlopen( File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen httplib_response = self._make_request( File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 386, in _make_request self._validate_conn(conn) File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1042, in _validate_conn conn.connect() File "/usr/lib/python3.10/site-packages/urllib3/connection.py", line 358, in connect self.sock = conn = self._new_conn() File "/usr/lib/python3.10/site-packages/urllib3/connection.py", line 174, in _new_conn conn = connection.create_connection( File "/usr/lib/python3.10/site-packages/urllib3/util/connection.py", line 72, in create_connection for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): File "/usr/lib/python3.10/socket.py", line 955, in getaddrinfo for res in _socket.getaddrinfo(host, port, family, type, proto, flags): KeyboardInterrupt #3: 0%| | 0/2 [03:00<?, ?obj/s] #11: 0%| | 0/1 [00:49<?, ?obj/s] Traceback (most recent call last): File "/usr/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create_connection sock.connect(sa) ConnectionRefusedError: [Errno 111] Connection refused During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap self.run() File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.10/multiprocessing/pool.py", line 125, in worker result = (True, func(*args, **kwds)) File 
"/usr/lib/python3.10/multiprocessing/pool.py", line 48, in mapstar return list(map(*args)) File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 215, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 215, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 197, in _single_map_nested return function(data_struct) File "/usr/lib/python3.10/site-packages/datasets/utils/download_manager.py", line 217, in _download return cached_path(url_or_filename, download_config=download_config) File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 298, in cached_path output_path = get_from_cache( File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 561, in get_from_cache response = http_head( File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 476, in http_head response = _request_with_retry( File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 405, in _request_with_retry response = requests.request(method=method.upper(), url=url, timeout=timeout, **params) File "/usr/lib/python3.10/site-packages/requests/api.py", line 59, in request return session.request(method=method, url=url, **kwargs) File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 587, in request resp = self.send(prep, **send_kwargs) File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 723, in send history = [resp for resp in gen] File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 723, in <listcomp> history = [resp for resp in gen] File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 266, in resolve_redirects resp = self.send( File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 701, in send r = adapter.send(request, **kwargs) File "/usr/lib/python3.10/site-packages/requests/adapters.py", line 489, in send resp = conn.urlopen( File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen httplib_response = self._make_request( File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 386, in _make_request self._validate_conn(conn) File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1042, in _validate_conn conn.connect() File "/usr/lib/python3.10/site-packages/urllib3/connection.py", line 358, in connect self.sock = conn = self._new_conn() File "/usr/lib/python3.10/site-packages/urllib3/connection.py", line 174, in _new_conn conn = connection.create_connection( File "/usr/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create_connection sock.connect(sa) KeyboardInterrupt #5: 0%| | 0/1 [03:00<?, ?obj/s] KeyboardInterrupt Process ForkPoolWorker-42: Traceback (most recent call last): File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap self.run() File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.10/multiprocessing/pool.py", line 125, in worker result = (True, func(*args, **kwds)) File "/usr/lib/python3.10/multiprocessing/pool.py", line 48, in mapstar return list(map(*args)) File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 215, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True)) for 
v in pbar] File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 215, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 197, in _single_map_nested return function(data_struct) File "/usr/lib/python3.10/site-packages/datasets/utils/download_manager.py", line 217, in _download return cached_path(url_or_filename, download_config=download_config) File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 298, in cached_path output_path = get_from_cache( File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 561, in get_from_cache response = http_head( File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 476, in http_head response = _request_with_retry( File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 405, in _request_with_retry response = requests.request(method=method.upper(), url=url, timeout=timeout, **params) File "/usr/lib/python3.10/site-packages/requests/api.py", line 59, in request return session.request(method=method, url=url, **kwargs) File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 587, in request resp = self.send(prep, **send_kwargs) File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 701, in send r = adapter.send(request, **kwargs) File "/usr/lib/python3.10/site-packages/requests/adapters.py", line 489, in send resp = conn.urlopen( File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen httplib_response = self._make_request( File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 386, in _make_request self._validate_conn(conn) File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1042, in _validate_conn conn.connect() File "/usr/lib/python3.10/site-packages/urllib3/connection.py", line 358, in connect self.sock = conn = self._new_conn() File "/usr/lib/python3.10/site-packages/urllib3/connection.py", line 174, in _new_conn conn = connection.create_connection( File "/usr/lib/python3.10/site-packages/urllib3/util/connection.py", line 72, in create_connection for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): File "/usr/lib/python3.10/socket.py", line 955, in getaddrinfo for res in _socket.getaddrinfo(host, port, family, type, proto, flags): KeyboardInterrupt #9: 0%| | 0/1 [00:51<?, ?obj/s] ``` ### Steps to reproduce the bug ```python """Kodak. Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. """ import datasets NUMBER = 17 _DESCRIPTION = """\ The pictures below link to lossless, true color (24 bits per pixel, aka "full color") images. It is my understanding they have been released by the Eastman Kodak Company for unrestricted usage. Many sites use them as a standard test suite for compression testing, etc. Prior to this site, they were only available in the Sun Raster format via ftp. 
This meant that the images could not be previewed before downloading. Since their release, however, the lossless PNG format has been incorporated into all the major browsers. Since PNG supports 24-bit lossless color (which GIF and JPEG do not), it is now possible to offer this browser-friendly access to the images. """ _HOMEPAGE = "https://r0k.us/graphics/kodak/" _LICENSE = "GPLv3" _URLS = [ f"https://github.com/MohamedBakrAli/Kodak-Lossless-True-Color-Image-Suite/raw/master/PhotoCD_PCD0992/{i}.png" for i in range(1, 1 + NUMBER) ] class Kodak(datasets.GeneratorBasedBuilder): """Kodak datasets.""" VERSION = datasets.Version("0.0.1") def _info(self): features = datasets.Features( { "image": datasets.Image(), } ) return datasets.DatasetInfo( description=_DESCRIPTION, features=features, homepage=_HOMEPAGE, license=_LICENSE, ) def _split_generators(self, dl_manager): """Return SplitGenerators.""" file_paths = dl_manager.download_and_extract(_URLS) return [ datasets.SplitGenerator( name=datasets.Split.TEST, gen_kwargs={ "file_paths": file_paths, }, ), ] def _generate_examples(self, file_paths): """Yield examples.""" for file_path in file_paths: yield file_path, {"image": file_path} ``` ### Expected behavior When `len(_URLS) < 16`, it works. ```python In [3]: dataset = load_dataset('Freed-Wu/kodak', split='test') Downloading: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2.53k/2.53k [00:00<00:00, 3.02MB/s] [11/19/22 22:04:28] WARNING Using custom data configuration default builder.py:379 Downloading and preparing dataset kodak/default to /home/wzy/.cache/huggingface/datasets/Freed-Wu___kodak/default/0.0.1/d26017602a592b5bfa7e008127cdf9dec5af220c9068005f1b4eda036031f475... 
Downloading: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 593k/593k [00:00<00:00, 2.88MB/s] Downloading: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 621k/621k [00:03<00:00, 166kB/s] Downloading: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 531k/531k [00:01<00:00, 366kB/s] 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 16/16 [00:13<00:00, 1.18it/s] 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 16/16 [00:00<00:00, 3832.38it/s] Dataset kodak downloaded and prepared to /home/wzy/.cache/huggingface/datasets/Freed-Wu___kodak/default/0.0.1/d26017602a592b5bfa7e008127cdf9dec5af220c9068005f1b4eda036031f475. Subsequent calls will reuse this data. ``` ### Environment info - `datasets` version: 2.7.0 - Platform: Linux-6.0.8-arch1-1-x86_64-with-glibc2.36 - Python version: 3.10.8 - PyArrow version: 9.0.0 - Pandas version: 1.4.4
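A minimal sketch of the thread-based download approach the maintainer suggests in the comments above, using plain `requests`; the URLs and paths are placeholders, and the actual fix would live in `datasets`' download manager rather than in user code. Threads release the GIL while blocked on network I/O, so this keeps the download parallelism without the fork-related hangs:

```python
from concurrent.futures import ThreadPoolExecutor

import requests

def download(url: str, path: str) -> str:
    # Stream the response to disk in 1 MiB chunks.
    with requests.get(url, stream=True, timeout=100) as r:
        r.raise_for_status()
        with open(path, "wb") as f:
            for chunk in r.iter_content(chunk_size=1 << 20):
                f.write(chunk)
    return path

urls = [f"https://example.com/{i}.png" for i in range(1, 18)]  # placeholder URLs
paths = [f"/tmp/{i}.png" for i in range(1, 18)]                # placeholder paths

# 16+ URLs no longer hang, since no worker processes are forked.
with ThreadPoolExecutor(max_workers=16) as pool:
    downloaded = list(pool.map(download, urls, paths))
```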
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5270/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5270/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5269
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5269/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5269/comments
https://api.github.com/repos/huggingface/datasets/issues/5269/events
https://github.com/huggingface/datasets/issues/5269
1,456,485,799
I_kwDODunzps5W0DWn
5,269
Shell completions
{ "login": "Freed-Wu", "id": 32936898, "node_id": "MDQ6VXNlcjMyOTM2ODk4", "avatar_url": "https://avatars.githubusercontent.com/u/32936898?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Freed-Wu", "html_url": "https://github.com/Freed-Wu", "followers_url": "https://api.github.com/users/Freed-Wu/followers", "following_url": "https://api.github.com/users/Freed-Wu/following{/other_user}", "gists_url": "https://api.github.com/users/Freed-Wu/gists{/gist_id}", "starred_url": "https://api.github.com/users/Freed-Wu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Freed-Wu/subscriptions", "organizations_url": "https://api.github.com/users/Freed-Wu/orgs", "repos_url": "https://api.github.com/users/Freed-Wu/repos", "events_url": "https://api.github.com/users/Freed-Wu/events{/privacy}", "received_events_url": "https://api.github.com/users/Freed-Wu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "I don't think we need completion on the datasets-cli, since we're mainly developing huggingface-cli", "I see." ]
2022-11-19T13:48:59
2022-11-21T15:06:15
2022-11-21T15:06:14
NONE
null
null
null
### Feature request Like <https://github.com/huggingface/huggingface_hub/issues/1197>, datasets-cli may need it, too. ### Motivation See above. ### Your contribution Maybe.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5269/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5269/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5268
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5268/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5268/comments
https://api.github.com/repos/huggingface/datasets/issues/5268/events
https://github.com/huggingface/datasets/pull/5268
1,455,633,978
PR_kwDODunzps5DPIsp
5,268
Sharded save_to_disk + multiprocessing
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Added both num_shards and max_shard_size in push_to_hub/save_to_disk. Will take care of updating the tests later", "It's ready for a final review @mariosasko and @albertvillanova, let me know what you think :)", "Took your comments into account, and also changed `iflatmap_unordered` to take an iterable of kwargs to make the code more redable :)" ]
2022-11-18T18:50:01
2022-12-14T18:25:52
2022-12-14T18:22:58
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5268", "html_url": "https://github.com/huggingface/datasets/pull/5268", "diff_url": "https://github.com/huggingface/datasets/pull/5268.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5268.patch", "merged_at": "2022-12-14T18:22:58" }
Added `num_shards=` and `num_proc=` to `save_to_disk()` EDIT: also added `max_shard_size=` to `save_to_disk()`, and also `num_shards=` to `push_to_hub` I also: - deprecated the fs parameter in favor of storage_options (for consistency with the rest of the lib) in save_to_disk and load_from_disk - always embedded the image/audio data in arrow when doing `save_to_disk` - added a tqdm bar in `save_to_disk` - used the MockFileSystem in tests for `save_to_disk` and `load_from_disk` - removed the unused integration tests with S3, since we can now test with `mockfs` instead of `s3fs` TODO: - [x] implement save_to_disk for dataset dict - [x] save_to_disk for dataset dict tests - [x] deprecate fs in dataset dict load_from_disk as well - [x] update docs Close #5263 Close https://github.com/huggingface/datasets/issues/4196 Close https://github.com/huggingface/datasets/issues/4351
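A minimal usage sketch of the arguments described above, assuming a `datasets` release that includes this PR; the dataset name, paths, and shard counts are illustrative:

```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")

# Write the dataset as 8 Arrow shards, preparing them with 4 worker processes
ds.save_to_disk("path/to/imdb_sharded", num_shards=8, num_proc=4)

# Or let the shard count follow a size budget instead
ds.save_to_disk("path/to/imdb_500mb", max_shard_size="500MB")

# num_shards is also accepted when uploading
ds.push_to_hub("username/imdb-copy", num_shards=8)
```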
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5268/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5268/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5267
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5267/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5267/comments
https://api.github.com/repos/huggingface/datasets/issues/5267/events
https://github.com/huggingface/datasets/pull/5267
1,455,466,464
PR_kwDODunzps5DOlFR
5,267
Fix `max_shard_size` docs
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-11-18T16:55:22
2022-11-18T17:28:58
2022-11-18T17:25:27
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5267", "html_url": "https://github.com/huggingface/datasets/pull/5267", "diff_url": "https://github.com/huggingface/datasets/pull/5267.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5267.patch", "merged_at": "2022-11-18T17:25:26" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5267/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5267/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5266
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5266/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5266/comments
https://api.github.com/repos/huggingface/datasets/issues/5266/events
https://github.com/huggingface/datasets/pull/5266
1,455,281,310
PR_kwDODunzps5DN9BT
5,266
Specify arguments as keywords in librosa.resample to avoid future errors
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-11-18T14:58:47
2022-11-21T15:45:02
2022-11-21T15:41:57
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5266", "html_url": "https://github.com/huggingface/datasets/pull/5266", "diff_url": "https://github.com/huggingface/datasets/pull/5266.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5266.patch", "merged_at": "2022-11-21T15:41:57" }
Fixes a warning and future deprecation from `librosa.resample`: ``` FutureWarning: Pass orig_sr=16000, target_sr=48000 as keyword args. From version 0.10 passing these as positional arguments will result in an error array = librosa.resample(array, sampling_rate, self.sampling_rate, res_type="kaiser_best") ```
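A self-contained sketch of the fix the warning asks for; the array and sample rates below are dummies, and the commented-out call mirrors the line quoted in the warning:

```python
import numpy as np
import librosa

array = np.zeros(16000, dtype=np.float32)  # one second of silent 16 kHz audio
orig_rate, target_rate = 16000, 48000

# Positional sample rates trigger the FutureWarning (an error from librosa 0.10 on):
#   array = librosa.resample(array, orig_rate, target_rate, res_type="kaiser_best")

# Passing them as keywords keeps the call valid going forward:
array = librosa.resample(array, orig_sr=orig_rate, target_sr=target_rate, res_type="kaiser_best")
```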
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5266/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5266/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5265
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5265/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5265/comments
https://api.github.com/repos/huggingface/datasets/issues/5265/events
https://github.com/huggingface/datasets/issues/5265
1,455,274,864
I_kwDODunzps5Wvbtw
5,265
Get an IterableDataset from a map-style Dataset
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 3287858981, "node_id": "MDU6TGFiZWwzMjg3ODU4OTgx", "url": "https://api.github.com/repos/huggingface/datasets/labels/streaming", "name": "streaming", "color": "fef2c0", "default": false, "description": "" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "I think `stream` could be misleading since the data is not being streamed from remote endpoints (one could think that's the case when they see `load_dataset` followed by `stream`). Hence, I prefer the second option.\r\n\r\nPS: When we resolve https://github.com/huggingface/datasets/issues/4542, we could add `as_tf_dataset` to the API for consistency and deprecate `to_tf_dataset`." ]
2022-11-18T14:54:40
2023-02-01T16:36:03
2023-02-01T16:36:03
MEMBER
null
null
null
This is useful to leverage iterable-dataset-specific features like: - fast approximate shuffling - lazy map, filter etc. Iterating over the resulting iterable dataset should be at least as fast as iterating over the map-style dataset. Here are some ideas regarding the API: ```python # 1. # - consistency with load_dataset(..., streaming=True) # - gives intuition that map/filter/etc. are done on-the-fly ids = ds.stream() # 2. # - more explicit on the output type # - but maybe sounds like a conversion tool rather than a step in a processing pipeline ids = ds.as_iterable_dataset() ```
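As a sketch of what either API would enable in practice (the method name used below, `to_iterable_dataset`, is the one later shipped in `datasets` 2.8, so treat it as illustrative for earlier versions):

```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")  # map-style Dataset

ids = ds.to_iterable_dataset()  # IterableDataset over the same Arrow data

# map/filter now run lazily, and shuffling is fast and approximate
ids = ids.map(lambda x: {"text": x["text"].lower()})
ids = ids.shuffle(seed=42, buffer_size=1000)

first = next(iter(ids))
```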
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5265/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5265/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5264
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5264/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5264/comments
https://api.github.com/repos/huggingface/datasets/issues/5264/events
https://github.com/huggingface/datasets/issues/5264
1,455,252,906
I_kwDODunzps5WvWWq
5,264
`datasets` can't read a Parquet file in Python 3.9.13
{ "login": "loubnabnl", "id": 44069155, "node_id": "MDQ6VXNlcjQ0MDY5MTU1", "avatar_url": "https://avatars.githubusercontent.com/u/44069155?v=4", "gravatar_id": "", "url": "https://api.github.com/users/loubnabnl", "html_url": "https://github.com/loubnabnl", "followers_url": "https://api.github.com/users/loubnabnl/followers", "following_url": "https://api.github.com/users/loubnabnl/following{/other_user}", "gists_url": "https://api.github.com/users/loubnabnl/gists{/gist_id}", "starred_url": "https://api.github.com/users/loubnabnl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/loubnabnl/subscriptions", "organizations_url": "https://api.github.com/users/loubnabnl/orgs", "repos_url": "https://api.github.com/users/loubnabnl/repos", "events_url": "https://api.github.com/users/loubnabnl/events{/privacy}", "received_events_url": "https://api.github.com/users/loubnabnl/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Could you share the full stack trace please ?\r\n\r\n\r\nCan you also try running this code ? It can be useful to determine if the issue comes from `datasets` or `fsspec` (streaming) or `pyarrow` (parquet reading):\r\n```python\r\nds = load_dataset(\"parquet\", data_files=a_parquet_file_url, use_auth_token=True)\r\n```", "Here's the full trace\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/loubna_huggingface_co/load.py\", line 15, in <module>\r\n ds_all = load_dataset(\"bigcode/the-stack-dedup-pjj\", data_dir=\"data/java\",use_auth_token=True, split=\"train\", revision=\"v1.1.a1\")\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/load.py\", line 1742, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/builder.py\", line 814, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/builder.py\", line 905, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/builder.py\", line 1502, in _prepare_split\r\n for key, table in logging.tqdm(\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/tqdm/std.py\", line 1195, in __iter__\r\n for obj in iterable:\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py\", line 67, in _generate_tables\r\n parquet_file = pq.ParquetFile(f)\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/pyarrow/parquet/__init__.py\", line 286, in __init__\r\n self.reader.open(\r\n File \"pyarrow/_parquet.pyx\", line 1227, in pyarrow._parquet.ParquetReader.open\r\n File \"pyarrow/error.pxi\", line 100, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Parquet magic bytes not found in footer. 
Either the file is corrupted or this is not a parquet file.\r\n```\r\n\r\nwhen running\r\n```python\r\nds = load_dataset(\"parquet\", data_files=\"https://huggingface.co/datasets/bigcode/the-stack-dedup-pjj/blob/v1.1.a1/data/java/data_0000.parquet\", use_auth_token=True)\r\n```\r\nI get 401 error, but that's the case for the python subset too which I can load properly\r\n```\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/load.py\", line 1719, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/load.py\", line 1497, in load_dataset_builder\r\n dataset_module = dataset_module_factory(\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/load.py\", line 1134, in dataset_module_factory\r\n return PackagedDatasetModuleFactory(\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/load.py\", line 707, in get_module\r\n data_files = DataFilesDict.from_local_or_remote(\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/data_files.py\", line 795, in from_local_or_remote\r\n DataFilesList.from_local_or_remote(\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/data_files.py\", line 764, in from_local_or_remote\r\n origin_metadata = _get_origin_metadata_locally_or_by_urls(data_files, use_auth_token=use_auth_token)\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/data_files.py\", line 710, in _get_origin_metadata_locally_or_by_urls\r\n return thread_map(\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/tqdm/contrib/concurrent.py\", line 94, in thread_map\r\n return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/tqdm/contrib/concurrent.py\", line 76, in _executor_map\r\n return list(tqdm_class(ex.map(fn, *iterables, **map_args), **kwargs))\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/tqdm/std.py\", line 1183, in __iter__\r\n for obj in iterable:\r\n File \"/opt/conda/envs/venv/lib/python3.9/concurrent/futures/_base.py\", line 609, in result_iterator\r\n yield fs.pop().result()\r\n File \"/opt/conda/envs/venv/lib/python3.9/concurrent/futures/_base.py\", line 446, in result\r\n return self.__get_result()\r\n File \"/opt/conda/envs/venv/lib/python3.9/concurrent/futures/_base.py\", line 391, in __get_result\r\n raise self._exception\r\n File \"/opt/conda/envs/venv/lib/python3.9/concurrent/futures/thread.py\", line 58, in run\r\n result = self.fn(*self.args, **self.kwargs)\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/data_files.py\", line 701, in _get_single_origin_metadata_locally_or_by_urls\r\n return (request_etag(data_file, use_auth_token=use_auth_token),)\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/utils/file_utils.py\", line 411, in request_etag\r\n response.raise_for_status()\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/requests/models.py\", line 960, in raise_for_status\r\n raise HTTPError(http_error_msg, response=self)\r\nrequests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/datasets/bigcode/the-stack-dedup-pjj/blob/v1.1.a1/data/python/data_0000.parquet```", "Can you check you used the right token ? You shouldn't get a 401 using your token", "I checked it’s the right token, when loading the full dataset I get the error after data extraction so I can access the files. 
\r\n```\r\nDownloading and preparing dataset parquet/bigcode--the-stack-dedup-pjj to /home/loubna_huggingface_co/.cache/huggingface/datasets/bigcode___parquet/bigcode--the-stack-dedup-pjj-872ffac7f4bb46ca/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec...\r\nDownloading data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:00<00:00, 22.38it/s]\r\nExtracting data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:00<00:00, 49.91it/s]\r\nTraceback (most recent call last):\r\n File \"/home/loubna_huggingface_co/load_ds.py\", line 5, in <module>\r\n ds = load_dataset(\"bigcode/the-stack-dedup-pjj\", data_dir=\"data/java\", use_auth_token=True,split=\"train\", revision=\"v1.1.a1\")\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/load.py\", line 1742, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/builder.py\", line 814, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/builder.py\", line 905, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/builder.py\", line 1502, in _prepare_split\r\n for key, table in logging.tqdm(\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/tqdm/std.py\", line 1195, in __iter__\r\n for obj in iterable:\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py\", line 67, in _generate_tables\r\n parquet_file = pq.ParquetFile(f)\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/pyarrow/parquet/__init__.py\", line 286, in __init__\r\n self.reader.open(\r\n File \"pyarrow/_parquet.pyx\", line 1227, in pyarrow._parquet.ParquetReader.open\r\n File \"pyarrow/error.pxi\", line 100, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.\r\n```\r\nCould it be that I'm using a wrong url, I just copied it from the address bar", "The URL is wrong indeed, the right one is the one with \"resolve\" (the one you get when clicking on \"download\")- otherwise you try to download an html page ;)\r\n```\r\nhttps://huggingface.co/datasets/bigcode/the-stack-dedup-pjj/resolve/v1.1.a1/data/java/data_0000.parquet\r\n```", "Ah thanks! 
So I tried it with the first parquet file and it works, is there a way to know which parquet file was causing the issue since there are a lot of shards?", "I think you have to try them all :/\r\n\r\nAlternatively you can add a try/catch in `parquet.py` in `datasets` to raise the name of the file that fails at doing `parquet_file = pq.ParquetFile(f)` when you run your initial code\r\n```python\r\nload_dataset(\"bigcode/the-stack-dedup-pjj\", data_dir=\"data/java\", split=\"train\", revision=\"v1.1.a1\", use_auth_token=True)\r\n```\r\nbut it will still iterate on all the files until it fails", "Ok I will do that", "I did find the file, and I get the same error as before \r\n```\r\nDownloading data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:00<00:00, 8160.12it/s]\r\nExtracting data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:00<00:00, 1447.81it/s]\r\n \r\n---------------------------------------------------------------------------\r\nArrowInvalid Traceback (most recent call last)\r\nInput In [22], in <cell line: 7>()\r\n 4 data_features = (data[\"train\"].features)\r\n 6 url = \"/home/loubna_huggingface_co/.cache/huggingface/datasets/downloads/93431bc4380de07de8b0ab533666cb5a6120cbe266779e0a63c86bf7717475d7\"\r\n----> 7 data = load_dataset(\"parquet\", \r\n 8 data_files=url,\r\n 9 split=\"train\",\r\n 10 features=data_features,\r\n 11 use_auth_token=True)\r\n\r\nFile /opt/conda/envs/venv/lib/python3.9/site-packages/datasets/load.py:1742, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)\r\n 1739 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES\r\n 1741 # Download and prepare data\r\n-> 1742 builder_instance.download_and_prepare(\r\n 1743 download_config=download_config,\r\n 1744 download_mode=download_mode,\r\n 1745 ignore_verifications=ignore_verifications,\r\n 1746 try_from_hf_gcs=try_from_hf_gcs,\r\n 1747 use_auth_token=use_auth_token,\r\n 1748 )\r\n 1750 # Build dataset for splits\r\n 1751 keep_in_memory = (\r\n 1752 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)\r\n 1753 )\r\n\r\nFile /opt/conda/envs/venv/lib/python3.9/site-packages/datasets/builder.py:814, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, storage_options, **download_and_prepare_kwargs)\r\n 808 if not downloaded_from_gcs:\r\n 809 prepare_split_kwargs = {\r\n 810 \"file_format\": file_format,\r\n 811 \"max_shard_size\": max_shard_size,\r\n 812 **download_and_prepare_kwargs,\r\n 813 }\r\n--> 814 self._download_and_prepare(\r\n 815 dl_manager=dl_manager,\r\n 816 verify_infos=verify_infos,\r\n 817 **prepare_split_kwargs,\r\n 818 **download_and_prepare_kwargs,\r\n 819 )\r\n 820 # Sync info\r\n 821 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())\r\n\r\nFile /opt/conda/envs/venv/lib/python3.9/site-packages/datasets/builder.py:905, in DatasetBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 901 split_dict.add(split_generator.split_info)\r\n 903 try:\r\n 904 # Prepare split will record examples associated to the split\r\n--> 905 self._prepare_split(split_generator, **prepare_split_kwargs)\r\n 906 except 
OSError as e:\r\n 907 raise OSError(\r\n 908 \"Cannot find data file. \"\r\n 909 + (self.manual_download_instructions or \"\")\r\n 910 + \"\\nOriginal error:\\n\"\r\n 911 + str(e)\r\n 912 ) from None\r\n\r\nFile /opt/conda/envs/venv/lib/python3.9/site-packages/datasets/builder.py:1502, in ArrowBasedBuilder._prepare_split(self, split_generator, file_format, max_shard_size)\r\n 1500 total_num_examples, total_num_bytes = 0, 0\r\n 1501 try:\r\n-> 1502 for key, table in logging.tqdm(\r\n 1503 generator,\r\n 1504 unit=\" tables\",\r\n 1505 leave=False,\r\n 1506 disable=not logging.is_progress_bar_enabled(),\r\n 1507 ):\r\n 1508 if max_shard_size is not None and writer._num_bytes > max_shard_size:\r\n 1509 num_examples, num_bytes = writer.finalize()\r\n\r\nFile /opt/conda/envs/venv/lib/python3.9/site-packages/tqdm/std.py:1195, in tqdm.__iter__(self)\r\n 1192 time = self._time\r\n 1194 try:\r\n-> 1195 for obj in iterable:\r\n 1196 yield obj\r\n 1197 # Update and possibly print the progressbar.\r\n 1198 # Note: does not call self.update(1) for speed optimisation.\r\n\r\nFile /opt/conda/envs/venv/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py:67, in Parquet._generate_tables(self, files)\r\n 65 for file_idx, file in enumerate(itertools.chain.from_iterable(files)):\r\n 66 with open(file, \"rb\") as f:\r\n---> 67 parquet_file = pq.ParquetFile(f)\r\n 68 try:\r\n 69 for batch_idx, record_batch in enumerate(\r\n 70 parquet_file.iter_batches(batch_size=self.config.batch_size, columns=self.config.columns)\r\n 71 ):\r\n\r\nFile /opt/conda/envs/venv/lib/python3.9/site-packages/pyarrow/parquet/__init__.py:286, in ParquetFile.__init__(self, source, metadata, common_metadata, read_dictionary, memory_map, buffer_size, pre_buffer, coerce_int96_timestamp_unit, decryption_properties, thrift_string_size_limit, thrift_container_size_limit)\r\n 280 def __init__(self, source, *, metadata=None, common_metadata=None,\r\n 281 read_dictionary=None, memory_map=False, buffer_size=0,\r\n 282 pre_buffer=False, coerce_int96_timestamp_unit=None,\r\n 283 decryption_properties=None, thrift_string_size_limit=None,\r\n 284 thrift_container_size_limit=None):\r\n 285 self.reader = ParquetReader()\r\n--> 286 self.reader.open(\r\n 287 source, use_memory_map=memory_map,\r\n 288 buffer_size=buffer_size, pre_buffer=pre_buffer,\r\n 289 read_dictionary=read_dictionary, metadata=metadata,\r\n 290 coerce_int96_timestamp_unit=coerce_int96_timestamp_unit,\r\n 291 decryption_properties=decryption_properties,\r\n 292 thrift_string_size_limit=thrift_string_size_limit,\r\n 293 thrift_container_size_limit=thrift_container_size_limit,\r\n 294 )\r\n 295 self.common_metadata = common_metadata\r\n 296 self._nested_paths_by_prefix = self._build_nested_paths()\r\n\r\nFile /opt/conda/envs/venv/lib/python3.9/site-packages/pyarrow/_parquet.pyx:1227, in pyarrow._parquet.ParquetReader.open()\r\n\r\nFile /opt/conda/envs/venv/lib/python3.9/site-packages/pyarrow/error.pxi:100, in pyarrow.lib.check_status()\r\n\r\nArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.\r\n```", "Can you check the JSON file associated to `/home/loubna_huggingface_co/.cache/huggingface/datasets/downloads/93431bc4380de07de8b0ab533666cb5a6120cbe266779e0a63c86bf7717475d7` ? 
In the JSON file we can know from where it was downloaded\r\n\r\nYou can find it at `/home/loubna_huggingface_co/.cache/huggingface/datasets/downloads/93431bc4380de07de8b0ab533666cb5a6120cbe266779e0a63c86bf7717475d7.json`", "It's this file `https://huggingface.co/datasets/bigcode/the-stack-dedup-pjj/resolve/f48656daa9f3a3607dacf8b57a65810a6a7a7f73/data/java/data_0022.parquet` loading it gives the same error", "I'm able to load it properly using\r\n```python\r\nds = load_dataset(\"parquet\", data_files=a_parquet_file_url, use_auth_token=token)\r\n```\r\n\r\nMy guess is that your download was corrupted. Please delete `93431bc4380de07de8b0ab533666cb5a6120cbe266779e0a63c86bf7717475d7` and `93431bc4380de07de8b0ab533666cb5a6120cbe266779e0a63c86bf7717475d7.json` locally and try again", "That worked, thanks! But I thought if something went wrong with a download `datasets` creates new cache for all the files, that's not the case? (at some point I even changed dataset versions so it was still using that cache?)", "Cool !\r\n\r\n> But I thought if something went wrong with a download datasets creates new cache for all the files\r\n\r\nWe don't perform integrity verifications if we don't know in advance the hash of the file to download.\r\n\r\n> at some point I even changed dataset versions so it was still using that cache?\r\n\r\n`datasets` caches the files by URL and ETag. If the content of a file changes, then the ETag changes and so it redownloads the file", "I see, thank you!\r\n", "I experience the same error in v 2.12.0. But found out it was due to one column from polars was a categorical dtype (related to the error from #5706. Temporarily resolved it by casting the column to str instead." ]
2022-11-18T14:44:01
2023-05-07T09:52:59
2022-11-22T11:18:08
NONE
null
null
null
### Describe the bug I have an error when trying to load this [dataset](https://huggingface.co/datasets/bigcode/the-stack-dedup-pjj) (it's private but I can add you to the bigcode org). `datasets` can't read one of the parquet files in the Java subset: ```python from datasets import load_dataset ds = load_dataset("bigcode/the-stack-dedup-pjj", data_dir="data/java", split="train", revision="v1.1.a1", use_auth_token=True) ``` ``` File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file. ``` It seems to be an issue with newer Python versions, because it works in these two environments: ``` - `datasets` version: 2.6.1 - Platform: Linux-5.4.0-131-generic-x86_64-with-glibc2.31 - Python version: 3.9.7 - PyArrow version: 9.0.0 - Pandas version: 1.3.4 ``` ``` - `datasets` version: 2.6.1 - Platform: Linux-4.19.0-22-cloud-amd64-x86_64-with-debian-10.13 - Python version: 3.7.12 - PyArrow version: 9.0.0 - Pandas version: 1.3.4 ``` But not in this one: ``` - `datasets` version: 2.6.1 - Platform: Linux-4.19.0-22-cloud-amd64-x86_64-with-glibc2.28 - Python version: 3.9.13 - PyArrow version: 9.0.0 - Pandas version: 1.3.4 ``` ### Steps to reproduce the bug Load the dataset in Python 3.9.13. ### Expected behavior Load the dataset without the pyarrow error. ### Environment info ``` - `datasets` version: 2.6.1 - Platform: Linux-4.19.0-22-cloud-amd64-x86_64-with-glibc2.28 - Python version: 3.9.13 - PyArrow version: 9.0.0 - Pandas version: 1.3.4 ```
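For reference, the root cause identified in this thread's comments was a corrupted cached download rather than Python 3.9.13 itself. A sketch of how to map cached blobs back to their origin URLs via the `.json` sidecar files `datasets` writes next to each download, assuming the default cache location:

```python
import json
from pathlib import Path

downloads = Path.home() / ".cache" / "huggingface" / "datasets" / "downloads"

# Each cached file <hash> has a <hash>.json sidecar recording its source URL
for sidecar in downloads.glob("*.json"):
    meta = json.loads(sidecar.read_text())
    print(sidecar.stem, "->", meta.get("url"))
```

Deleting the offending blob and its sidecar, then re-running `load_dataset()`, forces a fresh download of just that file.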
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5264/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5264/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5263
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5263/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5263/comments
https://api.github.com/repos/huggingface/datasets/issues/5263/events
https://github.com/huggingface/datasets/issues/5263
1,455,252,626
I_kwDODunzps5WvWSS
5,263
Save a dataset in a determined number of shards
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[]
2022-11-18T14:43:54
2022-12-14T18:22:59
2022-12-14T18:22:59
MEMBER
null
null
null
This is useful for distributing the shards across training nodes. It can be implemented in `save_to_disk` and can also leverage multiprocessing to speed up the process.
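A sketch of the consumption side this would enable, using the existing `Dataset.shard()`; the world size and rank are placeholders for values a training launcher would provide:

```python
from datasets import load_from_disk

world_size, rank = 8, 0  # e.g. from torchrun / the training launcher

ds = load_from_disk("path/to/dataset")  # ideally saved with num_shards=world_size
node_ds = ds.shard(num_shards=world_size, index=rank, contiguous=True)
```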
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5263/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5263/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5262
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5262/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5262/comments
https://api.github.com/repos/huggingface/datasets/issues/5262/events
https://github.com/huggingface/datasets/issues/5262
1,455,171,100
I_kwDODunzps5WvCYc
5,262
AttributeError: 'Value' object has no attribute 'names'
{ "login": "emnaboughariou", "id": 102913847, "node_id": "U_kgDOBiJXNw", "avatar_url": "https://avatars.githubusercontent.com/u/102913847?v=4", "gravatar_id": "", "url": "https://api.github.com/users/emnaboughariou", "html_url": "https://github.com/emnaboughariou", "followers_url": "https://api.github.com/users/emnaboughariou/followers", "following_url": "https://api.github.com/users/emnaboughariou/following{/other_user}", "gists_url": "https://api.github.com/users/emnaboughariou/gists{/gist_id}", "starred_url": "https://api.github.com/users/emnaboughariou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/emnaboughariou/subscriptions", "organizations_url": "https://api.github.com/users/emnaboughariou/orgs", "repos_url": "https://api.github.com/users/emnaboughariou/repos", "events_url": "https://api.github.com/users/emnaboughariou/events{/privacy}", "received_events_url": "https://api.github.com/users/emnaboughariou/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! It looks like your \"isDif\" column is a Sequence of Value(\"string\"), not a Sequence of ClassLabel.\r\n\r\nYou can convert your Value(\"string\") feature type to a ClassLabel feature type this way:\r\n```python\r\nfrom datasets import ClassLabel, Sequence\r\n\r\n# provide the label_names yourself\r\nlabel_names = [...]\r\n# OR get them from the dataset\r\nlabel_names = sorted(set(label for labels in raw_datasets[\"train\"][\"isDif\"] for label in labels))\r\n\r\n# Cast to ClassLabel\r\nraw_datasets = raw_datasets.cast_column(\"isDif\", Sequence(ClassLabel(names=label_names)))\r\n```\r\n", "thank you \r\nit works πŸ’― " ]
2022-11-18T13:58:42
2022-11-22T10:09:24
2022-11-22T10:09:23
NONE
null
null
null
Hello, I'm trying to build a model for custom token classification. I followed the token classification course on Hugging Face and, while adapting the code to my work, this message occurs: 'Value' object has no attribute 'names'. Here's my code: `raw_datasets` generates DatasetDict({ train: Dataset({ features: ['isDisf', 'pos', 'tokens', 'id'], num_rows: 14 }) }) `raw_datasets["train"][3]["isDisf"]` generates ['B_RM', 'I_RM', 'I_RM', 'B_RP', 'I_RP', 'O', 'O'] `dis_feature = raw_datasets["train"].features["isDisf"] dis_feature` generates Sequence(feature=Value(dtype='string', id=None), length=-1, id=None) and `label_names = dis_feature.feature.names label_names` generates AttributeError Traceback (most recent call last) [<ipython-input-28-972fd54a869a>](https://localhost:8080/#) in <module> ----> 1 label_names = dis_feature.feature.names 2 label_names AttributeError: 'Value' object has no attribute 'names' Thank you for your help
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5262/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5262/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5261
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5261/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5261/comments
https://api.github.com/repos/huggingface/datasets/issues/5261/events
https://github.com/huggingface/datasets/issues/5261
1,454,647,861
I_kwDODunzps5WtCo1
5,261
Add PubTables-1M
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
open
false
null
[]
null
[ "cc @albertvillanova the author would like to add this dataset to the hub: https://github.com/microsoft/table-transformer/issues/68#issuecomment-1319114621. Could you help him out?" ]
2022-11-18T07:56:36
2022-11-18T08:02:18
null
CONTRIBUTOR
null
null
null
### Name PubTables-1M ### Paper https://openaccess.thecvf.com/content/CVPR2022/html/Smock_PubTables-1M_Towards_Comprehensive_Table_Extraction_From_Unstructured_Documents_CVPR_2022_paper.html ### Data https://github.com/microsoft/table-transformer ### Motivation Table Transformer is now available in 🤗 Transformers, and it was trained on PubTables-1M. It's a large dataset for table extraction and structure recognition in unstructured documents.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5261/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5261/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5260
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5260/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5260/comments
https://api.github.com/repos/huggingface/datasets/issues/5260/events
https://github.com/huggingface/datasets/issues/5260
1,453,921,697
I_kwDODunzps5WqRWh
5,260
consumer-finance-complaints dataset not loading
{ "login": "adiprasad", "id": 8098496, "node_id": "MDQ6VXNlcjgwOTg0OTY=", "avatar_url": "https://avatars.githubusercontent.com/u/8098496?v=4", "gravatar_id": "", "url": "https://api.github.com/users/adiprasad", "html_url": "https://github.com/adiprasad", "followers_url": "https://api.github.com/users/adiprasad/followers", "following_url": "https://api.github.com/users/adiprasad/following{/other_user}", "gists_url": "https://api.github.com/users/adiprasad/gists{/gist_id}", "starred_url": "https://api.github.com/users/adiprasad/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adiprasad/subscriptions", "organizations_url": "https://api.github.com/users/adiprasad/orgs", "repos_url": "https://api.github.com/users/adiprasad/repos", "events_url": "https://api.github.com/users/adiprasad/events{/privacy}", "received_events_url": "https://api.github.com/users/adiprasad/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting, @adiprasad.\r\n\r\nWe are having a look at it.", "I have opened an issue in that dataset Community tab on the Hub: https://huggingface.co/datasets/consumer-finance-complaints/discussions/1\r\n\r\nPlease note that in the meantime, you can load the dataset by passing `ignore_verifications=True`:\r\n```python\r\n>>> ds = load_dataset(\"consumer-finance-complaints\", ignore_verifications=True)\r\n>>> ds\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['Date Received', 'Product', 'Sub Product', 'Issue', 'Sub Issue', 'Complaint Text', 'Company Public Response', 'Company', 'State', 'Zip Code', 'Tags', 'Consumer Consent Provided', 'Submitted via', 'Date Sent To Company', 'Company Response To Consumer', 'Timely Response', 'Consumer Disputed', 'Complaint ID'],\r\n num_rows: 3079747\r\n })\r\n})\r\n```", "PR fixing this issue: https://huggingface.co/datasets/consumer-finance-complaints/discussions/2" ]
2022-11-17T20:10:26
2022-11-18T10:16:53
null
NONE
null
null
null
### Describe the bug Error during dataset loading ### Steps to reproduce the bug ``` >>> import datasets >>> cf_raw = datasets.load_dataset("consumer-finance-complaints") Downloading builder script: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 8.42k/8.42k [00:00<00:00, 3.33MB/s] Downloading metadata: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 5.60k/5.60k [00:00<00:00, 2.90MB/s] Downloading readme: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 16.6k/16.6k [00:00<00:00, 510kB/s] Downloading and preparing dataset consumer-finance-complaints/default to /root/.cache/huggingface/datasets/consumer-finance-complaints/default/0.0.0/30e483d37fb4b25bb98cad1bfd2dc48f6ed6d1f3371eb4568c625a61d1a79b69... 
Downloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 511M/511M [00:04<00:00, 103MB/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/skunk-pod-storage-lee-2emartie-40ibm-2ecom-pvc/anaconda3/envs/datasets/lib/python3.8/site-packages/datasets/load.py", line 1741, in load_dataset builder_instance.download_and_prepare( File "/skunk-pod-storage-lee-2emartie-40ibm-2ecom-pvc/anaconda3/envs/datasets/lib/python3.8/site-packages/datasets/builder.py", line 822, in download_and_prepare self._download_and_prepare( File "/skunk-pod-storage-lee-2emartie-40ibm-2ecom-pvc/anaconda3/envs/datasets/lib/python3.8/site-packages/datasets/builder.py", line 1555, in _download_and_prepare super()._download_and_prepare( File "/skunk-pod-storage-lee-2emartie-40ibm-2ecom-pvc/anaconda3/envs/datasets/lib/python3.8/site-packages/datasets/builder.py", line 931, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/skunk-pod-storage-lee-2emartie-40ibm-2ecom-pvc/anaconda3/envs/datasets/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 74, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=1605177353, num_examples=2455765, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=2043641693, num_examples=3079747, shard_lengths=[721000, 656000, 788000, 846000, 68747], dataset_name='consumer-finance-complaints')}] ``` ### Expected behavior dataset should load ### Environment info >>> datasets.__version__ '2.7.0' Python 3.8.10 "Ubuntu 20.04.4 LTS"
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5260/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5260/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5259
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5259/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5259/comments
https://api.github.com/repos/huggingface/datasets/issues/5259/events
https://github.com/huggingface/datasets/issues/5259
1,453,555,923
I_kwDODunzps5Wo4DT
5,259
datasets 2.7 introduces sharding error
{ "login": "DCNemesis", "id": 3616964, "node_id": "MDQ6VXNlcjM2MTY5NjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/3616964?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DCNemesis", "html_url": "https://github.com/DCNemesis", "followers_url": "https://api.github.com/users/DCNemesis/followers", "following_url": "https://api.github.com/users/DCNemesis/following{/other_user}", "gists_url": "https://api.github.com/users/DCNemesis/gists{/gist_id}", "starred_url": "https://api.github.com/users/DCNemesis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DCNemesis/subscriptions", "organizations_url": "https://api.github.com/users/DCNemesis/orgs", "repos_url": "https://api.github.com/users/DCNemesis/repos", "events_url": "https://api.github.com/users/DCNemesis/events{/privacy}", "received_events_url": "https://api.github.com/users/DCNemesis/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I notice a comment in the code says:\r\n`Having lists of different sizes makes sharding ambigious, raise an error in this case until we decide how to define sharding without ambiguity for users` \r\n \r\n ... which suggests this update was pushed knowing that it might break some things. But, it didn't seem to have a useful error message of an argument that could be passed to avoid the error.", "Sorry for the inconvenience, I opened a PR in your repo to fix this: https://huggingface.co/datasets/sil-ai/bloom-speech/discussions/2\r\n\r\nBasically we've always considered lists in `gen_kwargs` to be a shard list that we can split and pass into different workers to generate the dataset (e.g. if you pass `num_proc=` in `load_dataset()` to generate the dataset in parallel), but it was documented only recently", "@lhoestq Thanks for the help. It looks like that took care of it." ]
2022-11-17T15:36:52
2022-12-24T01:44:02
2022-11-18T12:52:05
NONE
null
null
null
### Describe the bug The dataset fails to load with runtime error `RuntimeError: Sharding is ambiguous for this dataset: we found several data sources lists of different lengths, and we don't know over which list we should parallelize: - key audio_files has length 46 - key data has length 0 To fix this, check the 'gen_kwargs' and make sure to use lists only for data sources, and use tuples otherwise. In the end there should only be one single list, or several lists with the same length.` ### Steps to reproduce the bug With datasets[audio] 2.7 loaded, and logged into Hugging Face, `data = datasets.load_dataset('sil-ai/bloom-speech', 'bis', use_auth_token=True)` raises the error. Full stack trace: ```--------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) [<ipython-input-7-8cb9ca0f79f0>](https://localhost:8080/#) in <module> ----> 1 data = datasets.load_dataset('sil-ai/bloom-speech', 'bis', use_auth_token=True) 5 frames [/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, **config_kwargs) 1745 try_from_hf_gcs=try_from_hf_gcs, 1746 use_auth_token=use_auth_token, -> 1747 num_proc=num_proc, 1748 ) 1749 [/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs) 824 verify_infos=verify_infos, 825 **prepare_split_kwargs, --> 826 **download_and_prepare_kwargs, 827 ) 828 # Sync info [/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verify_infos, **prepare_splits_kwargs) 1554 def _download_and_prepare(self, dl_manager, verify_infos, **prepare_splits_kwargs): 1555 super()._download_and_prepare( -> 1556 dl_manager, verify_infos, check_duplicate_keys=verify_infos, **prepare_splits_kwargs 1557 ) 1558 [/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 911 try: 912 # Prepare split will record examples associated to the split --> 913 self._prepare_split(split_generator, **prepare_split_kwargs) 914 except OSError as e: 915 raise OSError( [/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split(self, split_generator, check_duplicate_keys, file_format, num_proc, max_shard_size) 1362 fpath = path_join(self._output_dir, fname) 1363 -> 1364 num_input_shards = _number_of_shards_in_gen_kwargs(split_generator.gen_kwargs) 1365 if num_input_shards <= 1 and num_proc is not None: 1366 logger.warning( [/usr/local/lib/python3.7/dist-packages/datasets/utils/sharding.py](https://localhost:8080/#) in _number_of_shards_in_gen_kwargs(gen_kwargs) 16 + "\n".join(f"\t- key {key} has length {length}" for key, length in lists_lengths.items()) 17 + "\nTo fix this, check the 'gen_kwargs' and make sure to use lists only for data sources, " ---> 18 + "and use tuples otherwise. In the end there should only be one single list, or several lists with the same length." 19 ) 20 ) RuntimeError: Sharding is ambiguous for this dataset: we found several data sources lists of different lengths, and we don't know over which list we should parallelize: - key audio_files has length 46 - key data has length 0 To fix this, check the 'gen_kwargs' and make sure to use lists only for data sources, and use tuples otherwise. In the end there should only be one single list, or several lists with the same length.``` ### Expected behavior The dataset loads with datasets version 2.6.1 and should also load with datasets 2.7 ### Environment info - `datasets` version: 2.7.0 - Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.15 - PyArrow version: 6.0.1 - Pandas version: 1.3.5
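For readers hitting the same error: below is a minimal sketch of the rule the message enforces, assuming (as the traceback shows) that `datasets` >= 2.7 treats every list value in `gen_kwargs` as a shardable data source. The helper only mimics the private `datasets.utils.sharding._number_of_shards_in_gen_kwargs`; it is illustrative, not the library's actual code.

```python
def number_of_shards(gen_kwargs: dict) -> int:
    # Every list value counts as a data source; all list lengths must agree.
    lists_lengths = {key: len(value) for key, value in gen_kwargs.items() if isinstance(value, list)}
    if len(set(lists_lengths.values())) > 1:
        raise RuntimeError(f"Sharding is ambiguous for this dataset: {lists_lengths}")
    return next(iter(lists_lengths.values()), 1)

# The failing case from the traceback: a 46-item list plus an empty list.
# number_of_shards({"audio_files": ["clip.mp3"] * 46, "data": []})  # raises RuntimeError

# The usual fix in a loading script: pass non-source values as tuples, not lists.
print(number_of_shards({"audio_files": ["clip.mp3"] * 46, "data": ()}))  # 46
```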
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5259/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5259/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5258
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5258/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5258/comments
https://api.github.com/repos/huggingface/datasets/issues/5258/events
https://github.com/huggingface/datasets/issues/5258
1,453,516,636
I_kwDODunzps5Woudc
5,258
Restore order of split names in dataset_info for canonical datasets
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 4564477500, "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution", "name": "dataset contribution", "color": "0e8a16", "default": false, "description": "Contribution to a dataset script" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "The bulk edit is running...\r\n\r\nSee for example: \r\n- A single config: https://huggingface.co/datasets/acronym_identification/discussions/2\r\n- Multiple configs: https://huggingface.co/datasets/babi_qa/discussions/1", "TODO: Add \"dataset_info\" YAML metadata to:\r\n- [x] \"chr_en\" has no metadata JSON file, nor \"dataset_info\" YAML tag in its card\r\n - Fixing PR: https://huggingface.co/datasets/chr_en/discussions/1 \r\n- [x] \"conll2000\" has no metadata JSON file, but it has \"dataset_info\" YAML tag in its card\r\n- [x] \"crime_and_punish\" has no metadata JSON file, but it has \"dataset_info\" YAML tag in its card\r\n- [x] \"dart\" has no metadata JSON file, but it has \"dataset_info\" YAML tag in its card\r\n- [x] \"iwslt2017\" has no metadata JSON file, but it has \"dataset_info\" YAML tag in its card\r\n- [ ] \"mc4\" has no metadata JSON file, nor \"dataset_info\" YAML tag in its card\r\n- [ ] \"the_pile\" has no metadata JSON file, nor \"dataset_info\" YAML tag in its card\r\n- [ ] \"timit_asr\" has no metadata JSON file, nor \"dataset_info\" YAML tag in its card", "The bulk edit is finished." ]
2022-11-17T15:13:15
2023-02-16T09:49:05
2022-11-19T06:51:37
MEMBER
null
null
null
After a bulk edit of canonical datasets to create the YAML `dataset_info` metadata, the split names were accidentally sorted alphabetically. See for example: - https://huggingface.co/datasets/bc2gm_corpus/commit/2384629484401ecf4bb77cd808816719c424e57c Note that this order is the one appearing in the preview of the datasets. I'm making a bulk edit to align the order of the splits appearing in the metadata info with the order appearing in the loading script. Related to: - #5202
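A hedged sketch of what the bulk edit does, assuming the `dataset_info` YAML stores splits as a list of mappings with a `name` key; `SCRIPT_ORDER` and `reorder_splits` are hypothetical helpers, not part of the datasets API.

```python
SCRIPT_ORDER = ["train", "validation", "test"]  # the order defined in the loading script

def reorder_splits(splits: list) -> list:
    # Restore script order; unknown split names sort last (sorted() is stable,
    # so they keep their relative order).
    rank = {name: i for i, name in enumerate(SCRIPT_ORDER)}
    return sorted(splits, key=lambda s: rank.get(s["name"], len(rank)))

print(reorder_splits([{"name": "test"}, {"name": "train"}, {"name": "validation"}]))
# [{'name': 'train'}, {'name': 'validation'}, {'name': 'test'}]
```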
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5258/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5258/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5257
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5257/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5257/comments
https://api.github.com/repos/huggingface/datasets/issues/5257/events
https://github.com/huggingface/datasets/pull/5257
1,452,656,891
PR_kwDODunzps5DFENm
5,257
remove an unused statement
{ "login": "WrRan", "id": 7569098, "node_id": "MDQ6VXNlcjc1NjkwOTg=", "avatar_url": "https://avatars.githubusercontent.com/u/7569098?v=4", "gravatar_id": "", "url": "https://api.github.com/users/WrRan", "html_url": "https://github.com/WrRan", "followers_url": "https://api.github.com/users/WrRan/followers", "following_url": "https://api.github.com/users/WrRan/following{/other_user}", "gists_url": "https://api.github.com/users/WrRan/gists{/gist_id}", "starred_url": "https://api.github.com/users/WrRan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/WrRan/subscriptions", "organizations_url": "https://api.github.com/users/WrRan/orgs", "repos_url": "https://api.github.com/users/WrRan/repos", "events_url": "https://api.github.com/users/WrRan/events{/privacy}", "received_events_url": "https://api.github.com/users/WrRan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
2022-11-17T04:00:50
2022-11-18T11:04:08
2022-11-18T11:04:08
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5257", "html_url": "https://github.com/huggingface/datasets/pull/5257", "diff_url": "https://github.com/huggingface/datasets/pull/5257.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5257.patch", "merged_at": "2022-11-18T11:04:08" }
Remove the unused statement `input_pairs = list(zip())`.
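For context (an observation about the snippet, not part of the PR): `zip()` with no arguments yields nothing, so the statement only ever bound an empty list to a name that was never read.

```python
assert list(zip()) == []  # zip() with no iterables is an empty iterator
```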
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5257/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5257/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5256
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5256/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5256/comments
https://api.github.com/repos/huggingface/datasets/issues/5256/events
https://github.com/huggingface/datasets/pull/5256
1,452,652,586
PR_kwDODunzps5DFDY0
5,256
fix wrong print
{ "login": "WrRan", "id": 7569098, "node_id": "MDQ6VXNlcjc1NjkwOTg=", "avatar_url": "https://avatars.githubusercontent.com/u/7569098?v=4", "gravatar_id": "", "url": "https://api.github.com/users/WrRan", "html_url": "https://github.com/WrRan", "followers_url": "https://api.github.com/users/WrRan/followers", "following_url": "https://api.github.com/users/WrRan/following{/other_user}", "gists_url": "https://api.github.com/users/WrRan/gists{/gist_id}", "starred_url": "https://api.github.com/users/WrRan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/WrRan/subscriptions", "organizations_url": "https://api.github.com/users/WrRan/orgs", "repos_url": "https://api.github.com/users/WrRan/repos", "events_url": "https://api.github.com/users/WrRan/events{/privacy}", "received_events_url": "https://api.github.com/users/WrRan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
2022-11-17T03:54:26
2022-11-18T11:05:32
2022-11-18T11:05:32
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5256", "html_url": "https://github.com/huggingface/datasets/pull/5256", "diff_url": "https://github.com/huggingface/datasets/pull/5256.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5256.patch", "merged_at": "2022-11-18T11:05:32" }
Print `encoded_dataset.column_names`, not `dataset.column_names`.
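A minimal runnable sketch of the corrected example, assuming a typical `Dataset.map` call that adds a column; the toy data here is illustrative.

```python
from datasets import Dataset

dataset = Dataset.from_dict({"text": ["hello", "world"]})
encoded_dataset = dataset.map(lambda example: {"length": len(example["text"])})

print(dataset.column_names)          # ['text'] -- the original dataset is unchanged
print(encoded_dataset.column_names)  # ['text', 'length'] -- the columns the docs meant to show
```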
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5256/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5256/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5255
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5255/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5255/comments
https://api.github.com/repos/huggingface/datasets/issues/5255/events
https://github.com/huggingface/datasets/issues/5255
1,452,631,517
I_kwDODunzps5WlWXd
5,255
Add a Depth Estimation dataset - DIODE / NYUDepth / KITTI
{ "login": "sayakpaul", "id": 22957388, "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sayakpaul", "html_url": "https://github.com/sayakpaul", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "repos_url": "https://api.github.com/users/sayakpaul/repos", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
{ "login": "sayakpaul", "id": 22957388, "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sayakpaul", "html_url": "https://github.com/sayakpaul", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "repos_url": "https://api.github.com/users/sayakpaul/repos", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "type": "User", "site_admin": false }
[ { "login": "sayakpaul", "id": 22957388, "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sayakpaul", "html_url": "https://github.com/sayakpaul", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "repos_url": "https://api.github.com/users/sayakpaul/repos", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "type": "User", "site_admin": false } ]
null
[ "Also cc @mariosasko and @lhoestq ", "Cool ! Let us know if you have questions or if we can help :)\r\n\r\nI guess we'll also have to create the NYU CS Department on the Hub ?", "> I guess we'll also have to create the NYU CS Department on the Hub ?\r\n\r\nYes, you're right! Let me add it to my profile first, and then we can transfer. Meanwhile, if it's recommended to loop the dataset author in here, let me know. \r\n\r\nAlso, the NYU Depth dataset seems big. Any example scripts for creating image datasets that I could refer? ", "You can check the imagenet-1k one.\r\n\r\nPS: If the licenses allows it, it'b be nice to host the dataset as sharded TAR archives (like imagenet-1k) instead of the ZIP format they use:\r\n- it will make streaming much faster\r\n- ZIP compression is not well suited for images\r\n- it will allow parallel processing of the dataset (you can pass a subset of shards to each worker)\r\n\r\n> if it's recommended to loop the dataset author in here, let me know.\r\n\r\nIt's recommended indeed, you can send them an email once you have the dataset ready and invite them to the org on the Hub", "> You can check the imagenet-1k one.\r\n\r\nWhere can I find the script? Are you referring to https://huggingface.co/docs/datasets/image_process ? Or is there anything more specific? ", "You can find it here: https://huggingface.co/datasets/imagenet-1k/blob/main/imagenet-1k.py", "Update: started working on it here: https://huggingface.co/datasets/sayakpaul/nyu_depth_v2. \r\n\r\nI am facing an issue and I have detailed it here: https://huggingface.co/datasets/sayakpaul/nyu_depth_v2/discussions/1\r\n\r\nEdit: The issue is gone. \r\n\r\nHowever, since the dataset is distributed as a single TAR archive (following the [URL used in TensorFlow Datasets](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/datasets/nyu_depth_v2/nyu_depth_v2_dataset_builder.py)) the loading is taking longer. How would suggest to shard the single TAR archive? \r\n\r\n@lhoestq \r\n\r\n", "A Colab Notebook demonstrating the dataset loading part: \r\n\r\nhttps://colab.research.google.com/gist/sayakpaul/aa0958c8d4ad8518d52a78f28044d871/scratchpad.ipynb\r\n\r\n@osanseviero @lhoestq \r\n\r\nI will work on a notebook to work with the dataset including data visualization.", "@osanseviero @lhoestq things seem to work fine with the current version of the dataset [here](https://huggingface.co/datasets/sayakpaul/nyu_depth_v2). Here's a notebook I developed to help with visualization: https://colab.research.google.com/drive/1K3ZU8XUPRDOYD38MQS9nreQXJYitlKSW?usp=sharing. \r\n\r\n@lhoestq I need your help with the following:\r\n\r\n> However, since the dataset is distributed as a single TAR archive (following the [URL used in TensorFlow Datasets](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/datasets/nyu_depth_v2/nyu_depth_v2_dataset_builder.py)) the loading is taking longer. How would suggest to shard the single TAR archive?\r\n\r\n@osanseviero @lhoestq question for you:\r\n\r\nWhere should we host the dataset? I think hosting it under hf.co/datasets (that is HF is the org) is fine as we have ImageNet-1k hosted similarly. We could then reach out to Diana Wofk (author of [Fast Depth](https://github.com/dwofk/fast-depth) and the owner of the repo on which TFDS NYU Depth V2 is based) for a review. WDYT? 
", "> However, since the dataset is distributed as a single TAR archive (following the [URL used in TensorFlow Datasets](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/datasets/nyu_depth_v2/nyu_depth_v2_dataset_builder.py)) the loading is taking longer. How would suggest to shard the single TAR archive?\r\n\r\nFirst you can separate the train data and the validation data.\r\n\r\nThen since the dataset is quite big, you can even shard the train split and the validation split in multiple TAR archives. Something around 16 archives for train and 4 for validation would be fine for example.\r\n\r\nAlso no need to gzip the TAR archives, the images are already compressed in png or jpeg.", "> Then since the dataset is quite big, you can even shard the train split and the validation split in multiple TAR archives. Something around 16 archives for train and 4 for validation would be fine for example.\r\n\r\nYes, I got you. But this process seems to be manual and should be tailored for the given dataset. Do you have any script that you used to create the ImageNet-1k shards? \r\n\r\n> Also no need to gzip the TAR archives, the images are already compressed in png or jpeg.\r\n\r\nI was not going to do that. Not sure what brought it up. ", "> Yes, I got you. But this process seems to be manual and should be tailored for the given dataset. Do you have any script that you used to create the ImageNet-1k shards?\r\n\r\nI don't, but I agree it'd be nice to have a script for that !\r\n\r\n> I was not going to do that. Not sure what brought it up.\r\n\r\nThe original dataset is gzipped for some reason", "Oh, I am using this URL for the download: https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/datasets/nyu_depth_v2/nyu_depth_v2_dataset_builder.py#L24. ", "> Where should we host the dataset? I think hosting it under hf.co/datasets (that is HF is the org) is fine as we have ImageNet-1k hosted similarly.\r\n\r\nMaybe you can create an org for NYU Courant (this is the institute of the lab of the main author of the dataset if I'm not mistaken), and invite the authors to join.\r\n\r\nWe don't add datasets without namespace anymore", "Updates: https://huggingface.co/datasets/sayakpaul/nyu_depth_v2/discussions/5\r\n\r\nThe entire process (preparing multiple archives, preparing data loading script, etc.) was fun and engaging, thanks to the documentation. I believe we could work on a small blog post that would work as a reference for the future contributors following this path. What say? \r\n\r\nCc: @lhoestq @osanseviero ", "> I believe we could work on a small blog post that would work as a reference for the future contributors following this path. What say?\r\n\r\n@polinaeterna already mentioned it would be nice to present this process for audio (it's exactly the same), I believe it can be useful to many people", "Cool. Let's work on that after the NYU Depth Dataset is fully in on Hub (under the appropriate org). πŸ€—", "@lhoestq need to discuss something while I am adding the dataset card to https://huggingface.co/datasets/sayakpaul/nyu_depth_v2/. \r\n\r\nAs per [Papers With Code](https://paperswithcode.com/dataset/nyuv2), NYU Depth v2 is used for many different tasks:\r\n\r\n* Monocular depth estimation\r\n* Depth estimation \r\n* Semantic segmentation\r\n* Plane instance segmentation \r\n* ...\r\n\r\nSo, while writing the supported task part of the dataset card, should we focus on all these? 
IMO, we could focus on just depth estimation and semantic segmentation for now since we have supported models for these two. WDYT?\r\n\r\nAlso, I am getting: \r\n\r\n\r\n```\r\nremote: Your push was accepted, but with warnings:\r\nremote: - Warning: The task_ids \"depth-estimation\" is not in the official list: acceptability-classification, entity-linking-classification, fact-checking, intent-classification, multi-class-classification, multi-label-classification, multi-input-text-classification, natural-language-inference, semantic-similarity-classification, sentiment-classification, topic-classification, semantic-similarity-scoring, sentiment-scoring, sentiment-analysis, hate-speech-detection, text-scoring, named-entity-recognition, part-of-speech, parsing, lemmatization, word-sense-disambiguation, coreference-resolution, extractive-qa, open-domain-qa, closed-domain-qa, news-articles-summarization, news-articles-headline-generation, dialogue-generation, dialogue-modeling, language-modeling, text-simplification, explanation-generation, abstractive-qa, open-domain-abstractive-qa, closed-domain-qa, open-book-qa, closed-book-qa, slot-filling, masked-language-modeling, keyword-spotting, speaker-identification, audio-intent-classification, audio-emotion-recognition, audio-language-identification, multi-label-image-classification, multi-class-image-classification, face-detection, vehicle-detection, instance-segmentation, semantic-segmentation, panoptic-segmentation, image-captioning, grasping, task-planning, tabular-multi-class-classification, tabular-multi-label-classification, tabular-single-column-regression, rdf-to-text, multiple-choice-qa, multiple-choice-coreference-resolution, document-retrieval, utterance-retrieval, entity-linking-retrieval, fact-checking-retrieval, univariate-time-series-forecasting, multivariate-time-series-forecasting, visual-question-answering, document-question-answering\r\nremote: ----------------------------------------------------------\r\nremote: Please find the documentation at:\r\nremote: https://huggingface.co/docs/hub/model-cards#model-card-metadata\r\n```\r\n\r\nWhat should be the plan of action for this?\r\n\r\nCc: @osanseviero \r\n\r\n", "> What should be the plan of action for this?\r\n\r\nWhen you merged https://github.com/huggingface/hub-docs/pull/488, there is a JS Interfaces GitHub Actions workflow that runs https://github.com/huggingface/hub-docs/actions/workflows/js-interfaces-tests.yml. It has a step called [export-task scripts](https://github.com/huggingface/hub-docs/actions/runs/3622479064/jobs/6107238948) which exports an interface you can use in `dataset`. If you look at the logs, it prints out a map. This map can replace https://github.com/huggingface/datasets/blob/main/src/datasets/utils/resources/tasks.json (tasks.json was generated with this script), which should add depth estimation\r\n", "Thanks @osanseviero. \r\n\r\nhttps://github.com/huggingface/datasets/pull/5335", "Closing the issue as the dataset has been successfully added: https://huggingface.co/datasets/sayakpaul/nyu_depth_v2" ]
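A hedged sketch of the archive-sharding step discussed in this thread, using only the standard library; the file names and shard count are illustrative, and this helper is not part of the datasets API.

```python
import tarfile

def shard_tar(src_path: str, prefix: str, num_shards: int) -> None:
    # Open one uncompressed TAR per shard; images are already compressed, so no gzip.
    shards = [
        tarfile.open(f"{prefix}-{i:05d}-of-{num_shards:05d}.tar", "w")
        for i in range(num_shards)
    ]
    try:
        with tarfile.open(src_path) as src:
            members = (m for m in src if m.isfile())
            for i, member in enumerate(members):
                # Round-robin members across shards to keep shard sizes balanced.
                shards[i % num_shards].addfile(member, src.extractfile(member))
    finally:
        for shard in shards:
            shard.close()

# e.g. shard_tar("nyu_depth_v2_train.tar", "train", num_shards=16)
```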
2022-11-17T03:22:22
2022-12-17T12:20:38
2022-12-17T12:20:37
MEMBER
null
null
null
### Name NYUDepth ### Paper http://cs.nyu.edu/~silberman/papers/indoor_seg_support.pdf ### Data https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html ### Motivation Depth estimation is an important problem in computer vision. We have a couple of Depth Estimation models on Hub as well: * [GLPN](https://huggingface.co/docs/transformers/model_doc/glpn) * [DPT](https://huggingface.co/docs/transformers/model_doc/dpt) Would be nice to have a dataset for depth estimation. These datasets usually have three things: input image, depth map image, and depth mask (validity mask to indicate if a reading for a pixel is valid or not). Since we already have [semantic segmentation datasets on the Hub](https://huggingface.co/datasets?task_categories=task_categories:image-segmentation&sort=downloads), I don't think we need any extended utilities to support this addition. Having this dataset would also allow us to author data preprocessing guides for depth estimation, particularly like the ones we have for other tasks ([example](https://huggingface.co/docs/datasets/image_classification)). Ccing @osanseviero @nateraw @NielsRogge Happy to work on adding it.
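A minimal sketch of the feature layout the request describes (input image, depth map, validity mask); the column names are hypothetical, and a real loading script might store depth as float arrays rather than images.

```python
from datasets import Features, Image

features = Features(
    {
        "image": Image(),       # RGB input frame
        "depth_map": Image(),   # per-pixel depth, stored as a single-channel image
        "depth_mask": Image(),  # validity mask: nonzero where the depth reading is valid
    }
)
```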
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5255/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5255/timeline
null
completed
false