Dataset schema (one row per column: feature type, plus min–max lengths/values or number of distinct classes):

| Column | Type | Stats |
|---|---|---|
| url | stringlengths | 61–61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 75–75 |
| comments_url | stringlengths | 70–70 |
| events_url | stringlengths | 68–68 |
| html_url | stringlengths | 49–51 |
| id | int64 | 1.71B–1.82B |
| node_id | stringlengths | 18–19 |
| number | int64 | 5.87k–6.08k |
| title | stringlengths | 1–280 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | |
| milestone | dict | |
| comments | sequence | |
| created_at | timestamp[s] | |
| updated_at | timestamp[s] | |
| closed_at | timestamp[s] | |
| author_association | stringclasses | 3 values |
| active_lock_reason | null | |
| draft | bool | 2 classes |
| pull_request | dict | |
| body | stringlengths | 9–16.9k |
| reactions | dict | |
| timeline_url | stringlengths | 70–70 |
| performed_via_github_app | null | |
| state_reason | stringclasses | 1 value |
| is_pull_request | bool | 2 classes |
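The example records below follow this schema one field per line, in column order. A minimal sketch of loading the dataset and checking its features against the table above; the repository id `user/github-issues` is a placeholder, since this dump does not name the dataset it came from:

```python
from datasets import load_dataset

# Placeholder repo id: substitute the dataset repository this dump came from.
ds = load_dataset("user/github-issues", split="train")

# The feature types should match the schema table above.
print(ds.features)

# Spot-check a few of the stated statistics.
print(min(len(u) for u in ds["url"]))  # expected: 61
print(ds.unique("state"))              # expected: ['open', 'closed'] (2 values)
```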
https://api.github.com/repos/huggingface/datasets/issues/6080
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6080/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6080/comments
https://api.github.com/repos/huggingface/datasets/issues/6080/events
https://github.com/huggingface/datasets/pull/6080
1,822,667,554
PR_kwDODunzps5WdL4K
6,080
Remove README link to deprecated Colab notebook
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-07-26T15:27:49
2023-07-26T15:27:49
null
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6080", "html_url": "https://github.com/huggingface/datasets/pull/6080", "diff_url": "https://github.com/huggingface/datasets/pull/6080.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6080.patch", "merged_at": null }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6080/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6080/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6079
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6079/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6079/comments
https://api.github.com/repos/huggingface/datasets/issues/6079/events
https://github.com/huggingface/datasets/issues/6079
1,822,597,471
I_kwDODunzps5soqFf
6,079
Iterating over DataLoader based on HF datasets is stuck forever
{ "login": "arindamsarkar93", "id": 5454868, "node_id": "MDQ6VXNlcjU0NTQ4Njg=", "avatar_url": "https://avatars.githubusercontent.com/u/5454868?v=4", "gravatar_id": "", "url": "https://api.github.com/users/arindamsarkar93", "html_url": "https://github.com/arindamsarkar93", "followers_url": "https://api.github.com/users/arindamsarkar93/followers", "following_url": "https://api.github.com/users/arindamsarkar93/following{/other_user}", "gists_url": "https://api.github.com/users/arindamsarkar93/gists{/gist_id}", "starred_url": "https://api.github.com/users/arindamsarkar93/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arindamsarkar93/subscriptions", "organizations_url": "https://api.github.com/users/arindamsarkar93/orgs", "repos_url": "https://api.github.com/users/arindamsarkar93/repos", "events_url": "https://api.github.com/users/arindamsarkar93/events{/privacy}", "received_events_url": "https://api.github.com/users/arindamsarkar93/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "When the process starts to hang, can you interrupt it with CTRL + C and paste the error stack trace here? ", "Thanks @mariosasko for your prompt response, here's the stack trace:\r\n\r\n```\r\nKeyboardInterrupt Traceback (most recent call last)\r\nCell In[12], line 4\r\n 2 t = time.time()\r\n 3 iter_ = 0\r\n----> 4 for batch in train_dataloader:\r\n 5 #batch_proc = streaming_obj.collect_streaming_data_batch(batch)\r\n 6 iter_ += 1\r\n 8 if iter_ == 1:\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/utils/data/dataloader.py:634, in _BaseDataLoaderIter.__next__(self)\r\n 631 if self._sampler_iter is None:\r\n 632 # TODO(https://github.com/pytorch/pytorch/issues/76750)\r\n 633 self._reset() # type: ignore[call-arg]\r\n--> 634 data = self._next_data()\r\n 635 self._num_yielded += 1\r\n 636 if self._dataset_kind == _DatasetKind.Iterable and \\\r\n 637 self._IterableDataset_len_called is not None and \\\r\n 638 self._num_yielded > self._IterableDataset_len_called:\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/utils/data/dataloader.py:678, in _SingleProcessDataLoaderIter._next_data(self)\r\n 676 def _next_data(self):\r\n 677 index = self._next_index() # may raise StopIteration\r\n--> 678 data = self._dataset_fetcher.fetch(index) # may raise StopIteration\r\n 679 if self._pin_memory:\r\n 680 data = _utils.pin_memory.pin_memory(data, self._pin_memory_device)\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py:32, in _IterableDatasetFetcher.fetch(self, possibly_batched_index)\r\n 30 for _ in possibly_batched_index:\r\n 31 try:\r\n---> 32 data.append(next(self.dataset_iter))\r\n 33 except StopIteration:\r\n 34 self.ended = True\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/datasets/iterable_dataset.py:1353, in IterableDataset.__iter__(self)\r\n 1350 yield formatter.format_row(pa_table)\r\n 1351 return\r\n-> 1353 for key, example in ex_iterable:\r\n 1354 if self.features:\r\n 1355 # `IterableDataset` automatically fills missing columns with None.\r\n 1356 # This is done with `_apply_feature_types_on_example`.\r\n 1357 example = _apply_feature_types_on_example(\r\n 1358 example, self.features, token_per_repo_id=self._token_per_repo_id\r\n 1359 )\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/datasets/iterable_dataset.py:956, in BufferShuffledExamplesIterable.__iter__(self)\r\n 954 # this is the shuffle buffer that we keep in memory\r\n 955 mem_buffer = []\r\n--> 956 for x in self.ex_iterable:\r\n 957 if len(mem_buffer) == buffer_size: # if the buffer is full, pick and example from it\r\n 958 i = next(indices_iterator)\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/datasets/iterable_dataset.py:296, in ShuffledDataSourcesArrowExamplesIterable.__iter__(self)\r\n 294 for key, pa_table in self.generate_tables_fn(**kwargs_with_shuffled_shards):\r\n 295 for pa_subtable in pa_table.to_reader(max_chunksize=config.ARROW_READER_BATCH_SIZE_IN_DATASET_ITER):\r\n--> 296 formatted_batch = formatter.format_batch(pa_subtable)\r\n 297 for example in _batch_to_examples(formatted_batch):\r\n 298 yield key, example\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/datasets/formatting/formatting.py:448, in PythonFormatter.format_batch(self, pa_table)\r\n 446 if self.lazy:\r\n 447 return LazyBatch(pa_table, self)\r\n--> 448 batch = self.python_arrow_extractor().extract_batch(pa_table)\r\n 449 batch = 
self.python_features_decoder.decode_batch(batch)\r\n 450 return batch\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/datasets/formatting/formatting.py:150, in PythonArrowExtractor.extract_batch(self, pa_table)\r\n 149 def extract_batch(self, pa_table: pa.Table) -> dict:\r\n--> 150 return pa_table.to_pydict()\r\n\r\nKeyboardInterrupt: \r\n```\r\n", "Update: If i let it run, it eventually fails with:\r\n\r\n```\r\nRuntimeError Traceback (most recent call last)\r\nCell In[16], line 4\r\n 2 t = time.time()\r\n 3 iter_ = 0\r\n----> 4 for batch in train_dataloader:\r\n 5 #batch_proc = streaming_obj.collect_streaming_data_batch(batch)\r\n 6 iter_ += 1\r\n 8 if iter_ == 1:\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/utils/data/dataloader.py:634, in _BaseDataLoaderIter.__next__(self)\r\n 631 if self._sampler_iter is None:\r\n 632 # TODO(https://github.com/pytorch/pytorch/issues/76750)\r\n 633 self._reset() # type: ignore[call-arg]\r\n--> 634 data = self._next_data()\r\n 635 self._num_yielded += 1\r\n 636 if self._dataset_kind == _DatasetKind.Iterable and \\\r\n 637 self._IterableDataset_len_called is not None and \\\r\n 638 self._num_yielded > self._IterableDataset_len_called:\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/utils/data/dataloader.py:678, in _SingleProcessDataLoaderIter._next_data(self)\r\n 676 def _next_data(self):\r\n 677 index = self._next_index() # may raise StopIteration\r\n--> 678 data = self._dataset_fetcher.fetch(index) # may raise StopIteration\r\n 679 if self._pin_memory:\r\n 680 data = _utils.pin_memory.pin_memory(data, self._pin_memory_device)\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py:32, in _IterableDatasetFetcher.fetch(self, possibly_batched_index)\r\n 30 for _ in possibly_batched_index:\r\n 31 try:\r\n---> 32 data.append(next(self.dataset_iter))\r\n 33 except StopIteration:\r\n 34 self.ended = True\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/datasets/iterable_dataset.py:1360, in IterableDataset.__iter__(self)\r\n 1354 if self.features:\r\n 1355 # `IterableDataset` automatically fills missing columns with None.\r\n 1356 # This is done with `_apply_feature_types_on_example`.\r\n 1357 example = _apply_feature_types_on_example(\r\n 1358 example, self.features, token_per_repo_id=self._token_per_repo_id\r\n 1359 )\r\n-> 1360 yield format_dict(example) if format_dict else example\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/datasets/formatting/torch_formatter.py:85, in TorchFormatter.recursive_tensorize(self, data_struct)\r\n 84 def recursive_tensorize(self, data_struct: dict):\r\n---> 85 return map_nested(self._recursive_tensorize, data_struct, map_list=False)\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/datasets/utils/py_utils.py:463, in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, parallel_min_length, types, disable_tqdm, desc)\r\n 461 num_proc = 1\r\n 462 if num_proc != -1 and num_proc <= 1 or len(iterable) < parallel_min_length:\r\n--> 463 mapped = [\r\n 464 _single_map_nested((function, obj, types, None, True, None))\r\n 465 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)\r\n 466 ]\r\n 467 else:\r\n 468 mapped = parallel_map(function, iterable, num_proc, types, disable_tqdm, desc, _single_map_nested)\r\n\r\nFile 
~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/datasets/utils/py_utils.py:464, in <listcomp>(.0)\r\n 461 num_proc = 1\r\n 462 if num_proc != -1 and num_proc <= 1 or len(iterable) < parallel_min_length:\r\n 463 mapped = [\r\n--> 464 _single_map_nested((function, obj, types, None, True, None))\r\n 465 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)\r\n 466 ]\r\n 467 else:\r\n 468 mapped = parallel_map(function, iterable, num_proc, types, disable_tqdm, desc, _single_map_nested)\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/datasets/utils/py_utils.py:366, in _single_map_nested(args)\r\n 364 # Singleton first to spare some computation\r\n 365 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):\r\n--> 366 return function(data_struct)\r\n 368 # Reduce logging to keep things readable in multiprocessing with tqdm\r\n 369 if rank is not None and logging.get_verbosity() < logging.WARNING:\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/datasets/formatting/torch_formatter.py:82, in TorchFormatter._recursive_tensorize(self, data_struct)\r\n 80 elif isinstance(data_struct, (list, tuple)):\r\n 81 return self._consolidate([self.recursive_tensorize(substruct) for substruct in data_struct])\r\n---> 82 return self._tensorize(data_struct)\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/datasets/formatting/torch_formatter.py:68, in TorchFormatter._tensorize(self, value)\r\n 66 if isinstance(value, PIL.Image.Image):\r\n 67 value = np.asarray(value)\r\n---> 68 return torch.tensor(value, **{**default_dtype, **self.torch_tensor_kwargs})\r\n\r\nRuntimeError: Could not infer dtype of decimal.Decimal\r\n```" ]
2023-07-26T14:52:37
2023-07-26T15:25:07
null
NONE
null
null
null
### Describe the bug

I am using an Amazon SageMaker notebook (Amazon Linux 2) with a Python 3.10 based Conda environment. I have a dataset in Parquet format locally. When I try to iterate over it, the loader is stuck forever. Note that the same code works seamlessly in a Python 3.6 based Conda environment. What should my next steps be here?

### Steps to reproduce the bug

```python
train_dataset = load_dataset(
    "parquet",
    data_files={"train": tr_data_path + "*.parquet"},
    split="train",
    streaming=True,
).with_format("torch")

train_dataloader = DataLoader(train_dataset, batch_size=512, num_workers=32)

t = time.time()
iter_ = 0
for batch in train_dataloader:
    iter_ += 1
    if iter_ == 1000:
        break
print(time.time() - t)
```

### Expected behavior

The snippet should work normally and load the next batch of data.

### Environment info

- datasets: 2.14.0
- pyarrow: 12.0.0
- torch: 2.0.0
- Python: 3.10.10 | packaged by conda-forge | (main, Mar 24 2023, 20:08:06) [GCC 11.3.0]
- `uname -r`: 5.10.178-162.673.amzn2.x86_64
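The stack traces in the comments on this issue end in `RuntimeError: Could not infer dtype of decimal.Decimal`, which suggests the Parquet files contain decimal columns that the torch formatter cannot tensorize. A minimal sketch of one possible workaround, casting such columns to float before applying the torch format; the column name `"amount"` is hypothetical:

```python
from datasets import Value, load_dataset
from torch.utils.data import DataLoader

tr_data_path = "/path/to/parquet/"  # as in the report

train_dataset = load_dataset(
    "parquet",
    data_files={"train": tr_data_path + "*.parquet"},
    split="train",
    streaming=True,
)

# Cast decimal columns to float32 so torch.tensor() can infer a dtype.
# "amount" is a hypothetical column name; replace with the real decimal columns.
train_dataset = train_dataset.cast_column("amount", Value("float32"))

train_dataloader = DataLoader(train_dataset.with_format("torch"), batch_size=512)
```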
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6079/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6079/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6078
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6078/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6078/comments
https://api.github.com/repos/huggingface/datasets/issues/6078/events
https://github.com/huggingface/datasets/issues/6078
1,822,501,472
I_kwDODunzps5soSpg
6,078
resume_download with streaming=True
{ "login": "NicolasMICAUX", "id": 72763959, "node_id": "MDQ6VXNlcjcyNzYzOTU5", "avatar_url": "https://avatars.githubusercontent.com/u/72763959?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NicolasMICAUX", "html_url": "https://github.com/NicolasMICAUX", "followers_url": "https://api.github.com/users/NicolasMICAUX/followers", "following_url": "https://api.github.com/users/NicolasMICAUX/following{/other_user}", "gists_url": "https://api.github.com/users/NicolasMICAUX/gists{/gist_id}", "starred_url": "https://api.github.com/users/NicolasMICAUX/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NicolasMICAUX/subscriptions", "organizations_url": "https://api.github.com/users/NicolasMICAUX/orgs", "repos_url": "https://api.github.com/users/NicolasMICAUX/repos", "events_url": "https://api.github.com/users/NicolasMICAUX/events{/privacy}", "received_events_url": "https://api.github.com/users/NicolasMICAUX/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-07-26T14:08:22
2023-07-26T14:08:22
null
NONE
null
null
null
### Describe the bug

I used:

```python
dataset = load_dataset(
    "oscar-corpus/OSCAR-2201",
    token=True,
    language="fr",
    streaming=True,
    split="train",
)
```

Unfortunately, the server had a problem during the training process. I saved the step my training stopped at. But how can I resume the download from step 1_000_000 without re-streaming the first 1 million docs of the dataset? `download_config=DownloadConfig(resume_download=True)` seems not to work with `streaming=True`.

### Steps to reproduce the bug

```python
from datasets import load_dataset, DownloadConfig

dataset = load_dataset(
    "oscar-corpus/OSCAR-2201",
    token=True,
    language="fr",
    streaming=True,  # optional
    split="train",
    download_config=DownloadConfig(resume_download=True),
)
# interrupt the run and try to relaunch it => this restarts from scratch
```

### Expected behavior

I would expect a parameter to start streaming from a given index in the dataset.

### Environment info

- `datasets` version: 2.14.0
- Platform: Linux-5.19.0-45-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.1
- Pandas version: 2.0.0
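Streaming has no resume support, but `IterableDataset.skip` can jump past already-consumed examples when relaunching; note that the skipped examples still have to be streamed over and discarded, so this saves processing rather than bandwidth. A sketch under those assumptions:

```python
from datasets import load_dataset

dataset = load_dataset(
    "oscar-corpus/OSCAR-2201",
    token=True,
    language="fr",
    streaming=True,
    split="train",
)

resume_step = 1_000_000  # the step saved before the interruption

# skip() discards the first `resume_step` examples when iteration starts.
for step, example in enumerate(dataset.skip(resume_step), start=resume_step):
    ...  # resume training from here
```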
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6078/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6078/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6077
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6077/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6077/comments
https://api.github.com/repos/huggingface/datasets/issues/6077/events
https://github.com/huggingface/datasets/issues/6077
1,822,486,810
I_kwDODunzps5soPEa
6,077
Mapping gets stuck at 99%
{ "login": "Laurent2916", "id": 21087104, "node_id": "MDQ6VXNlcjIxMDg3MTA0", "avatar_url": "https://avatars.githubusercontent.com/u/21087104?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Laurent2916", "html_url": "https://github.com/Laurent2916", "followers_url": "https://api.github.com/users/Laurent2916/followers", "following_url": "https://api.github.com/users/Laurent2916/following{/other_user}", "gists_url": "https://api.github.com/users/Laurent2916/gists{/gist_id}", "starred_url": "https://api.github.com/users/Laurent2916/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Laurent2916/subscriptions", "organizations_url": "https://api.github.com/users/Laurent2916/orgs", "repos_url": "https://api.github.com/users/Laurent2916/repos", "events_url": "https://api.github.com/users/Laurent2916/events{/privacy}", "received_events_url": "https://api.github.com/users/Laurent2916/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-07-26T14:00:40
2023-07-26T14:00:40
null
CONTRIBUTOR
null
null
null
### Describe the bug

Hi! I'm currently working with a large (~150 GB) unnormalized dataset at work. The dataset is available on a read-only filesystem internally, and I use a [loading script](https://huggingface.co/docs/datasets/dataset_script) to retrieve it.

I want to normalize the features of the dataset, meaning I need to compute the mean and standard deviation of each feature over the entire dataset. I cannot load the entire dataset into RAM as it is too big, so following [this discussion on the Hugging Face discourse](https://discuss.huggingface.co/t/copy-columns-in-a-dataset-and-compute-statistics-for-a-column/22157) I am using a [map operation](https://huggingface.co/docs/datasets/v2.14.0/en/package_reference/main_classes#datasets.Dataset.map) to first compute the metrics, and a second map operation to apply them to the dataset.

The problem lies in the second mapping, which gets stuck at ~99%. By checking what the process does (using `htop` and `strace`), it seems to be doing a lot of I/O operations, and I'm not sure why.

Obviously, I could always normalize the dataset externally and then load it using a loading script. However, since the internal dataset is updated fairly frequently, using the library to perform the normalization automatically would make it much easier for me.

### Steps to reproduce the bug

I'm able to reproduce the problem using the following scripts:

```python
# random_data.py
import datasets
import torch

_VERSION = "1.0.0"


class RandomDataset(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            version=_VERSION,
            supervised_keys=None,
            features=datasets.Features(
                {
                    "positions": datasets.Array2D(shape=(30000, 3), dtype="float32"),
                    "normals": datasets.Array2D(shape=(30000, 3), dtype="float32"),
                    "features": datasets.Array2D(shape=(30000, 6), dtype="float32"),
                    "scalars": datasets.Sequence(feature=datasets.Value("float32"), length=20),
                },
            ),
        )

    def _split_generators(self, dl_manager):
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,  # type: ignore
                gen_kwargs={"nb_samples": 1000},
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,  # type: ignore
                gen_kwargs={"nb_samples": 100},
            ),
        ]

    def _generate_examples(self, nb_samples: int):
        for idx in range(nb_samples):
            yield idx, {
                "positions": torch.rand(30000, 3),
                "normals": torch.rand(30000, 3),
                "features": torch.rand(30000, 6),
                "scalars": torch.rand(20),
            }
```

```python
# main.py
import datasets
import torch


def compute_mean_std(dataset: datasets.Dataset) -> dict[str, torch.Tensor]:
    """Compute the mean and standard deviation of each feature of the dataset.

    Args:
        dataset (`Dataset`): A huggingface dataset.

    Returns:
        dict: A dictionary containing the mean and standard deviation of each feature.
    """
    result = {}
    for key in dataset:
        # extract data from dataset
        data: torch.Tensor = dataset[key]  # type: ignore
        # reshape data, from (a, ..., b, c) -> (*, c)
        data = data.reshape(-1, data.shape[-1])
        # compute mean and std
        mean = data.mean(dim=0)  # (c)
        std = data.std(dim=0)  # (c)
        # store in result
        result[key] = torch.stack((mean, std))
    return result


def apply_mean_std(dataset: datasets.Dataset, mean_std: datasets.Dataset) -> dict[str, torch.Tensor]:
    """Normalize the dataset using the mean and standard deviation of each feature.

    Args:
        dataset (`Dataset`): A huggingface dataset.
        mean_std (`Dataset`): A huggingface dataset containing the mean and standard deviation of each feature.

    Returns:
        dict: A dictionary containing the normalized dataset.
    """
    result = {}
    for key in mean_std.column_names:
        # extract data from dataset
        data: torch.Tensor = dataset[key]  # type: ignore
        # extract mean and std from dict
        mean = mean_std[key][0]  # type: ignore
        std = mean_std[key][1]  # type: ignore
        # normalize data
        normalized_data = (data - mean) / std
        result[key] = normalized_data
    return result


# hack to force the map function to use the entire dataset
MAX_MAP_BATCH_SIZE = 1_000_000_000

# get dataset
ds = datasets.load_dataset(
    path="random_data.py",
    split="train",
).with_format("torch")

# compute mean/std of each feature
mean_std = ds.map(
    desc="Computing mean/std",  # type: ignore
    remove_columns=ds.column_names,  # type: ignore
    function=compute_mean_std,
    batch_size=MAX_MAP_BATCH_SIZE,
    batched=True,
)

# normalize each feature of the dataset
ds_normalized = ds.map(
    desc="Applying mean/std",  # type: ignore
    function=apply_mean_std,
    batched=False,
    fn_kwargs={
        "mean_std": mean_std,
    },
)
```

### Expected behavior

Using the previous scripts, the `ds_normalized` mapping completes in ~5 minutes, but any subsequent use of `ds_normalized` is really slow; for example, reapplying `apply_mean_std` to `ds_normalized` takes forever. This is very strange; I'm sure I must be missing something, but I would still expect this to be faster.

### Environment info

- `datasets` version: 2.13.1
- Platform: Linux-3.10.0-1160.66.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.10.12
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6077/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6077/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6076
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6076/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6076/comments
https://api.github.com/repos/huggingface/datasets/issues/6076/events
https://github.com/huggingface/datasets/pull/6076
1,822,345,597
PR_kwDODunzps5WcGVR
6,076
No gzip encoding from github
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6076). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008191 / 0.011353 (-0.003162) | 0.004669 / 0.011008 (-0.006339) | 0.101315 / 0.038508 (0.062807) | 0.090235 / 0.023109 (0.067126) | 0.381265 / 0.275898 (0.105367) | 0.418266 / 0.323480 (0.094786) | 0.006292 / 0.007986 (-0.001693) | 0.003979 / 0.004328 (-0.000349) | 0.075946 / 0.004250 (0.071696) | 0.070678 / 0.037052 (0.033625) | 0.378006 / 0.258489 (0.119517) | 0.425825 / 0.293841 (0.131984) | 0.036325 / 0.128546 (-0.092221) | 0.009814 / 0.075646 (-0.065832) | 0.345687 / 0.419271 (-0.073584) | 0.063846 / 0.043533 (0.020313) | 0.386003 / 0.255139 (0.130864) | 0.400875 / 0.283200 (0.117675) | 0.027806 / 0.141683 (-0.113877) | 1.814810 / 1.452155 (0.362655) | 1.879897 / 1.492716 (0.387180) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.218684 / 0.018006 (0.200677) | 0.501715 / 0.000490 (0.501225) | 0.004808 / 0.000200 (0.004608) | 0.000093 / 0.000054 (0.000039) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035494 / 0.037411 (-0.001917) | 0.100949 / 0.014526 (0.086423) | 0.114639 / 0.176557 (-0.061917) | 0.188908 / 0.737135 (-0.548227) | 0.115794 / 0.296338 (-0.180545) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / 
old (diff) | 0.462537 / 0.215209 (0.247328) | 4.612469 / 2.077655 (2.534814) | 2.298065 / 1.504120 (0.793945) | 2.088738 / 1.541195 (0.547543) | 2.188072 / 1.468490 (0.719582) | 0.565412 / 4.584777 (-4.019364) | 4.180394 / 3.745712 (0.434681) | 3.848696 / 5.269862 (-1.421165) | 2.391381 / 4.565676 (-2.174296) | 0.067647 / 0.424275 (-0.356628) | 0.008847 / 0.007607 (0.001240) | 0.553288 / 0.226044 (0.327243) | 5.517962 / 2.268929 (3.249033) | 2.866622 / 55.444624 (-52.578002) | 2.439025 / 6.876477 (-4.437452) | 2.740156 / 2.142072 (0.598084) | 0.694796 / 4.805227 (-4.110431) | 0.159022 / 6.500664 (-6.341642) | 0.074471 / 0.075469 (-0.000998) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.534979 / 1.841788 (-0.306808) | 23.297273 / 8.074308 (15.222965) | 16.859178 / 10.191392 (6.667786) | 0.207594 / 0.680424 (-0.472830) | 0.021990 / 0.534201 (-0.512211) | 0.472059 / 0.579283 (-0.107224) | 0.497632 / 0.434364 (0.063268) | 0.565672 / 0.540337 (0.025335) | 0.772485 / 1.386936 (-0.614451) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007777 / 0.011353 (-0.003576) | 0.004679 / 0.011008 (-0.006329) | 0.077317 / 0.038508 (0.038809) | 0.087433 / 0.023109 (0.064324) | 0.437389 / 0.275898 (0.161491) | 0.479562 / 0.323480 (0.156082) | 0.006137 / 0.007986 (-0.001849) | 0.003938 / 0.004328 (-0.000390) | 0.074769 / 0.004250 (0.070518) | 0.066605 / 0.037052 (0.029553) | 0.454865 / 0.258489 (0.196376) | 0.485103 / 0.293841 (0.191262) | 0.036540 / 0.128546 (-0.092006) | 0.009983 / 0.075646 (-0.065664) | 0.083566 / 0.419271 (-0.335706) | 0.059527 / 0.043533 (0.015994) | 0.449154 / 0.255139 (0.194015) | 0.462542 / 0.283200 (0.179342) | 0.027581 / 0.141683 (-0.114102) | 1.776720 / 1.452155 (0.324565) | 1.847920 / 1.492716 (0.355204) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.246792 / 0.018006 (0.228786) | 0.494513 / 0.000490 (0.494024) | 0.004376 / 0.000200 
(0.004176) | 0.000115 / 0.000054 (0.000061) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037837 / 0.037411 (0.000426) | 0.112752 / 0.014526 (0.098226) | 0.121742 / 0.176557 (-0.054815) | 0.189365 / 0.737135 (-0.547770) | 0.124366 / 0.296338 (-0.171973) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.492890 / 0.215209 (0.277681) | 4.920270 / 2.077655 (2.842615) | 2.565350 / 1.504120 (1.061230) | 2.378679 / 1.541195 (0.837484) | 2.483794 / 1.468490 (1.015304) | 0.579623 / 4.584777 (-4.005154) | 4.195924 / 3.745712 (0.450212) | 3.903382 / 5.269862 (-1.366479) | 2.466884 / 4.565676 (-2.098793) | 0.064145 / 0.424275 (-0.360130) | 0.008695 / 0.007607 (0.001088) | 0.579300 / 0.226044 (0.353256) | 5.809064 / 2.268929 (3.540136) | 3.145393 / 55.444624 (-52.299232) | 2.832760 / 6.876477 (-4.043717) | 3.020460 / 2.142072 (0.878388) | 0.700235 / 4.805227 (-4.104992) | 0.161262 / 6.500664 (-6.339402) | 0.076484 / 0.075469 (0.001015) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.606504 / 1.841788 (-0.235284) | 23.747863 / 8.074308 (15.673555) | 17.281712 / 10.191392 (7.090320) | 0.203874 / 0.680424 (-0.476550) | 0.021839 / 0.534201 (-0.512362) | 0.472365 / 0.579283 (-0.106918) | 0.475150 / 0.434364 (0.040786) | 0.571713 / 0.540337 (0.031376) | 0.759210 / 1.386936 (-0.627726) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c3a7fc003b1d181d8e8ece24d5ebd442ec5d6519 \"CML watermark\")\n" ]
2023-07-26T12:46:07
2023-07-26T14:01:21
null
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6076", "html_url": "https://github.com/huggingface/datasets/pull/6076", "diff_url": "https://github.com/huggingface/datasets/pull/6076.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6076.patch", "merged_at": null }
Don't accept gzip encoding from GitHub, otherwise some files are not streamable + seekable.

Fixes https://huggingface.co/datasets/code_x_glue_cc_code_to_code_trans/discussions/2#64c0e0c1a04a514ba6303e84 and makes sure https://github.com/huggingface/datasets/issues/2918 works as well.
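For context, a minimal illustration of the HTTP behavior involved (plain `requests` usage, not necessarily the code in this PR): with `identity` encoding the server sends the raw bytes, so the stream is byte-addressable and `Content-Length` reflects the real file size, which is what streaming with seeking needs:

```python
import requests

url = "https://raw.githubusercontent.com/huggingface/datasets/main/README.md"

# With gzip accepted, the transferred stream is compressed and not seekable.
r = requests.get(url, headers={"Accept-Encoding": "gzip"})
print(r.headers.get("Content-Encoding"))  # typically "gzip"

# With identity encoding, the raw file bytes are sent.
r = requests.get(url, headers={"Accept-Encoding": "identity"})
print(r.headers.get("Content-Length"))  # the actual file size in bytes
```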
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6076/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6076/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6075
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6075/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6075/comments
https://api.github.com/repos/huggingface/datasets/issues/6075/events
https://github.com/huggingface/datasets/issues/6075
1,822,341,398
I_kwDODunzps5snrkW
6,075
Error loading music files using `load_dataset`
{ "login": "susnato", "id": 56069179, "node_id": "MDQ6VXNlcjU2MDY5MTc5", "avatar_url": "https://avatars.githubusercontent.com/u/56069179?v=4", "gravatar_id": "", "url": "https://api.github.com/users/susnato", "html_url": "https://github.com/susnato", "followers_url": "https://api.github.com/users/susnato/followers", "following_url": "https://api.github.com/users/susnato/following{/other_user}", "gists_url": "https://api.github.com/users/susnato/gists{/gist_id}", "starred_url": "https://api.github.com/users/susnato/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/susnato/subscriptions", "organizations_url": "https://api.github.com/users/susnato/orgs", "repos_url": "https://api.github.com/users/susnato/repos", "events_url": "https://api.github.com/users/susnato/events{/privacy}", "received_events_url": "https://api.github.com/users/susnato/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "This code behaves as expected on my local machine or in Colab. Which version of `soundfile` do you have installed? MP3 requires `soundfile>=0.12.1`.", "I upgraded the `soundfile` and it's working now! \r\nThanks @mariosasko for the help!" ]
2023-07-26T12:44:05
2023-07-26T13:08:08
2023-07-26T13:08:08
NONE
null
null
null
### Describe the bug

I tried to load a music file using `datasets.load_dataset()` from the repository https://huggingface.co/datasets/susnato/pop2piano_real_music_test.

I got the following error:

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2803, in __getitem__
    return self._getitem(key)
  File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2788, in _getitem
    formatted_output = format_table(
  File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 629, in format_table
    return formatter(pa_table, query_type=query_type)
  File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 398, in __call__
    return self.format_column(pa_table)
  File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 442, in format_column
    column = self.python_features_decoder.decode_column(column, pa_table.column_names[0])
  File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 218, in decode_column
    return self.features.decode_column(column, column_name) if self.features else column
  File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/features/features.py", line 1924, in decode_column
    [decode_nested_example(self[column_name], value) if value is not None else None for value in column]
  File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/features/features.py", line 1924, in <listcomp>
    [decode_nested_example(self[column_name], value) if value is not None else None for value in column]
  File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/features/features.py", line 1325, in decode_nested_example
    return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
  File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/features/audio.py", line 184, in decode_example
    array, sampling_rate = sf.read(f)
  File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/soundfile.py", line 372, in read
    with SoundFile(file, 'r', samplerate, channels,
  File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/soundfile.py", line 740, in __init__
    self._file = self._open(file, mode_int, closefd)
  File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/soundfile.py", line 1264, in _open
    _error_check(_snd.sf_error(file_ptr),
  File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/soundfile.py", line 1455, in _error_check
    raise RuntimeError(prefix + _ffi.string(err_str).decode('utf-8', 'replace'))
RuntimeError: Error opening <_io.BufferedReader name='/home/susnato/.cache/huggingface/datasets/downloads/d2b09cb974b967b13f91553297c40c0f02f3c0d4c8356350743598ff48d6f29e'>: Format not recognised.
```

### Steps to reproduce the bug

Code to reproduce the error:

```python
from datasets import load_dataset

ds = load_dataset("susnato/pop2piano_real_music_test", split="test")
print(ds[0])
```

### Expected behavior

I should be able to read the music file without any error.

### Environment info

- `datasets` version: 2.14.0
- Platform: Linux-5.19.0-50-generic-x86_64-with-glibc2.35
- Python version: 3.9.16
- Huggingface_hub version: 0.15.1
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
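As the comments note, this was resolved by upgrading `soundfile`: MP3 decoding requires `soundfile>=0.12.1`. A quick version check before loading audio datasets:

```python
# pip install -U "soundfile>=0.12.1"  # MP3 support requires at least 0.12.1
import soundfile

print(soundfile.__version__)  # should be >= 0.12.1 for MP3 decoding
```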
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6075/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6075/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6074
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6074/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6074/comments
https://api.github.com/repos/huggingface/datasets/issues/6074/events
https://github.com/huggingface/datasets/pull/6074
1,822,299,128
PR_kwDODunzps5Wb8O_
6,074
Misc doc improvements
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006616 / 0.011353 (-0.004737) | 0.003915 / 0.011008 (-0.007093) | 0.083271 / 0.038508 (0.044763) | 0.072595 / 0.023109 (0.049485) | 0.307224 / 0.275898 (0.031326) | 0.337244 / 0.323480 (0.013764) | 0.005296 / 0.007986 (-0.002690) | 0.003325 / 0.004328 (-0.001003) | 0.064589 / 0.004250 (0.060339) | 0.056369 / 0.037052 (0.019316) | 0.310829 / 0.258489 (0.052340) | 0.345563 / 0.293841 (0.051722) | 0.030551 / 0.128546 (-0.097995) | 0.008519 / 0.075646 (-0.067127) | 0.286368 / 0.419271 (-0.132903) | 0.052498 / 0.043533 (0.008966) | 0.308735 / 0.255139 (0.053596) | 0.329234 / 0.283200 (0.046034) | 0.022588 / 0.141683 (-0.119095) | 1.453135 / 1.452155 (0.000981) | 1.525956 / 1.492716 (0.033239) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.199417 / 0.018006 (0.181410) | 0.454621 / 0.000490 (0.454131) | 0.004928 / 0.000200 (0.004728) | 0.000079 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028436 / 0.037411 (-0.008975) | 0.083722 / 0.014526 (0.069196) | 0.095162 / 0.176557 (-0.081395) | 0.153434 / 0.737135 (-0.583702) | 0.099480 / 0.296338 (-0.196859) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.384647 / 0.215209 (0.169438) | 3.838406 / 2.077655 (1.760751) | 1.891267 / 1.504120 (0.387148) | 1.751432 / 1.541195 (0.210238) | 1.737443 / 1.468490 
(0.268953) | 0.487758 / 4.584777 (-4.097019) | 3.635925 / 3.745712 (-0.109787) | 5.208718 / 5.269862 (-0.061144) | 3.029374 / 4.565676 (-1.536302) | 0.057613 / 0.424275 (-0.366662) | 0.007177 / 0.007607 (-0.000430) | 0.455596 / 0.226044 (0.229552) | 4.559969 / 2.268929 (2.291040) | 2.325321 / 55.444624 (-53.119303) | 2.034924 / 6.876477 (-4.841552) | 2.163869 / 2.142072 (0.021796) | 0.583477 / 4.805227 (-4.221750) | 0.132870 / 6.500664 (-6.367795) | 0.059618 / 0.075469 (-0.015851) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.263751 / 1.841788 (-0.578037) | 19.740004 / 8.074308 (11.665696) | 14.410980 / 10.191392 (4.219588) | 0.170367 / 0.680424 (-0.510057) | 0.018225 / 0.534201 (-0.515976) | 0.390101 / 0.579283 (-0.189182) | 0.404298 / 0.434364 (-0.030066) | 0.455295 / 0.540337 (-0.085043) | 0.621179 / 1.386936 (-0.765757) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006580 / 0.011353 (-0.004773) | 0.004078 / 0.011008 (-0.006930) | 0.065842 / 0.038508 (0.027334) | 0.074494 / 0.023109 (0.051385) | 0.403644 / 0.275898 (0.127746) | 0.430204 / 0.323480 (0.106724) | 0.005343 / 0.007986 (-0.002643) | 0.003366 / 0.004328 (-0.000963) | 0.064858 / 0.004250 (0.060607) | 0.056252 / 0.037052 (0.019200) | 0.412556 / 0.258489 (0.154067) | 0.434099 / 0.293841 (0.140258) | 0.031518 / 0.128546 (-0.097028) | 0.008543 / 0.075646 (-0.067104) | 0.071658 / 0.419271 (-0.347613) | 0.049962 / 0.043533 (0.006430) | 0.398511 / 0.255139 (0.143372) | 0.415908 / 0.283200 (0.132708) | 0.025011 / 0.141683 (-0.116672) | 1.492350 / 1.452155 (0.040195) | 1.552996 / 1.492716 (0.060280) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.204971 / 0.018006 (0.186964) | 0.439965 / 0.000490 (0.439475) | 0.002071 / 0.000200 (0.001872) | 0.000084 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031673 / 0.037411 (-0.005738) | 0.087529 / 0.014526 (0.073004) | 0.099882 / 0.176557 (-0.076675) | 0.156994 / 0.737135 (-0.580141) | 0.101421 / 0.296338 (-0.194918) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.407480 / 0.215209 (0.192271) | 4.069123 / 2.077655 (1.991468) | 2.081288 / 1.504120 (0.577169) | 1.920367 / 1.541195 (0.379172) | 1.981053 / 1.468490 (0.512563) | 0.481995 / 4.584777 (-4.102782) | 3.546486 / 3.745712 (-0.199226) | 5.133150 / 5.269862 (-0.136712) | 3.056444 / 4.565676 (-1.509232) | 0.056650 / 0.424275 (-0.367625) | 0.007746 / 0.007607 (0.000139) | 0.490891 / 0.226044 (0.264847) | 4.902160 / 2.268929 (2.633232) | 2.564726 / 55.444624 (-52.879899) | 2.234988 / 6.876477 (-4.641489) | 2.387656 / 2.142072 (0.245583) | 0.576315 / 4.805227 (-4.228912) | 0.132065 / 6.500664 (-6.368599) | 0.060728 / 0.075469 (-0.014741) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.370568 / 1.841788 (-0.471220) | 19.883159 / 8.074308 (11.808851) | 14.442066 / 10.191392 (4.250674) | 0.150119 / 0.680424 (-0.530305) | 0.018359 / 0.534201 (-0.515842) | 0.394128 / 0.579283 (-0.185155) | 0.411697 / 0.434364 (-0.022667) | 0.460580 / 0.540337 (-0.079757) | 0.608490 / 1.386936 (-0.778446) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#035d0cf842b82b14059999baa78e8d158dfbed12 \"CML watermark\")\n", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6074). All of your documentation changes will be reflected on that endpoint." ]
2023-07-26T12:20:54
2023-07-26T14:42:56
null
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6074", "html_url": "https://github.com/huggingface/datasets/pull/6074", "diff_url": "https://github.com/huggingface/datasets/pull/6074.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6074.patch", "merged_at": null }
Removes the warning about needing to write a dataset loading script to define multiple configurations, as the README YAML can be used instead (for simple cases). Also deletes the section about using the `BatchSampler` in `torch<=1.12.1` to speed up loading, as `torch 1.12.1` is over a year old (and `torch 2.0` has been out for a while).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6074/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6074/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6073
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6073/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6073/comments
https://api.github.com/repos/huggingface/datasets/issues/6073/events
https://github.com/huggingface/datasets/issues/6073
1,822,167,804
I_kwDODunzps5snBL8
6,073
version 2.3.2: load_dataset() data_files can't include .xxxx in path
{ "login": "BUAAChuanWang", "id": 45893496, "node_id": "MDQ6VXNlcjQ1ODkzNDk2", "avatar_url": "https://avatars.githubusercontent.com/u/45893496?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BUAAChuanWang", "html_url": "https://github.com/BUAAChuanWang", "followers_url": "https://api.github.com/users/BUAAChuanWang/followers", "following_url": "https://api.github.com/users/BUAAChuanWang/following{/other_user}", "gists_url": "https://api.github.com/users/BUAAChuanWang/gists{/gist_id}", "starred_url": "https://api.github.com/users/BUAAChuanWang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BUAAChuanWang/subscriptions", "organizations_url": "https://api.github.com/users/BUAAChuanWang/orgs", "repos_url": "https://api.github.com/users/BUAAChuanWang/repos", "events_url": "https://api.github.com/users/BUAAChuanWang/events{/privacy}", "received_events_url": "https://api.github.com/users/BUAAChuanWang/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Version 2.3.2 is over one year old, so please use the latest release (2.14.0) to get the expected behavior. Version 2.3.2 does not contain some fixes we made to fix resolving hidden files/directories (starting with a dot)." ]
2023-07-26T11:09:31
2023-07-26T12:34:45
null
NONE
null
null
null
### Describe the bug

First, I `cd` to the workdir. Then, I just use:

```python
load_dataset(
    "json",
    data_files={"train": "/a/b/c/.d/train/train.json", "test": "/a/b/c/.d/train/test.json"},
)
```

That doesn't work, and fails with `FileNotFoundError: Unable to find '/a/b/c/.d/train/train.jsonl' at /a/b/c/.d/`.

When I debug, it is fine in version 2.1.2, so there may be a bug in the path join.

Here is the whole bug report:

```
/x/datasets/load.py:1656 in load_dataset

  1653 │   ignore_verifications = ignore_verifications or save_infos
  1654 │
  1655 │   # Create a dataset builder
❱ 1656 │   builder_instance = load_dataset_builder(
  1657 │       path=path,
  1658 │       name=name,
  1659 │       data_dir=data_dir,

/x/datasets/load.py:1439 in load_dataset_builder

  1436 │   if use_auth_token is not None:
  1437 │       download_config = download_config.copy() if download_config e
  1438 │       download_config.use_auth_token = use_auth_token
❱ 1439 │   dataset_module = dataset_module_factory(
  1440 │       path,
  1441 │       revision=revision,
  1442 │       download_config=download_config,

/x/datasets/load.py:1097 in dataset_module_factory

  1094 │
  1095 │   # Try packaged
  1096 │   if path in _PACKAGED_DATASETS_MODULES:
❱ 1097 │       return PackagedDatasetModuleFactory(
  1098 │           path,
  1099 │           data_dir=data_dir,
  1100 │           data_files=data_files,

/x/datasets/load.py:743 in get_module

   740 │       if self.data_dir is not None
   741 │       else get_patterns_locally(str(Path().resolve()))
   742 │   )
❱  743 │   data_files = DataFilesDict.from_local_or_remote(
   744 │       patterns,
   745 │       use_auth_token=self.download_config.use_auth_token,
   746 │       base_path=str(Path(self.data_dir).resolve()) if self.data

/x/datasets/data_files.py:590 in from_local_or_remote

   587 │   out = cls()
   588 │   for key, patterns_for_key in patterns.items():
   589 │       out[key] = (
❱  590 │           DataFilesList.from_local_or_remote(
   591 │               patterns_for_key,
   592 │               base_path=base_path,
   593 │               allowed_extensions=allowed_extensions,

/x/datasets/data_files.py:558 in from_local_or_remote

   555 │       use_auth_token: Optional[Union[bool, str]] = None,
   556 │   ) -> "DataFilesList":
   557 │       base_path = base_path if base_path is not None else str(Path()
❱  558 │       data_files = resolve_patterns_locally_or_by_urls(base_path, pa
   559 │       origin_metadata = _get_origin_metadata_locally_or_by_urls(data
   560 │       return cls(data_files, origin_metadata)

/x/datasets/data_files.py:195 in resolve_patterns_locally_or_by_urls

   192 │       if is_remote_url(pattern):
   193 │           data_files.append(Url(pattern))
   194 │       else:
❱  195 │           for path in _resolve_single_pattern_locally(base_path, pat
   196 │               data_files.append(path)
   197 │
   198 │   if not data_files:

/x/datasets/data_files.py:145 in _resolve_single_pattern_locally

   142 │       error_msg = f"Unable to find '{pattern}' at {Path(base_path).r
   143 │       if allowed_extensions is not None:
   144 │           error_msg += f" with any supported extension {list(allowed
❱  145 │       raise FileNotFoundError(error_msg)
   146 │   return sorted(out)
   147
```

### Steps to reproduce the bug

1. Version = 2.3.2
2. In a shell, `cd` to the workdir (`cd /a/b/c/.d/`)
3. Run:

```python
load_dataset(
    "json",
    data_files={"train": "/a/b/c/.d/train/train.json", "test": "/a/b/c/.d/train/test.json"},
)
```

### Expected behavior

Fix it please~

### Environment info

- `datasets` version: 2.3.2
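As the maintainer's comment above says, resolving hidden files/directories (paths with a dot component like `.d`) was fixed after 2.3.2, so upgrading is the remedy. A minimal sanity check under that assumption, reusing the placeholder paths from the report:

```python
# pip install -U datasets  # e.g. 2.14.0; 2.3.2 predates the hidden-path fixes
import datasets
from datasets import load_dataset

print(datasets.__version__)  # should be a recent release such as 2.14.0

ds = load_dataset(
    "json",
    data_files={
        "train": "/a/b/c/.d/train/train.json",  # placeholder paths from the report
        "test": "/a/b/c/.d/train/test.json",
    },
)
```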
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6073/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6073/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6072
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6072/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6072/comments
https://api.github.com/repos/huggingface/datasets/issues/6072/events
https://github.com/huggingface/datasets/pull/6072
1,822,123,560
PR_kwDODunzps5WbWFN
6,072
Fix fsspec storage_options from load_dataset
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6072). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007617 / 0.011353 (-0.003736) | 0.004580 / 0.011008 (-0.006428) | 0.100913 / 0.038508 (0.062405) | 0.087703 / 0.023109 (0.064594) | 0.424159 / 0.275898 (0.148261) | 0.467195 / 0.323480 (0.143715) | 0.006890 / 0.007986 (-0.001096) | 0.003765 / 0.004328 (-0.000564) | 0.077513 / 0.004250 (0.073262) | 0.064889 / 0.037052 (0.027837) | 0.422349 / 0.258489 (0.163860) | 0.477391 / 0.293841 (0.183550) | 0.036025 / 0.128546 (-0.092522) | 0.009939 / 0.075646 (-0.065707) | 0.342409 / 0.419271 (-0.076862) | 0.061568 / 0.043533 (0.018035) | 0.431070 / 0.255139 (0.175931) | 0.462008 / 0.283200 (0.178809) | 0.027480 / 0.141683 (-0.114203) | 1.802271 / 1.452155 (0.350116) | 1.861336 / 1.492716 (0.368620) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.255806 / 0.018006 (0.237800) | 0.507969 / 0.000490 (0.507479) | 0.010060 / 0.000200 (0.009860) | 0.000112 / 0.000054 (0.000058) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032286 / 0.037411 (-0.005125) | 0.104468 / 0.014526 (0.089942) | 0.112707 / 0.176557 (-0.063850) | 0.181285 / 0.737135 (-0.555850) | 0.113180 / 0.296338 (-0.183158) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / 
old (diff) | 0.449265 / 0.215209 (0.234056) | 4.465941 / 2.077655 (2.388287) | 2.177889 / 1.504120 (0.673769) | 1.969864 / 1.541195 (0.428669) | 2.077502 / 1.468490 (0.609011) | 0.561607 / 4.584777 (-4.023170) | 4.281873 / 3.745712 (0.536161) | 4.975352 / 5.269862 (-0.294510) | 2.907121 / 4.565676 (-1.658555) | 0.070205 / 0.424275 (-0.354070) | 0.009164 / 0.007607 (0.001557) | 0.581921 / 0.226044 (0.355876) | 5.538667 / 2.268929 (3.269739) | 2.798853 / 55.444624 (-52.645771) | 2.314015 / 6.876477 (-4.562462) | 2.584836 / 2.142072 (0.442763) | 0.672333 / 4.805227 (-4.132894) | 0.153828 / 6.500664 (-6.346836) | 0.069757 / 0.075469 (-0.005712) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.559670 / 1.841788 (-0.282118) | 23.994639 / 8.074308 (15.920331) | 16.856160 / 10.191392 (6.664768) | 0.195555 / 0.680424 (-0.484869) | 0.021586 / 0.534201 (-0.512615) | 0.469295 / 0.579283 (-0.109989) | 0.481582 / 0.434364 (0.047218) | 0.588667 / 0.540337 (0.048329) | 0.734347 / 1.386936 (-0.652589) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009614 / 0.011353 (-0.001739) | 0.004616 / 0.011008 (-0.006392) | 0.077223 / 0.038508 (0.038715) | 0.103074 / 0.023109 (0.079965) | 0.447834 / 0.275898 (0.171936) | 0.524696 / 0.323480 (0.201216) | 0.007120 / 0.007986 (-0.000866) | 0.003890 / 0.004328 (-0.000438) | 0.076406 / 0.004250 (0.072156) | 0.073488 / 0.037052 (0.036436) | 0.466221 / 0.258489 (0.207732) | 0.532206 / 0.293841 (0.238365) | 0.037596 / 0.128546 (-0.090950) | 0.010029 / 0.075646 (-0.065617) | 0.084313 / 0.419271 (-0.334959) | 0.060088 / 0.043533 (0.016555) | 0.437792 / 0.255139 (0.182653) | 0.512850 / 0.283200 (0.229650) | 0.032424 / 0.141683 (-0.109259) | 1.762130 / 1.452155 (0.309975) | 1.946097 / 1.492716 (0.453381) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.250774 / 0.018006 (0.232768) | 0.506869 / 0.000490 (0.506379) | 0.008232 / 0.000200 
(0.008032) | 0.000164 / 0.000054 (0.000110) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037779 / 0.037411 (0.000368) | 0.111933 / 0.014526 (0.097407) | 0.122385 / 0.176557 (-0.054172) | 0.190372 / 0.737135 (-0.546763) | 0.122472 / 0.296338 (-0.173866) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.488502 / 0.215209 (0.273293) | 4.878114 / 2.077655 (2.800459) | 2.504144 / 1.504120 (1.000024) | 2.321077 / 1.541195 (0.779883) | 2.416797 / 1.468490 (0.948307) | 0.583582 / 4.584777 (-4.001195) | 4.277896 / 3.745712 (0.532184) | 3.874780 / 5.269862 (-1.395082) | 2.540099 / 4.565676 (-2.025577) | 0.068734 / 0.424275 (-0.355541) | 0.009158 / 0.007607 (0.001550) | 0.578401 / 0.226044 (0.352357) | 5.763354 / 2.268929 (3.494426) | 3.167771 / 55.444624 (-52.276853) | 2.675220 / 6.876477 (-4.201257) | 2.920927 / 2.142072 (0.778855) | 0.673948 / 4.805227 (-4.131280) | 0.157908 / 6.500664 (-6.342756) | 0.071672 / 0.075469 (-0.003797) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.635120 / 1.841788 (-0.206668) | 24.853480 / 8.074308 (16.779172) | 17.162978 / 10.191392 (6.971586) | 0.209577 / 0.680424 (-0.470847) | 0.030110 / 0.534201 (-0.504091) | 0.546970 / 0.579283 (-0.032313) | 0.581912 / 0.434364 (0.147548) | 0.571460 / 0.540337 (0.031123) | 0.823411 / 1.386936 (-0.563525) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#83b792dddd074ccd007c407f942f6870aac7ee84 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after 
write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006674 / 0.011353 (-0.004679) | 0.004198 / 0.011008 (-0.006810) | 0.084859 / 0.038508 (0.046351) | 0.076065 / 0.023109 (0.052955) | 0.316065 / 0.275898 (0.040167) | 0.352097 / 0.323480 (0.028617) | 0.005610 / 0.007986 (-0.002376) | 0.003600 / 0.004328 (-0.000729) | 0.064921 / 0.004250 (0.060671) | 0.054493 / 0.037052 (0.017441) | 0.318125 / 0.258489 (0.059636) | 0.370183 / 0.293841 (0.076342) | 0.031141 / 0.128546 (-0.097405) | 0.008755 / 0.075646 (-0.066891) | 0.288241 / 0.419271 (-0.131030) | 0.052379 / 0.043533 (0.008846) | 0.328147 / 0.255139 (0.073008) | 0.347548 / 0.283200 (0.064348) | 0.024393 / 0.141683 (-0.117290) | 1.480646 / 1.452155 (0.028492) | 1.575867 / 1.492716 (0.083151) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.268978 / 0.018006 (0.250971) | 0.586470 / 0.000490 (0.585980) | 0.003190 / 0.000200 (0.002990) | 0.000081 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030595 / 0.037411 (-0.006816) | 0.083037 / 0.014526 (0.068511) | 0.103706 / 0.176557 (-0.072850) | 0.164104 / 0.737135 (-0.573031) | 0.104536 / 0.296338 (-0.191802) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.382274 / 0.215209 (0.167065) | 3.811878 / 2.077655 (1.734223) | 1.840098 / 1.504120 (0.335978) | 1.670949 / 1.541195 (0.129754) | 1.763755 / 1.468490 (0.295264) | 0.479526 / 4.584777 (-4.105251) | 3.544443 / 3.745712 (-0.201269) | 3.263004 / 5.269862 (-2.006858) | 2.092801 / 4.565676 (-2.472875) | 0.057167 / 0.424275 (-0.367108) | 0.007450 / 0.007607 (-0.000157) | 0.463731 / 0.226044 (0.237686) | 4.624630 / 2.268929 (2.355701) | 2.327078 / 55.444624 (-53.117546) | 1.977734 / 6.876477 (-4.898743) | 2.237152 / 2.142072 (0.095079) | 0.573210 / 4.805227 (-4.232018) | 0.132095 / 6.500664 (-6.368569) | 0.060283 / 0.075469 (-0.015186) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.243404 / 1.841788 (-0.598384) | 20.306778 / 8.074308 (12.232470) | 14.561660 / 10.191392 (4.370268) | 0.170826 / 0.680424 (-0.509598) | 0.018574 / 0.534201 (-0.515627) | 0.392367 / 0.579283 (-0.186916) | 0.402918 
/ 0.434364 (-0.031446) | 0.476629 / 0.540337 (-0.063708) | 0.653709 / 1.386936 (-0.733227) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006562 / 0.011353 (-0.004791) | 0.004092 / 0.011008 (-0.006916) | 0.065951 / 0.038508 (0.027443) | 0.078090 / 0.023109 (0.054981) | 0.369679 / 0.275898 (0.093781) | 0.411442 / 0.323480 (0.087962) | 0.005646 / 0.007986 (-0.002339) | 0.003537 / 0.004328 (-0.000791) | 0.066024 / 0.004250 (0.061773) | 0.058947 / 0.037052 (0.021895) | 0.389219 / 0.258489 (0.130730) | 0.414200 / 0.293841 (0.120359) | 0.030372 / 0.128546 (-0.098174) | 0.008631 / 0.075646 (-0.067015) | 0.071692 / 0.419271 (-0.347580) | 0.048035 / 0.043533 (0.004502) | 0.376960 / 0.255139 (0.121821) | 0.389847 / 0.283200 (0.106648) | 0.023940 / 0.141683 (-0.117743) | 1.487633 / 1.452155 (0.035479) | 1.561680 / 1.492716 (0.068964) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.301467 / 0.018006 (0.283461) | 0.544159 / 0.000490 (0.543669) | 0.000408 / 0.000200 (0.000208) | 0.000055 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030939 / 0.037411 (-0.006472) | 0.087432 / 0.014526 (0.072906) | 0.103263 / 0.176557 (-0.073293) | 0.154551 / 0.737135 (-0.582585) | 0.104631 / 0.296338 (-0.191707) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422348 / 0.215209 (0.207139) | 4.206003 / 2.077655 (2.128348) | 2.212619 / 1.504120 (0.708499) 
| 2.049616 / 1.541195 (0.508421) | 2.139093 / 1.468490 (0.670603) | 0.489647 / 4.584777 (-4.095130) | 3.523291 / 3.745712 (-0.222422) | 3.277657 / 5.269862 (-1.992205) | 2.111353 / 4.565676 (-2.454324) | 0.057597 / 0.424275 (-0.366679) | 0.007675 / 0.007607 (0.000068) | 0.493068 / 0.226044 (0.267023) | 4.939493 / 2.268929 (2.670565) | 2.695995 / 55.444624 (-52.748630) | 2.374904 / 6.876477 (-4.501573) | 2.600110 / 2.142072 (0.458038) | 0.586306 / 4.805227 (-4.218921) | 0.134137 / 6.500664 (-6.366527) | 0.061897 / 0.075469 (-0.013572) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.330628 / 1.841788 (-0.511160) | 20.557964 / 8.074308 (12.483656) | 14.251632 / 10.191392 (4.060240) | 0.148772 / 0.680424 (-0.531652) | 0.018383 / 0.534201 (-0.515817) | 0.392552 / 0.579283 (-0.186731) | 0.403959 / 0.434364 (-0.030405) | 0.462154 / 0.540337 (-0.078184) | 0.608832 / 1.386936 (-0.778104) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7a291b2b659a356199dff0ab004ad3845459034b \"CML watermark\")\n" ]
2023-07-26T10:44:23
2023-07-26T13:01:27
null
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6072", "html_url": "https://github.com/huggingface/datasets/pull/6072", "diff_url": "https://github.com/huggingface/datasets/pull/6072.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6072.patch", "merged_at": null }
close https://github.com/huggingface/datasets/issues/6071
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6072/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6072/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6071
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6071/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6071/comments
https://api.github.com/repos/huggingface/datasets/issues/6071/events
https://github.com/huggingface/datasets/issues/6071
1,821,990,749
I_kwDODunzps5smV9d
6,071
storage_options provided to load_dataset not fully piping through since datasets 2.14.0
{ "login": "exs-avianello", "id": 128361578, "node_id": "U_kgDOB6akag", "avatar_url": "https://avatars.githubusercontent.com/u/128361578?v=4", "gravatar_id": "", "url": "https://api.github.com/users/exs-avianello", "html_url": "https://github.com/exs-avianello", "followers_url": "https://api.github.com/users/exs-avianello/followers", "following_url": "https://api.github.com/users/exs-avianello/following{/other_user}", "gists_url": "https://api.github.com/users/exs-avianello/gists{/gist_id}", "starred_url": "https://api.github.com/users/exs-avianello/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/exs-avianello/subscriptions", "organizations_url": "https://api.github.com/users/exs-avianello/orgs", "repos_url": "https://api.github.com/users/exs-avianello/repos", "events_url": "https://api.github.com/users/exs-avianello/events{/privacy}", "received_events_url": "https://api.github.com/users/exs-avianello/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi ! Thanks for reporting, I opened a PR to fix this\r\n\r\nWhat filesystem are you using ?", "Hi @lhoestq ! Thank you so much 🙌 \r\n\r\nIt's a bit of a custom setup, but in practice I am using a [pyarrow.fs.S3FileSystem](https://arrow.apache.org/docs/python/generated/pyarrow.fs.S3FileSystem.html) (wrapped in a `fsspec.implementations.arrow.ArrowFSWrapper` [to make it](https://arrow.apache.org/docs/python/filesystems.html#using-arrow-filesystems-with-fsspec) `fsspec` compatible). I also register it as an entrypoint with `fsspec` so that it's the one that gets automatically resolved when looking for filesystems for the `s3` protocol\r\n\r\nIn my case the `storage_option` that seemed not getting piped through was the filesystem's `endpoint_override` that I use in some tests to point at a mock S3 bucket" ]
2023-07-26T09:37:20
2023-07-26T11:04:35
null
NONE
null
null
null
### Describe the bug Since the latest release of `datasets` (`2.14.0`), custom filesystem `storage_options` passed to `load_dataset()` do not seem to propagate through all the way - leading to problems if loading data files that need those options to be set. I think this is because of the new `_prepare_path_and_storage_options()` (https://github.com/huggingface/datasets/pull/6028), which returns the right `storage_options` to use given a path and a `DownloadConfig` - but which might not be taking into account the extra `storage_options` explicitly provided e.g. through `load_dataset()` ### Steps to reproduce the bug ```python import fsspec import pandas as pd import datasets # Generate mock parquet file data_files = "demo.parquet" pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]}).to_parquet(data_files) _storage_options = {"x": 1, "y": 2} fs = fsspec.filesystem("file", **_storage_options) dataset = datasets.load_dataset( "parquet", data_files=data_files, storage_options=fs.storage_options ) ``` Looking at the `storage_options` resolved here: https://github.com/huggingface/datasets/blob/b0177910b32712f28d147879395e511207e39958/src/datasets/data_files.py#L331 they end up being `{}`, instead of propagating through the `storage_options` that were provided to `load_dataset` (`fs.storage_options`). As these then get used for the filesystem operation a few lines below https://github.com/huggingface/datasets/blob/b0177910b32712f28d147879395e511207e39958/src/datasets/data_files.py#L339 the call will fail if the user-provided `storage_options` were needed. --- A temporary workaround that seemed to work locally to bypass the problem was to bundle a duplicate of the `storage_options` into the `download_config`, so that they make their way all the way to `_prepare_path_and_storage_options()` and get extracted correctly: ```python dataset = datasets.load_dataset( "parquet", data_files=data_files, storage_options=fs.storage_options, download_config=datasets.DownloadConfig(storage_options={fs.protocol: fs.storage_options}), ) ``` ### Expected behavior `storage_options` provided to `load_dataset` take effect in all backend filesystem operations. ### Environment info datasets==2.14.0
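To generalize the workaround above, here is a hedged sketch of a helper that duplicates the user-provided `storage_options` into the `DownloadConfig`, keyed by protocol; the helper name and protocol lookup are illustrative, not part of the `datasets` API:
```python
import fsspec.utils
import datasets

def load_dataset_with_storage_options(builder, data_files, storage_options):
    # Illustrative helper: bundle storage_options into a DownloadConfig
    # keyed by protocol so they reach _prepare_path_and_storage_options()
    # and the backend filesystem calls. Assumes data_files is a path or a
    # dict of split -> path.
    first_file = data_files if isinstance(data_files, str) else next(iter(data_files.values()))
    protocol = fsspec.utils.get_protocol(first_file)
    download_config = datasets.DownloadConfig(storage_options={protocol: storage_options})
    return datasets.load_dataset(
        builder,
        data_files=data_files,
        storage_options=storage_options,
        download_config=download_config,
    )

# e.g. with the local-file repro above:
# ds = load_dataset_with_storage_options("parquet", "demo.parquet", {"x": 1, "y": 2})
```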
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6071/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6071/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6070
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6070/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6070/comments
https://api.github.com/repos/huggingface/datasets/issues/6070/events
https://github.com/huggingface/datasets/pull/6070
1,820,836,330
PR_kwDODunzps5WXDLc
6,070
Fix Quickstart notebook link
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008473 / 0.011353 (-0.002880) | 0.004734 / 0.011008 (-0.006274) | 0.103895 / 0.038508 (0.065387) | 0.071838 / 0.023109 (0.048729) | 0.379949 / 0.275898 (0.104051) | 0.397375 / 0.323480 (0.073895) | 0.006695 / 0.007986 (-0.001290) | 0.004536 / 0.004328 (0.000207) | 0.076151 / 0.004250 (0.071901) | 0.058690 / 0.037052 (0.021638) | 0.379937 / 0.258489 (0.121448) | 0.411833 / 0.293841 (0.117992) | 0.046805 / 0.128546 (-0.081741) | 0.013689 / 0.075646 (-0.061958) | 0.327896 / 0.419271 (-0.091375) | 0.063873 / 0.043533 (0.020340) | 0.378451 / 0.255139 (0.123312) | 0.398725 / 0.283200 (0.115525) | 0.034961 / 0.141683 (-0.106722) | 1.604999 / 1.452155 (0.152845) | 1.748370 / 1.492716 (0.255654) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224634 / 0.018006 (0.206628) | 0.548468 / 0.000490 (0.547979) | 0.005049 / 0.000200 (0.004849) | 0.000097 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028144 / 0.037411 (-0.009267) | 0.092184 / 0.014526 (0.077659) | 0.102987 / 0.176557 (-0.073570) | 0.176987 / 0.737135 (-0.560149) | 0.103093 / 0.296338 (-0.193246) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.578410 / 0.215209 (0.363201) | 5.664781 / 2.077655 (3.587126) | 2.487763 
/ 1.504120 (0.983643) | 2.254213 / 1.541195 (0.713018) | 2.239693 / 1.468490 (0.771202) | 0.810380 / 4.584777 (-3.774397) | 5.036540 / 3.745712 (1.290828) | 7.064695 / 5.269862 (1.794834) | 4.215101 / 4.565676 (-0.350575) | 0.089792 / 0.424275 (-0.334483) | 0.008487 / 0.007607 (0.000879) | 0.692292 / 0.226044 (0.466248) | 6.780226 / 2.268929 (4.511297) | 3.245510 / 55.444624 (-52.199114) | 2.575984 / 6.876477 (-4.300493) | 2.747546 / 2.142072 (0.605473) | 0.956604 / 4.805227 (-3.848623) | 0.198937 / 6.500664 (-6.301727) | 0.070849 / 0.075469 (-0.004620) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.536469 / 1.841788 (-0.305319) | 21.750583 / 8.074308 (13.676275) | 20.559532 / 10.191392 (10.368140) | 0.241244 / 0.680424 (-0.439180) | 0.030078 / 0.534201 (-0.504123) | 0.462204 / 0.579283 (-0.117079) | 0.600103 / 0.434364 (0.165739) | 0.535074 / 0.540337 (-0.005264) | 0.764427 / 1.386936 (-0.622509) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009712 / 0.011353 (-0.001641) | 0.005036 / 0.011008 (-0.005972) | 0.073683 / 0.038508 (0.035175) | 0.078684 / 0.023109 (0.055574) | 0.445096 / 0.275898 (0.169198) | 0.496233 / 0.323480 (0.172754) | 0.006231 / 0.007986 (-0.001755) | 0.004720 / 0.004328 (0.000392) | 0.076444 / 0.004250 (0.072194) | 0.060932 / 0.037052 (0.023880) | 0.505727 / 0.258489 (0.247238) | 0.498702 / 0.293841 (0.204861) | 0.047115 / 0.128546 (-0.081431) | 0.014028 / 0.075646 (-0.061618) | 0.099292 / 0.419271 (-0.319980) | 0.061571 / 0.043533 (0.018038) | 0.468435 / 0.255139 (0.213296) | 0.481747 / 0.283200 (0.198547) | 0.033962 / 0.141683 (-0.107721) | 1.665397 / 1.452155 (0.213242) | 1.830488 / 1.492716 (0.337772) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.268217 / 0.018006 (0.250211) | 0.555123 / 0.000490 (0.554633) | 0.000451 / 0.000200 (0.000251) | 0.000156 / 0.000054 (0.000101) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034262 / 0.037411 (-0.003150) | 0.107807 / 0.014526 (0.093281) | 0.115631 / 0.176557 (-0.060926) | 0.175914 / 0.737135 (-0.561221) | 0.118775 / 0.296338 (-0.177564) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.583260 / 0.215209 (0.368051) | 5.934976 / 2.077655 (3.857321) | 2.752304 / 1.504120 (1.248184) | 2.382746 / 1.541195 (0.841551) | 2.389402 / 1.468490 (0.920912) | 0.794213 / 4.584777 (-3.790564) | 5.215269 / 3.745712 (1.469557) | 7.083595 / 5.269862 (1.813733) | 3.776136 / 4.565676 (-0.789540) | 0.091141 / 0.424275 (-0.333135) | 0.008803 / 0.007607 (0.001196) | 0.726510 / 0.226044 (0.500465) | 6.926860 / 2.268929 (4.657931) | 3.475612 / 55.444624 (-51.969012) | 2.730237 / 6.876477 (-4.146240) | 2.879145 / 2.142072 (0.737073) | 0.959956 / 4.805227 (-3.845271) | 0.189812 / 6.500664 (-6.310852) | 0.071624 / 0.075469 (-0.003845) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.748184 / 1.841788 (-0.093603) | 23.764520 / 8.074308 (15.690212) | 19.502461 / 10.191392 (9.311069) | 0.233987 / 0.680424 (-0.446437) | 0.028116 / 0.534201 (-0.506085) | 0.478838 / 0.579283 (-0.100445) | 0.560952 / 0.434364 (0.126588) | 0.529902 / 0.540337 (-0.010435) | 0.735095 / 1.386936 (-0.651841) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#dda3e389212f44117a40b44bb0cdf358cfd9f71e \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006735 / 0.011353 (-0.004618) | 0.004131 / 0.011008 (-0.006878) | 0.085619 / 0.038508 (0.047111) | 0.076973 / 0.023109 (0.053864) | 0.315175 / 0.275898 (0.039277) | 0.354703 / 0.323480 (0.031223) | 0.005409 / 0.007986 (-0.002577) | 0.003438 / 0.004328 (-0.000891) | 0.064773 / 0.004250 (0.060523) | 0.056117 / 0.037052 (0.019064) | 0.313825 / 0.258489 (0.055336) | 0.354654 / 0.293841 (0.060813) | 0.031384 / 0.128546 (-0.097163) | 0.008537 / 0.075646 (-0.067109) | 0.288528 / 0.419271 (-0.130744) | 0.053036 / 0.043533 (0.009504) | 0.312213 / 0.255139 (0.057074) | 0.335952 / 0.283200 (0.052752) | 0.023165 / 0.141683 (-0.118518) | 1.497559 / 1.452155 (0.045404) | 1.561949 / 1.492716 (0.069233) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212558 / 0.018006 (0.194552) | 0.456555 / 0.000490 (0.456065) | 0.000334 / 0.000200 (0.000134) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028571 / 0.037411 (-0.008840) | 0.085154 / 0.014526 (0.070628) | 0.095961 / 0.176557 (-0.080596) | 0.153041 / 0.737135 (-0.584094) | 0.099234 / 0.296338 (-0.197105) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.381796 / 0.215209 (0.166587) | 3.806948 / 2.077655 (1.729294) | 1.829597 / 1.504120 (0.325477) | 1.659065 / 1.541195 (0.117870) | 1.738524 / 1.468490 (0.270034) | 0.483379 / 4.584777 (-4.101398) | 3.540648 / 3.745712 (-0.205064) | 3.269188 / 5.269862 (-2.000673) | 2.042113 / 4.565676 (-2.523564) | 0.056905 / 0.424275 (-0.367370) | 0.007235 / 0.007607 (-0.000373) | 0.460581 / 0.226044 (0.234537) | 4.597451 / 2.268929 (2.328522) | 2.334284 / 55.444624 (-53.110340) | 1.960026 / 6.876477 (-4.916450) | 2.172118 / 2.142072 (0.030045) | 0.576758 / 4.805227 (-4.228470) | 0.131196 / 6.500664 (-6.369468) | 0.060053 / 0.075469 (-0.015417) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.289466 / 1.841788 (-0.552322) | 19.713059 / 8.074308 (11.638750) | 14.292390 / 10.191392 (4.100998) | 0.146199 / 0.680424 (-0.534225) | 0.018123 / 0.534201 (-0.516078) | 0.392492 / 0.579283 (-0.186791) | 0.416544 / 0.434364 (-0.017820) | 0.457166 / 0.540337 
(-0.083171) | 0.645490 / 1.386936 (-0.741446) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006508 / 0.011353 (-0.004845) | 0.004010 / 0.011008 (-0.006998) | 0.065201 / 0.038508 (0.026693) | 0.076322 / 0.023109 (0.053213) | 0.364198 / 0.275898 (0.088300) | 0.398251 / 0.323480 (0.074771) | 0.005328 / 0.007986 (-0.002658) | 0.003298 / 0.004328 (-0.001031) | 0.064378 / 0.004250 (0.060128) | 0.056053 / 0.037052 (0.019000) | 0.365431 / 0.258489 (0.106942) | 0.402777 / 0.293841 (0.108936) | 0.031014 / 0.128546 (-0.097532) | 0.008507 / 0.075646 (-0.067140) | 0.071471 / 0.419271 (-0.347801) | 0.048300 / 0.043533 (0.004768) | 0.359700 / 0.255139 (0.104561) | 0.382244 / 0.283200 (0.099044) | 0.023783 / 0.141683 (-0.117900) | 1.517518 / 1.452155 (0.065363) | 1.569732 / 1.492716 (0.077015) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.257447 / 0.018006 (0.239440) | 0.452598 / 0.000490 (0.452109) | 0.015187 / 0.000200 (0.014987) | 0.000164 / 0.000054 (0.000109) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030958 / 0.037411 (-0.006454) | 0.090066 / 0.014526 (0.075540) | 0.101120 / 0.176557 (-0.075437) | 0.154295 / 0.737135 (-0.582840) | 0.103582 / 0.296338 (-0.192756) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.415945 / 0.215209 (0.200736) | 4.146464 / 2.077655 (2.068809) | 2.121414 / 1.504120 (0.617294) | 1.956885 / 1.541195 (0.415690) | 2.047955 
/ 1.468490 (0.579465) | 0.486334 / 4.584777 (-4.098443) | 3.506263 / 3.745712 (-0.239449) | 4.942274 / 5.269862 (-0.327587) | 2.907836 / 4.565676 (-1.657841) | 0.057344 / 0.424275 (-0.366931) | 0.007813 / 0.007607 (0.000206) | 0.497888 / 0.226044 (0.271844) | 4.978017 / 2.268929 (2.709089) | 2.600447 / 55.444624 (-52.844177) | 2.335050 / 6.876477 (-4.541427) | 2.480373 / 2.142072 (0.338301) | 0.597954 / 4.805227 (-4.207274) | 0.134794 / 6.500664 (-6.365870) | 0.062605 / 0.075469 (-0.012864) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.344390 / 1.841788 (-0.497398) | 20.020067 / 8.074308 (11.945759) | 14.344626 / 10.191392 (4.153234) | 0.172101 / 0.680424 (-0.508322) | 0.018549 / 0.534201 (-0.515652) | 0.393589 / 0.579283 (-0.185694) | 0.438401 / 0.434364 (0.004037) | 0.463800 / 0.540337 (-0.076537) | 0.618269 / 1.386936 (-0.768667) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b0177910b32712f28d147879395e511207e39958 \"CML watermark\")\n" ]
2023-07-25T17:48:37
2023-07-25T18:19:01
2023-07-25T18:10:16
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6070", "html_url": "https://github.com/huggingface/datasets/pull/6070", "diff_url": "https://github.com/huggingface/datasets/pull/6070.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6070.patch", "merged_at": "2023-07-25T18:10:16" }
Reported in https://github.com/huggingface/datasets/pull/5902#issuecomment-1649885621 (cc @alvarobartt)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6070/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6070/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6069
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6069/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6069/comments
https://api.github.com/repos/huggingface/datasets/issues/6069/events
https://github.com/huggingface/datasets/issues/6069
1,820,831,535
I_kwDODunzps5sh68v
6,069
KeyError: dataset has no key "image"
{ "login": "etetteh", "id": 28512232, "node_id": "MDQ6VXNlcjI4NTEyMjMy", "avatar_url": "https://avatars.githubusercontent.com/u/28512232?v=4", "gravatar_id": "", "url": "https://api.github.com/users/etetteh", "html_url": "https://github.com/etetteh", "followers_url": "https://api.github.com/users/etetteh/followers", "following_url": "https://api.github.com/users/etetteh/following{/other_user}", "gists_url": "https://api.github.com/users/etetteh/gists{/gist_id}", "starred_url": "https://api.github.com/users/etetteh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/etetteh/subscriptions", "organizations_url": "https://api.github.com/users/etetteh/orgs", "repos_url": "https://api.github.com/users/etetteh/repos", "events_url": "https://api.github.com/users/etetteh/events{/privacy}", "received_events_url": "https://api.github.com/users/etetteh/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "You can list the dataset's columns with `ds.column_names` before `.map` to check whether the dataset has an `image` column. If it doesn't, then this is a bug. Otherwise, please paste the line with the `.map` call.\r\n\r\n\r\n", "This is the piece of code I am running:\r\n```\r\ndata_transforms = utils.get_data_augmentation(args)\r\nimage_dataset = utils.load_image_dataset(args.dataset)\r\n\r\ndef resize(examples):\r\n examples[\"pixel_values\"] = [image.convert(\"RGB\").resize((300, 300)) for image in examples[\"image\"]]\r\n return examples\r\n\r\ndef preprocess_train(example_batch):\r\n print(f\"Example batch: \\n{example_batch}\")\r\n example_batch[\"pixel_values\"] = [\r\n data_transforms[\"train\"](image.convert(\"RGB\")) for image in example_batch[\"pixel_values\"]\r\n ]\r\n return example_batch\r\n\r\ndef preprocess_val(example_batch):\r\n example_batch[\"pixel_values\"] = [\r\n data_transforms[\"val\"](image.convert(\"RGB\")) for image in example_batch[\"pixel_values\"]\r\n ]\r\n return example_batch\r\n\r\nimage_dataset = image_dataset.map(resize, remove_columns=[\"image\"], batched=True)\r\n\r\nimage_dataset[\"train\"].set_transform(preprocess_train)\r\nimage_dataset[\"validation\"].set_transform(preprocess_val)\r\n```\r\n\r\nWhen I print ds.column_names I get the following\r\n`{'train': ['image', 'label'], 'validation': ['image', 'label'], 'test': ['image', 'label']}`\r\n\r\nThe `print(f\"Example batch: \\n{example_batch}\")` in the `preprocess_train` function outputs only labels without images:\r\n```\r\nExample batch: \r\n{'label': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3]}\r\n```\r\n\r\nThe weird part of it all is that a sample code runs in a jupyter lab notebook without any bugs, but when I run my scripts from the terminal I get the bug. The same code.", "The `remove_columns=[\"image\"]` argument in the `.map` call removes the `image` column from the output, so drop this argument to preserve it.", "The problem is not with the removal of the image key. The bug is why only the labels are sent to be process, instead of all the featues or dictionary keys.\r\n\r\nP.S. I just dropped the removal argument as you've suggested, but that didn't solve the problem, because only the labels are being sent to be processed" ]
2023-07-25T17:45:50
2023-07-26T15:18:51
null
NONE
null
null
null
### Describe the bug I've loaded a local image dataset with `ds = load_dataset("imagefolder", data_dir=path-to-data)` and defined a transform to process the data, following the Datasets docs. However, I get a `KeyError` indicating there's no "image" key in my dataset. When I printed out the example_batch sent to the transform function, it showed that only the labels are being sent to the function; for some reason, the images are not in the example batches. ### Steps to reproduce the bug I'm using the latest stable version of datasets. ### Expected behavior I expect the example batches to contain both images and labels. ### Environment info I'm using the latest stable version of datasets.
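For illustration, a minimal sketch of the flow from this thread, with placeholder paths and transforms; per the discussion above, check `ds.column_names` before mapping, and note that `remove_columns=["image"]` drops the image column from the mapped output:
```python
from datasets import load_dataset

# Placeholder path; "imagefolder" infers "image" and "label" columns.
ds = load_dataset("imagefolder", data_dir="path/to/data")
print(ds.column_names)  # e.g. {'train': ['image', 'label'], ...}

def resize(examples):
    # Batched map: build "pixel_values" from the "image" column.
    examples["pixel_values"] = [
        img.convert("RGB").resize((300, 300)) for img in examples["image"]
    ]
    return examples

ds = ds.map(resize, batched=True)  # no remove_columns, so "image" is kept

def preprocess_train(batch):
    # On-the-fly transform; with the map above, "pixel_values" exists here.
    batch["pixel_values"] = [img.convert("RGB") for img in batch["pixel_values"]]
    return batch

ds["train"].set_transform(preprocess_train)
```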
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6069/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6069/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6068
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6068/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6068/comments
https://api.github.com/repos/huggingface/datasets/issues/6068/events
https://github.com/huggingface/datasets/pull/6068
1,820,106,952
PR_kwDODunzps5WUkZi
6,068
fix tqdm lock deletion
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006573 / 0.011353 (-0.004780) | 0.004014 / 0.011008 (-0.006994) | 0.084999 / 0.038508 (0.046491) | 0.074965 / 0.023109 (0.051855) | 0.313294 / 0.275898 (0.037396) | 0.349678 / 0.323480 (0.026198) | 0.005401 / 0.007986 (-0.002585) | 0.003401 / 0.004328 (-0.000927) | 0.065363 / 0.004250 (0.061112) | 0.057159 / 0.037052 (0.020107) | 0.313260 / 0.258489 (0.054771) | 0.354654 / 0.293841 (0.060813) | 0.030895 / 0.128546 (-0.097651) | 0.008605 / 0.075646 (-0.067042) | 0.289190 / 0.419271 (-0.130081) | 0.052474 / 0.043533 (0.008942) | 0.316193 / 0.255139 (0.061054) | 0.339966 / 0.283200 (0.056767) | 0.024112 / 0.141683 (-0.117571) | 1.515606 / 1.452155 (0.063452) | 1.571428 / 1.492716 (0.078711) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203284 / 0.018006 (0.185278) | 0.452720 / 0.000490 (0.452230) | 0.003891 / 0.000200 (0.003691) | 0.000094 / 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028992 / 0.037411 (-0.008419) | 0.083170 / 0.014526 (0.068644) | 0.097739 / 0.176557 (-0.078817) | 0.153401 / 0.737135 (-0.583734) | 0.098628 / 0.296338 (-0.197711) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.390190 / 0.215209 (0.174981) | 3.901272 / 2.077655 (1.823617) | 
1.887194 / 1.504120 (0.383074) | 1.723696 / 1.541195 (0.182501) | 1.800537 / 1.468490 (0.332047) | 0.481758 / 4.584777 (-4.103019) | 3.605098 / 3.745712 (-0.140614) | 3.304482 / 5.269862 (-1.965380) | 2.053515 / 4.565676 (-2.512161) | 0.056997 / 0.424275 (-0.367278) | 0.007347 / 0.007607 (-0.000260) | 0.461367 / 0.226044 (0.235323) | 4.606152 / 2.268929 (2.337223) | 2.470048 / 55.444624 (-52.974576) | 2.060019 / 6.876477 (-4.816458) | 2.320507 / 2.142072 (0.178435) | 0.575050 / 4.805227 (-4.230178) | 0.133030 / 6.500664 (-6.367634) | 0.061508 / 0.075469 (-0.013962) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.275430 / 1.841788 (-0.566357) | 19.725453 / 8.074308 (11.651145) | 14.396360 / 10.191392 (4.204968) | 0.157980 / 0.680424 (-0.522443) | 0.018516 / 0.534201 (-0.515685) | 0.394717 / 0.579283 (-0.184566) | 0.404948 / 0.434364 (-0.029415) | 0.474001 / 0.540337 (-0.066336) | 0.668463 / 1.386936 (-0.718474) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006697 / 0.011353 (-0.004656) | 0.004206 / 0.011008 (-0.006802) | 0.065458 / 0.038508 (0.026950) | 0.075845 / 0.023109 (0.052735) | 0.365051 / 0.275898 (0.089153) | 0.400919 / 0.323480 (0.077439) | 0.005347 / 0.007986 (-0.002638) | 0.003386 / 0.004328 (-0.000943) | 0.065398 / 0.004250 (0.061148) | 0.057320 / 0.037052 (0.020268) | 0.379161 / 0.258489 (0.120672) | 0.406892 / 0.293841 (0.113051) | 0.031986 / 0.128546 (-0.096560) | 0.008674 / 0.075646 (-0.066972) | 0.071723 / 0.419271 (-0.347549) | 0.049897 / 0.043533 (0.006364) | 0.372034 / 0.255139 (0.116895) | 0.394293 / 0.283200 (0.111094) | 0.023681 / 0.141683 (-0.118002) | 1.479793 / 1.452155 (0.027639) | 1.553105 / 1.492716 (0.060389) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.233660 / 0.018006 (0.215654) | 0.454412 / 0.000490 (0.453923) | 0.004473 / 0.000200 (0.004273) | 0.000085 / 0.000054 (0.000030) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031115 / 0.037411 (-0.006296) | 0.090541 / 0.014526 (0.076015) | 0.104363 / 0.176557 (-0.072193) | 0.161022 / 0.737135 (-0.576114) | 0.105114 / 0.296338 (-0.191225) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.427668 / 0.215209 (0.212459) | 4.263145 / 2.077655 (2.185490) | 2.247043 / 1.504120 (0.742923) | 2.082554 / 1.541195 (0.541360) | 2.170505 / 1.468490 (0.702015) | 0.491802 / 4.584777 (-4.092975) | 3.587295 / 3.745712 (-0.158417) | 3.344697 / 5.269862 (-1.925165) | 2.060529 / 4.565676 (-2.505148) | 0.057829 / 0.424275 (-0.366446) | 0.007780 / 0.007607 (0.000173) | 0.503374 / 0.226044 (0.277330) | 5.034742 / 2.268929 (2.765814) | 2.701957 / 55.444624 (-52.742667) | 2.479002 / 6.876477 (-4.397474) | 2.622055 / 2.142072 (0.479982) | 0.591363 / 4.805227 (-4.213864) | 0.133834 / 6.500664 (-6.366830) | 0.062276 / 0.075469 (-0.013193) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.338788 / 1.841788 (-0.503000) | 20.333599 / 8.074308 (12.259291) | 14.783196 / 10.191392 (4.591804) | 0.168695 / 0.680424 (-0.511729) | 0.018478 / 0.534201 (-0.515723) | 0.397398 / 0.579283 (-0.181885) | 0.409900 / 0.434364 (-0.024464) | 0.475315 / 0.540337 (-0.065023) | 0.644267 / 1.386936 (-0.742669) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#cb0b324e0bae4c93bb5509b2f0731bc346adb21b \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007315 / 0.011353 (-0.004038) | 0.004294 / 0.011008 (-0.006714) | 0.100300 / 0.038508 (0.061792) | 0.077780 / 0.023109 (0.054670) | 0.353728 / 0.275898 (0.077830) | 0.400538 / 0.323480 (0.077058) | 0.005807 / 0.007986 (-0.002178) | 0.003649 / 0.004328 (-0.000680) | 0.077548 / 0.004250 (0.073297) | 0.058834 / 0.037052 (0.021781) | 0.352064 / 0.258489 (0.093574) | 0.399951 / 0.293841 (0.106110) | 0.036472 / 0.128546 (-0.092074) | 0.008653 / 0.075646 (-0.066994) | 0.323089 / 0.419271 (-0.096182) | 0.075127 / 0.043533 (0.031594) | 0.334412 / 0.255139 (0.079273) | 0.375718 / 0.283200 (0.092519) | 0.027915 / 0.141683 (-0.113768) | 1.698795 / 1.452155 (0.246640) | 1.781447 / 1.492716 (0.288730) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216111 / 0.018006 (0.198104) | 0.507706 / 0.000490 (0.507216) | 0.000851 / 0.000200 (0.000651) | 0.000085 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030451 / 0.037411 (-0.006960) | 0.087488 / 0.014526 (0.072962) | 0.105094 / 0.176557 (-0.071462) | 0.168130 / 0.737135 (-0.569006) | 0.106791 / 0.296338 (-0.189547) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426291 / 0.215209 (0.211082) | 4.281046 / 2.077655 (2.203391) | 2.162268 / 1.504120 (0.658148) | 1.909503 / 1.541195 (0.368309) | 1.943165 / 1.468490 (0.474675) | 0.516667 / 4.584777 (-4.068110) | 4.113218 / 3.745712 (0.367506) | 5.931372 / 5.269862 (0.661510) | 3.563521 / 4.565676 (-1.002155) | 0.062415 / 0.424275 (-0.361860) | 0.007577 / 0.007607 (-0.000030) | 0.534588 / 0.226044 (0.308543) | 5.183490 / 2.268929 (2.914561) | 2.790662 / 55.444624 (-52.653962) | 2.258630 / 6.876477 (-4.617846) | 2.499930 / 2.142072 (0.357857) | 0.606154 / 4.805227 (-4.199073) | 0.136093 / 6.500664 (-6.364571) | 0.061151 / 0.075469 (-0.014318) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.398392 / 1.841788 (-0.443396) | 21.482150 / 8.074308 (13.407842) | 15.477336 / 10.191392 (5.285944) | 0.192878 / 0.680424 (-0.487546) | 0.021764 / 0.534201 (-0.512437) | 0.437149 / 0.579283 (-0.142134) | 0.439976 / 0.434364 (0.005612) | 0.514498 / 0.540337 
(-0.025840) | 0.762642 / 1.386936 (-0.624294) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007504 / 0.011353 (-0.003849) | 0.004526 / 0.011008 (-0.006482) | 0.071008 / 0.038508 (0.032500) | 0.078305 / 0.023109 (0.055195) | 0.436160 / 0.275898 (0.160262) | 0.439048 / 0.323480 (0.115568) | 0.006061 / 0.007986 (-0.001925) | 0.003681 / 0.004328 (-0.000648) | 0.069445 / 0.004250 (0.065195) | 0.059258 / 0.037052 (0.022206) | 0.437745 / 0.258489 (0.179256) | 0.464247 / 0.293841 (0.170406) | 0.033286 / 0.128546 (-0.095260) | 0.009846 / 0.075646 (-0.065800) | 0.076330 / 0.419271 (-0.342941) | 0.051919 / 0.043533 (0.008386) | 0.432817 / 0.255139 (0.177678) | 0.426295 / 0.283200 (0.143095) | 0.029818 / 0.141683 (-0.111865) | 1.747640 / 1.452155 (0.295485) | 1.726653 / 1.492716 (0.233937) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.251253 / 0.018006 (0.233247) | 0.483394 / 0.000490 (0.482904) | 0.003992 / 0.000200 (0.003793) | 0.000096 / 0.000054 (0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032180 / 0.037411 (-0.005231) | 0.095425 / 0.014526 (0.080900) | 0.105908 / 0.176557 (-0.070648) | 0.164732 / 0.737135 (-0.572403) | 0.115903 / 0.296338 (-0.180435) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.469467 / 0.215209 (0.254258) | 4.633239 / 2.077655 (2.555584) | 2.517557 / 1.504120 (1.013437) | 2.352726 / 1.541195 (0.811531) | 2.314618 
/ 1.468490 (0.846128) | 0.548446 / 4.584777 (-4.036331) | 3.908797 / 3.745712 (0.163085) | 3.525941 / 5.269862 (-1.743921) | 2.178858 / 4.565676 (-2.386819) | 0.057614 / 0.424275 (-0.366661) | 0.008604 / 0.007607 (0.000997) | 0.554756 / 0.226044 (0.328711) | 5.325635 / 2.268929 (3.056706) | 3.014266 / 55.444624 (-52.430359) | 2.844165 / 6.876477 (-4.032312) | 2.903019 / 2.142072 (0.760947) | 0.617750 / 4.805227 (-4.187478) | 0.144259 / 6.500664 (-6.356405) | 0.065944 / 0.075469 (-0.009525) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.504625 / 1.841788 (-0.337163) | 22.400787 / 8.074308 (14.326479) | 15.223702 / 10.191392 (5.032310) | 0.213357 / 0.680424 (-0.467067) | 0.019310 / 0.534201 (-0.514891) | 0.456596 / 0.579283 (-0.122687) | 0.473811 / 0.434364 (0.039447) | 0.517800 / 0.540337 (-0.022537) | 0.792468 / 1.386936 (-0.594468) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#03750f4a4c664125c7de910be004710b181dd354 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007420 / 0.011353 (-0.003933) | 0.004502 / 0.011008 (-0.006506) | 0.097882 / 0.038508 (0.059374) | 0.079084 / 0.023109 (0.055975) | 0.361797 / 0.275898 (0.085899) | 0.416563 / 0.323480 (0.093083) | 0.006106 / 0.007986 (-0.001879) | 0.003803 / 0.004328 (-0.000526) | 0.074669 / 0.004250 (0.070418) | 0.062168 / 0.037052 (0.025116) | 0.378844 / 0.258489 (0.120355) | 0.426601 / 0.293841 (0.132760) | 0.035619 / 0.128546 (-0.092927) | 0.009686 / 0.075646 (-0.065960) | 0.336481 / 0.419271 (-0.082790) | 0.065553 / 0.043533 (0.022021) | 0.362501 / 0.255139 (0.107362) | 0.399752 / 0.283200 (0.116552) | 0.028685 / 0.141683 (-0.112998) | 1.683495 / 1.452155 (0.231340) | 1.786105 / 1.492716 (0.293388) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220792 / 0.018006 (0.202786) | 0.501936 / 0.000490 (0.501447) | 0.000389 / 
0.000200 (0.000189) | 0.000057 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032180 / 0.037411 (-0.005232) | 0.093079 / 0.014526 (0.078553) | 0.107967 / 0.176557 (-0.068589) | 0.171747 / 0.737135 (-0.565389) | 0.107920 / 0.296338 (-0.188418) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.444431 / 0.215209 (0.229222) | 4.454934 / 2.077655 (2.377279) | 2.140265 / 1.504120 (0.636145) | 1.960126 / 1.541195 (0.418931) | 2.049649 / 1.468490 (0.581158) | 0.557861 / 4.584777 (-4.026916) | 4.046240 / 3.745712 (0.300528) | 4.513748 / 5.269862 (-0.756114) | 2.593643 / 4.565676 (-1.972034) | 0.066795 / 0.424275 (-0.357480) | 0.008302 / 0.007607 (0.000694) | 0.535643 / 0.226044 (0.309599) | 5.299429 / 2.268929 (3.030500) | 2.656019 / 55.444624 (-52.788606) | 2.281214 / 6.876477 (-4.595263) | 2.302910 / 2.142072 (0.160837) | 0.661696 / 4.805227 (-4.143532) | 0.149787 / 6.500664 (-6.350877) | 0.069609 / 0.075469 (-0.005860) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.509842 / 1.841788 (-0.331946) | 21.717504 / 8.074308 (13.643196) | 15.825102 / 10.191392 (5.633710) | 0.168115 / 0.680424 (-0.512309) | 0.021637 / 0.534201 (-0.512564) | 0.454270 / 0.579283 (-0.125013) | 0.458531 / 0.434364 (0.024167) | 0.523052 / 0.540337 (-0.017285) | 0.711219 / 1.386936 (-0.675717) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007189 / 0.011353 (-0.004164) | 0.004437 / 0.011008 (-0.006571) | 0.075111 / 0.038508 (0.036603) | 0.079245 / 0.023109 (0.056136) | 0.423169 / 0.275898 (0.147270) | 0.455007 / 0.323480 (0.131527) | 0.006076 / 0.007986 (-0.001909) | 0.003819 / 0.004328 (-0.000509) | 0.074976 / 0.004250 (0.070726) | 0.062127 / 0.037052 (0.025075) | 0.456809 / 0.258489 (0.198320) | 0.474707 / 0.293841 (0.180867) | 0.036221 / 0.128546 (-0.092325) | 0.009428 / 0.075646 (-0.066218) | 0.082842 / 0.419271 (-0.336429) | 0.057086 / 0.043533 (0.013553) | 0.436121 / 0.255139 (0.180982) | 0.453934 / 0.283200 (0.170734) | 0.026045 / 0.141683 (-0.115638) | 1.789782 / 1.452155 (0.337627) | 1.820934 / 1.492716 (0.328218) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230790 / 0.018006 (0.212784) | 0.497987 / 0.000490 (0.497497) | 0.002775 / 0.000200 (0.002575) | 0.000093 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034418 / 0.037411 (-0.002994) | 0.105567 / 0.014526 (0.091041) | 0.113134 / 0.176557 (-0.063423) | 0.173742 / 0.737135 (-0.563394) | 0.115936 / 0.296338 (-0.180403) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.502259 / 0.215209 (0.287050) | 4.969877 / 2.077655 (2.892222) | 2.684860 / 1.504120 (1.180740) | 2.484386 / 1.541195 (0.943192) | 2.543061 / 1.468490 (1.074571) | 0.545733 / 4.584777 (-4.039044) | 4.029660 / 3.745712 (0.283948) | 5.927883 / 5.269862 (0.658021) | 3.528372 / 4.565676 (-1.037305) | 0.065957 / 0.424275 (-0.358318) | 0.008933 / 0.007607 (0.001326) | 0.601630 / 0.226044 (0.375585) | 5.825872 / 2.268929 (3.556944) | 3.230721 / 55.444624 (-52.213904) | 2.891308 / 6.876477 (-3.985169) | 3.054994 / 2.142072 (0.912922) | 0.665480 / 4.805227 (-4.139747) | 0.154815 / 6.500664 (-6.345849) | 0.072997 / 0.075469 (-0.002472) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.549892 / 1.841788 (-0.291896) | 22.337484 / 8.074308 (14.263176) | 16.308286 / 10.191392 (6.116894) | 0.189594 / 0.680424 (-0.490830) | 0.021844 / 0.534201 (-0.512357) | 0.456958 / 0.579283 (-0.122325) | 0.459957 / 0.434364 (0.025593) | 0.529014 / 0.540337 (-0.011323) | 0.700359 / 1.386936 (-0.686577) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#32e4df86b5fb0bc164433ce615af641ec3ba437e \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009050 / 0.011353 (-0.002303) | 0.004968 / 0.011008 (-0.006040) | 0.114315 / 0.038508 (0.075807) | 0.084475 / 0.023109 (0.061366) | 0.426325 / 0.275898 (0.150427) | 0.457870 / 0.323480 (0.134390) | 0.007076 / 0.007986 (-0.000910) | 0.004635 / 0.004328 (0.000307) | 0.082950 / 0.004250 (0.078700) | 0.065414 / 0.037052 (0.028361) | 0.441936 / 0.258489 (0.183447) | 0.476983 / 0.293841 (0.183142) | 0.048575 / 0.128546 (-0.079972) | 0.013929 / 0.075646 (-0.061717) | 0.377498 / 0.419271 (-0.041774) | 0.081503 / 0.043533 (0.037970) | 0.426706 / 0.255139 (0.171567) | 0.460374 / 0.283200 (0.177175) | 0.046052 / 0.141683 (-0.095631) | 1.894896 / 1.452155 (0.442741) | 1.998639 / 1.492716 (0.505923) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.313267 / 0.018006 (0.295261) | 0.607501 / 0.000490 (0.607012) | 0.003369 / 0.000200 (0.003169) | 0.000102 / 0.000054 (0.000047) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032266 / 0.037411 (-0.005145) | 0.120138 / 0.014526 (0.105613) | 0.115044 / 0.176557 (-0.061513) | 0.181374 / 0.737135 (-0.555761) | 0.114681 / 0.296338 (-0.181657) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.648039 / 0.215209 
(0.432830) | 6.005048 / 2.077655 (3.927394) | 2.674524 / 1.504120 (1.170404) | 2.284831 / 1.541195 (0.743637) | 2.360150 / 1.468490 (0.891660) | 0.888021 / 4.584777 (-3.696756) | 5.419840 / 3.745712 (1.674128) | 4.825816 / 5.269862 (-0.444046) | 3.140876 / 4.565676 (-1.424801) | 0.099511 / 0.424275 (-0.324764) | 0.009176 / 0.007607 (0.001569) | 0.735646 / 0.226044 (0.509602) | 7.224026 / 2.268929 (4.955097) | 3.551146 / 55.444624 (-51.893478) | 2.844374 / 6.876477 (-4.032103) | 3.145307 / 2.142072 (1.003235) | 1.077636 / 4.805227 (-3.727591) | 0.217754 / 6.500664 (-6.282910) | 0.081755 / 0.075469 (0.006286) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.670956 / 1.841788 (-0.170831) | 25.524961 / 8.074308 (17.450653) | 23.061596 / 10.191392 (12.870204) | 0.247524 / 0.680424 (-0.432899) | 0.031712 / 0.534201 (-0.502489) | 0.513049 / 0.579283 (-0.066234) | 0.614568 / 0.434364 (0.180204) | 0.574669 / 0.540337 (0.034331) | 0.816621 / 1.386936 (-0.570315) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009384 / 0.011353 (-0.001969) | 0.004959 / 0.011008 (-0.006049) | 0.084782 / 0.038508 (0.046274) | 0.098086 / 0.023109 (0.074977) | 0.544395 / 0.275898 (0.268497) | 0.585157 / 0.323480 (0.261677) | 0.006507 / 0.007986 (-0.001479) | 0.004151 / 0.004328 (-0.000178) | 0.088596 / 0.004250 (0.084345) | 0.069149 / 0.037052 (0.032097) | 0.533109 / 0.258489 (0.274620) | 0.604117 / 0.293841 (0.310276) | 0.047685 / 0.128546 (-0.080861) | 0.013651 / 0.075646 (-0.061996) | 0.096566 / 0.419271 (-0.322705) | 0.062022 / 0.043533 (0.018489) | 0.561897 / 0.255139 (0.306758) | 0.617636 / 0.283200 (0.334436) | 0.034636 / 0.141683 (-0.107047) | 1.854667 / 1.452155 (0.402512) | 1.908923 / 1.492716 (0.416207) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.260633 / 0.018006 (0.242627) | 0.622268 / 0.000490 (0.621778) | 0.002116 / 0.000200 (0.001916) | 0.000101 / 0.000054 
(0.000047) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035161 / 0.037411 (-0.002250) | 0.103707 / 0.014526 (0.089181) | 0.115467 / 0.176557 (-0.061090) | 0.180077 / 0.737135 (-0.557059) | 0.118871 / 0.296338 (-0.177467) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.628481 / 0.215209 (0.413271) | 6.304929 / 2.077655 (4.227275) | 3.027775 / 1.504120 (1.523655) | 2.753880 / 1.541195 (1.212686) | 2.820442 / 1.468490 (1.351952) | 0.851103 / 4.584777 (-3.733674) | 5.427383 / 3.745712 (1.681670) | 7.434310 / 5.269862 (2.164449) | 4.418790 / 4.565676 (-0.146887) | 0.101733 / 0.424275 (-0.322542) | 0.009701 / 0.007607 (0.002094) | 0.763033 / 0.226044 (0.536989) | 7.497927 / 2.268929 (5.228998) | 3.735335 / 55.444624 (-51.709290) | 3.149200 / 6.876477 (-3.727277) | 3.306214 / 2.142072 (1.164141) | 1.085440 / 4.805227 (-3.719787) | 0.207562 / 6.500664 (-6.293102) | 0.078091 / 0.075469 (0.002622) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.820097 / 1.841788 (-0.021691) | 25.525539 / 8.074308 (17.451231) | 21.874219 / 10.191392 (11.682827) | 0.228391 / 0.680424 (-0.452033) | 0.029584 / 0.534201 (-0.504617) | 0.511546 / 0.579283 (-0.067737) | 0.602719 / 0.434364 (0.168355) | 0.581874 / 0.540337 (0.041537) | 0.802861 / 1.386936 (-0.584075) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#6063ea2069c8b5641b983ba2c1d39b60afe7c00a \"CML watermark\")\n" ]
2023-07-25T11:17:25
2023-07-25T15:29:39
2023-07-25T15:17:50
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6068", "html_url": "https://github.com/huggingface/datasets/pull/6068", "diff_url": "https://github.com/huggingface/datasets/pull/6068.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6068.patch", "merged_at": "2023-07-25T15:17:50" }
related to https://github.com/huggingface/datasets/issues/6066
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6068/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6068/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6067
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6067/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6067/comments
https://api.github.com/repos/huggingface/datasets/issues/6067/events
https://github.com/huggingface/datasets/pull/6067
1,819,919,025
PR_kwDODunzps5WT7EQ
6,067
fix tqdm lock
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006578 / 0.011353 (-0.004775) | 0.003953 / 0.011008 (-0.007055) | 0.084417 / 0.038508 (0.045908) | 0.076729 / 0.023109 (0.053620) | 0.315369 / 0.275898 (0.039471) | 0.347012 / 0.323480 (0.023533) | 0.005299 / 0.007986 (-0.002686) | 0.003321 / 0.004328 (-0.001007) | 0.063954 / 0.004250 (0.059704) | 0.055810 / 0.037052 (0.018758) | 0.317651 / 0.258489 (0.059162) | 0.352603 / 0.293841 (0.058762) | 0.031355 / 0.128546 (-0.097192) | 0.008493 / 0.075646 (-0.067153) | 0.287295 / 0.419271 (-0.131977) | 0.052716 / 0.043533 (0.009183) | 0.316410 / 0.255139 (0.061271) | 0.328893 / 0.283200 (0.045693) | 0.024005 / 0.141683 (-0.117678) | 1.520333 / 1.452155 (0.068178) | 1.601268 / 1.492716 (0.108552) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.205144 / 0.018006 (0.187138) | 0.459160 / 0.000490 (0.458670) | 0.000321 / 0.000200 (0.000121) | 0.000054 / 0.000054 (-0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027503 / 0.037411 (-0.009908) | 0.081476 / 0.014526 (0.066950) | 0.096759 / 0.176557 (-0.079798) | 0.157888 / 0.737135 (-0.579247) | 0.094592 / 0.296338 (-0.201746) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.384762 / 0.215209 (0.169553) | 3.843503 / 2.077655 (1.765849) | 
1.921685 / 1.504120 (0.417565) | 1.752441 / 1.541195 (0.211246) | 1.822105 / 1.468490 (0.353615) | 0.480243 / 4.584777 (-4.104534) | 3.577220 / 3.745712 (-0.168492) | 5.047560 / 5.269862 (-0.222302) | 2.988008 / 4.565676 (-1.577669) | 0.056430 / 0.424275 (-0.367845) | 0.007180 / 0.007607 (-0.000427) | 0.458113 / 0.226044 (0.232069) | 4.584096 / 2.268929 (2.315168) | 2.395307 / 55.444624 (-53.049317) | 2.080530 / 6.876477 (-4.795947) | 2.239000 / 2.142072 (0.096927) | 0.575822 / 4.805227 (-4.229405) | 0.133303 / 6.500664 (-6.367361) | 0.059449 / 0.075469 (-0.016020) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.256496 / 1.841788 (-0.585291) | 19.651614 / 8.074308 (11.577306) | 14.232480 / 10.191392 (4.041088) | 0.146461 / 0.680424 (-0.533963) | 0.018632 / 0.534201 (-0.515569) | 0.399844 / 0.579283 (-0.179439) | 0.411225 / 0.434364 (-0.023139) | 0.458203 / 0.540337 (-0.082135) | 0.669916 / 1.386936 (-0.717020) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006463 / 0.011353 (-0.004890) | 0.003898 / 0.011008 (-0.007110) | 0.064037 / 0.038508 (0.025529) | 0.071982 / 0.023109 (0.048873) | 0.361936 / 0.275898 (0.086038) | 0.393165 / 0.323480 (0.069685) | 0.005207 / 0.007986 (-0.002779) | 0.003231 / 0.004328 (-0.001098) | 0.064318 / 0.004250 (0.060068) | 0.055776 / 0.037052 (0.018724) | 0.383087 / 0.258489 (0.124598) | 0.402428 / 0.293841 (0.108587) | 0.031587 / 0.128546 (-0.096959) | 0.008527 / 0.075646 (-0.067119) | 0.070495 / 0.419271 (-0.348777) | 0.048806 / 0.043533 (0.005273) | 0.369932 / 0.255139 (0.114793) | 0.385268 / 0.283200 (0.102068) | 0.023183 / 0.141683 (-0.118500) | 1.491175 / 1.452155 (0.039020) | 1.534191 / 1.492716 (0.041475) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224526 / 0.018006 (0.206520) | 0.445460 / 0.000490 (0.444970) | 0.003612 / 0.000200 (0.003412) | 0.000089 / 0.000054 (0.000034) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029829 / 0.037411 (-0.007583) | 0.087951 / 0.014526 (0.073425) | 0.100069 / 0.176557 (-0.076487) | 0.154944 / 0.737135 (-0.582192) | 0.101271 / 0.296338 (-0.195067) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.412385 / 0.215209 (0.197175) | 4.108038 / 2.077655 (2.030384) | 2.163578 / 1.504120 (0.659459) | 2.031934 / 1.541195 (0.490740) | 2.155857 / 1.468490 (0.687367) | 0.481132 / 4.584777 (-4.103645) | 3.620868 / 3.745712 (-0.124844) | 5.222175 / 5.269862 (-0.047687) | 3.115637 / 4.565676 (-1.450039) | 0.056480 / 0.424275 (-0.367795) | 0.007761 / 0.007607 (0.000154) | 0.483553 / 0.226044 (0.257509) | 4.830087 / 2.268929 (2.561159) | 2.629919 / 55.444624 (-52.814705) | 2.327551 / 6.876477 (-4.548926) | 2.539934 / 2.142072 (0.397861) | 0.587963 / 4.805227 (-4.217265) | 0.131085 / 6.500664 (-6.369579) | 0.060807 / 0.075469 (-0.014662) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.350003 / 1.841788 (-0.491785) | 19.491713 / 8.074308 (11.417405) | 14.030429 / 10.191392 (3.839037) | 0.174762 / 0.680424 (-0.505662) | 0.018523 / 0.534201 (-0.515678) | 0.394946 / 0.579283 (-0.184337) | 0.407652 / 0.434364 (-0.026712) | 0.465806 / 0.540337 (-0.074531) | 0.605417 / 1.386936 (-0.781519) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#cc85979df3a39657079fdf0844c7e64547507f1a \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006235 / 0.011353 (-0.005118) | 0.003675 / 0.011008 (-0.007333) | 0.080680 / 0.038508 (0.042171) | 0.064378 / 0.023109 (0.041268) | 0.394312 / 0.275898 (0.118414) | 0.428143 / 0.323480 (0.104663) | 0.004794 / 0.007986 (-0.003191) | 0.002899 / 0.004328 (-0.001429) | 0.062592 / 0.004250 (0.058342) | 0.050957 / 0.037052 (0.013904) | 0.396831 / 0.258489 (0.138342) | 0.438280 / 0.293841 (0.144439) | 0.027743 / 0.128546 (-0.100804) | 0.008068 / 0.075646 (-0.067578) | 0.262541 / 0.419271 (-0.156730) | 0.060837 / 0.043533 (0.017304) | 0.397941 / 0.255139 (0.142802) | 0.417012 / 0.283200 (0.133813) | 0.030153 / 0.141683 (-0.111530) | 1.477115 / 1.452155 (0.024960) | 1.516642 / 1.492716 (0.023926) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.178032 / 0.018006 (0.160026) | 0.445775 / 0.000490 (0.445286) | 0.004275 / 0.000200 (0.004075) | 0.000080 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025025 / 0.037411 (-0.012386) | 0.074113 / 0.014526 (0.059587) | 0.083814 / 0.176557 (-0.092743) | 0.148860 / 0.737135 (-0.588275) | 0.085408 / 0.296338 (-0.210931) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.393714 / 0.215209 (0.178505) | 3.936589 / 2.077655 (1.858934) | 1.910501 / 1.504120 (0.406381) | 1.729670 / 1.541195 (0.188475) | 1.777647 / 1.468490 (0.309156) | 0.499532 / 4.584777 (-4.085245) | 3.002385 / 3.745712 (-0.743327) | 2.906916 / 5.269862 (-2.362945) | 1.883321 / 4.565676 (-2.682356) | 0.057546 / 0.424275 (-0.366730) | 0.006492 / 0.007607 (-0.001115) | 0.463605 / 0.226044 (0.237560) | 4.620215 / 2.268929 (2.351287) | 2.399021 / 55.444624 (-53.045603) | 2.182962 / 6.876477 (-4.693514) | 2.357344 / 2.142072 (0.215272) | 0.583946 / 4.805227 (-4.221282) | 0.124644 / 6.500664 (-6.376021) | 0.060831 / 0.075469 (-0.014638) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.276412 / 1.841788 (-0.565375) | 18.462522 / 8.074308 (10.388214) | 13.877375 / 10.191392 (3.685983) | 0.150584 / 0.680424 (-0.529840) | 0.016675 / 0.534201 (-0.517526) | 0.331711 / 0.579283 (-0.247573) | 0.366659 / 0.434364 (-0.067705) | 0.396400 / 0.540337 
(-0.143938) | 0.555418 / 1.386936 (-0.831518) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005995 / 0.011353 (-0.005358) | 0.003610 / 0.011008 (-0.007399) | 0.061802 / 0.038508 (0.023294) | 0.059265 / 0.023109 (0.036156) | 0.392628 / 0.275898 (0.116730) | 0.413143 / 0.323480 (0.089663) | 0.004687 / 0.007986 (-0.003299) | 0.002843 / 0.004328 (-0.001486) | 0.061932 / 0.004250 (0.057682) | 0.049466 / 0.037052 (0.012413) | 0.402718 / 0.258489 (0.144229) | 0.415039 / 0.293841 (0.121198) | 0.027352 / 0.128546 (-0.101194) | 0.007965 / 0.075646 (-0.067682) | 0.067456 / 0.419271 (-0.351815) | 0.042336 / 0.043533 (-0.001196) | 0.405543 / 0.255139 (0.150404) | 0.403209 / 0.283200 (0.120010) | 0.021459 / 0.141683 (-0.120224) | 1.442861 / 1.452155 (-0.009293) | 1.491213 / 1.492716 (-0.001503) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.248225 / 0.018006 (0.230219) | 0.434174 / 0.000490 (0.433684) | 0.001973 / 0.000200 (0.001773) | 0.000070 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025475 / 0.037411 (-0.011936) | 0.077865 / 0.014526 (0.063339) | 0.086980 / 0.176557 (-0.089577) | 0.143682 / 0.737135 (-0.593453) | 0.088634 / 0.296338 (-0.207705) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417591 / 0.215209 (0.202382) | 4.168700 / 2.077655 (2.091045) | 2.335743 / 1.504120 (0.831623) | 2.208174 / 1.541195 (0.666980) | 
2.256658 / 1.468490 (0.788168) | 0.503164 / 4.584777 (-4.081613) | 3.026667 / 3.745712 (-0.719045) | 4.496675 / 5.269862 (-0.773187) | 2.741049 / 4.565676 (-1.824628) | 0.057781 / 0.424275 (-0.366494) | 0.006810 / 0.007607 (-0.000797) | 0.490803 / 0.226044 (0.264759) | 4.914369 / 2.268929 (2.645441) | 2.594250 / 55.444624 (-52.850375) | 2.274552 / 6.876477 (-4.601925) | 2.397529 / 2.142072 (0.255456) | 0.593008 / 4.805227 (-4.212220) | 0.126194 / 6.500664 (-6.374470) | 0.062261 / 0.075469 (-0.013208) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.357561 / 1.841788 (-0.484227) | 18.622995 / 8.074308 (10.548687) | 14.142569 / 10.191392 (3.951177) | 0.146527 / 0.680424 (-0.533897) | 0.016863 / 0.534201 (-0.517338) | 0.336219 / 0.579283 (-0.243064) | 0.348650 / 0.434364 (-0.085714) | 0.385958 / 0.540337 (-0.154380) | 0.517958 / 1.386936 (-0.868978) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f3da7a5a7d0d0415476ecebb0458e7c60df24445 \"CML watermark\")\n" ]
2023-07-25T09:32:16
2023-07-25T10:02:43
2023-07-25T09:54:12
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6067", "html_url": "https://github.com/huggingface/datasets/pull/6067", "diff_url": "https://github.com/huggingface/datasets/pull/6067.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6067.patch", "merged_at": "2023-07-25T09:54:12" }
close https://github.com/huggingface/datasets/issues/6066
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6067/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6067/timeline
null
null
true

Dataset Card for "github-issues"

This dataset is a dump of issues and pull requests from the huggingface/datasets GitHub repository, collected through the GitHub REST API (https://api.github.com/repos/huggingface/datasets). Each record carries the metadata returned by the API, as in the rows above: issue, comment, and event URLs, title, author profile, labels, state, created/updated/closed timestamps, reaction counts, the full comment thread (including the automated benchmark reports posted on pull requests), and, for records that wrap a pull request, a descriptor with diff and patch links. The remaining card sections still need to be filled in; two minimal usage sketches follow.
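A minimal consumption sketch, assuming the dump is published on the Hugging Face Hub (the "username/github-issues" repo id below is a placeholder) and that the trailing boolean on each record is exposed as an "is_pull_request" column:

```python
from datasets import load_dataset

# Placeholder repo id: substitute the actual namespace under which
# this dump is published on the Hub.
issues = load_dataset("username/github-issues", split="train")

# "is_pull_request" is the assumed name of the boolean flag that marks
# records wrapping a pull request rather than a plain issue.
prs = issues.filter(lambda row: row["is_pull_request"])
print(len(prs), "pull requests, e.g.:", prs[0]["title"])
```

The same pattern works for any of the boolean or state columns visible in the records above, for example keeping only closed items.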

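The created/updated/closed timestamps in each record (for example 2023-07-25T09:32:16 above) also support a quick time-to-close statistic. A sketch under two assumptions: the fields are named "created_at" and "closed_at", and they load either as ISO-8601 strings or as native datetime objects depending on how the dump was saved:

```python
from datetime import datetime

from datasets import load_dataset

issues = load_dataset("username/github-issues", split="train")  # placeholder repo id

def to_dt(value):
    # Timestamps may arrive as ISO-8601 strings or as datetime objects;
    # normalize both to datetime.
    if isinstance(value, datetime):
        return value
    return datetime.strptime(value, "%Y-%m-%dT%H:%M:%S")

# Keep only records that were actually closed, then average the gap
# between opening and closing, in hours.
hours = [
    (to_dt(row["closed_at"]) - to_dt(row["created_at"])).total_seconds() / 3600
    for row in issues
    if row["closed_at"] is not None
]
print(f"closed items: {len(hours)}, mean time to close: {sum(hours) / len(hours):.1f} h")
```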